Pull, don’t push!
Architectures for monitoring and configuration in a
microservices era
Julian Dunn, Director of Product Marketing, Chef
@julian_dunn
Fletcher Nichol, Senior Software Development Engineer, Chef
@fnichol
• Modular, self-contained, pre-fabricated components
• Neighbors share components
• The complex shares services as a whole
Orchestration
An ordered set of operations, across a set of independent machines, connected to an orchestrator only via a network.
Humans acting on Microsoft Visio acting on machines
Humans acting on code acting on machines
An ordered set of operations, defined in code, across a set of independent machines, connected to an orchestrator only via a network.
mylaptop:~$ ./disable-load-balancer.sh
mylaptop:~$ ssh db01 do-database-migration.sh
mylaptop:~$ for i in app01 app02; do
> ssh $i do-deployment.sh
> done
mylaptop:~$ ./enable-load-balancer.sh
Problems with Orchestration
• Resilience: deployment, operational
• Scalability: technical, cognitive
Deployment Resilience
for i in app01 app02 app03; do
  do-deploy.sh --server $i
done
Deployment Resilience
for i in app01 app02 app03; do
  do-deploy.sh --server $i
  if [ $? -ne 0 ]; then
    failed=$i
    break
  fi
done
# what goes down here?
# roll back $failed?
# roll back all others?
# ignore it?
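One way to make that failure branch concrete is to track what has already been deployed and undo it on the first error. This is only a sketch: do_deploy and do_rollback are hypothetical stand-ins (mocked here so the script runs, with app02 forced to fail), not real tooling.

```shell
#!/bin/sh
# Sketch: roll back everything deployed so far when one server fails.
# do_deploy/do_rollback are hypothetical stand-ins; this mock fails on app02.
do_deploy()   { [ "$1" != "app02" ]; }
do_rollback() { rolled_back="$rolled_back $1"; }

deployed=""
rolled_back=""
for i in app01 app02 app03; do
  if do_deploy "$i"; then
    deployed="$deployed $i"
  else
    # First failure: undo every server deployed before it, then stop.
    for d in $deployed; do do_rollback "$d"; done
    break
  fi
done
echo "rolled back:$rolled_back"   # here: app01 only
```

Even this simple policy leaves questions open (should the failed server itself be rolled back? should later servers still proceed?), which is exactly the complexity the slide is pointing at.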
Operational Resilience
Orchestration Backplane – must be up at all times!
Application Plane – delegates resilience to the backplane
Cognitive Scalability
Technical Scalability
[Timeline figure: Mainframes → Time Sharing → Client/Server → Web 1.0 → Web 2.0 → Cloud → Internet of Things → Edge, swinging between centralized and distributed over time]
The Future Is Distributed
Distributed Devices Need Distributed Management
• Adaptive learning
• Configuration updates
• Software updates
Distributed, Autonomous Systems
• Make progress towards promised desired state
• Expose interfaces to allow others to verify promises
• Can promise to take certain behaviors in the face of failure of others
The Design of Sensu
and
The Design of Habitat
The Design of Sensu vs. Traditional “Monitoring”
[Diagram: a Nagios master polls each agent (1. Poll / orchestrate), and the agents then run checks (2. Run checks); with Sensu, each agent runs its checks locally (1. Run checks) and posts the data to the Sensu Backend (2. Post data)]
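The agent-driven half of that picture can be sketched as a loop that runs checks locally and pushes results out. run_check and post_result are hypothetical stand-ins (mocked so the script runs), not the real Sensu agent, which schedules checks and ships results over its own transport.

```shell
#!/bin/sh
# Sketch of the agent-driven model: the agent decides when to run checks
# and then posts the results; nothing polls it. Helpers are mocks.
run_check()   { echo "check-cpu ok"; }
post_result() { posted="$posted|$1"; }   # stand-in for a POST to the backend

posted=""
for _ in 1 2 3; do          # the real agent runs on a schedule
  result=$(run_check)
  post_result "$result"
done
echo "$posted"
```

The inversion is the point: the backend never has to reach into the fleet, so an unreachable agent degrades only its own reporting, not the orchestration of everyone else's checks.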
Habitat supervisor in a nutshell
• Network-connected supervision system
• Like systemd + consul/etcd (process supervision with lifecycle hooks + shared state for reactive realtime change management)
• Eventually-consistent global state using the SWIM masterless (peer-to-peer) membership protocol
[Diagram: three sensu-backend services, each run by a hab-sup, form the service group backend.default; a sensu-agent under its own hab-sup forms the service group agent.default and is started with --bind sensu:backend.default]
--bind sensu:backend.default
Resolves the symbol “sensu” in configs to properties of the service group backend.default
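As an illustration of how the bind is consumed at render time, here is a sketch of a template fragment using Habitat's Handlebars-style helpers. The file name, the port, and the exact keys exported by the backend are assumptions for illustration, not the actual Sensu package's configuration.

```
# config/agent.toml – hypothetical template fragment consuming the "sensu" bind
{{#if bind.sensu}}
backend-url = "ws://{{bind.sensu.first.sys.ip}}:8081"
{{/if}}
```

If the bind is unsatisfied, the block renders nothing; as membership in backend.default changes, the supervisor can re-render the config and restart the service with the new values.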
Let’s See it in Action!
Demo: Sensu running under Habitat
• Modern architectures demand a choreographed rather than an orchestrated approach
• At scale, fleet management and cognitive complexity are the biggest problems
• Habitat and Sensu are both examples of edge-centric, autonomous actor systems, and they work well together
😺
Editor's Notes

  • #2: Fletcher and I were part of the original team that launched Habitat by Chef in 2016; I was the product manager and Fletcher was one of the lead engineers. We both have technical backgrounds, except that we do different jobs now. Fletcher’s computer boots into Linux and mine boots into PowerPoint.
  • #3: So this is a talk about architecture and systems design, and if we’re going to talk about architecture, maybe a good way to think about good architecture is via, well, actual architecture. One of the most famous buildings in the world is the Habitat 67 complex in Montreal, built, as you can see, for Expo 67, which was Canada’s 100th anniversary. Shout out, by the way, to the Canadians in the room, including Sean Porter, Sensu’s CTO; Fletcher and I are both Canadians, so we have to make a pitch for the Great White North anytime we’re up here. Universal health care! One year of paid maternity leave! Super-hot prime minister! OK, that’s enough of that. Anyway, Habitat 67 was such an iconic building that Canada Post put it on the stamp for Canada’s 150th anniversary last year.
  • #4: Here’s another picture, in its full glory. It probably would have actually used shipping containers today, but remember, TEU (standardized) containerization didn’t arrive until the late 1960s. But the components were standardized, as you can see from the middle versus the right. One unit’s roof is the other neighbor’s garden. Shopping, schools, and common services are built into the ground floor of each complex. These things sound a lot like software architectural principles: every component is responsible for its own resiliency (like Bezos’ infamous memo); components declare peer-to-peer level dependencies; all components share a base substrate of services and management (e.g. deployment, monitoring, observability, etc.)
  • #5: The Habitat 67 complex is actually quite large
  • #6: I wanted to put the big pictures up of Habitat 67 because, well, architecture starts to look a lot like architecture, right? These are visual diagrams (probably several years old) of microservice architectures at Amazon and Netflix. When you have complex systems this big, there are architectural patterns you’ll need to put in place to deal with it. Because when you get to something big and complex, your issue isn’t adding more to it – your issue becomes how do you manage this. Today’s talk which is really about how you design complex systems so that you can _manage_ them. It’s better to design systems with these characteristics built-in up front rather than to try and bolt them on later.
  • #7: Which brings me to the patterns of management for complex systems. Traditionally, we have and in many scenarios we continue to try and manage things using a centralized approach, which I call “orchestration”. So does everyone else, unfortunately, so let me define what I mean by this.
  • #9: IBM Cloud Orchestrator, HP Operations Orchestration, VMware vRealize Orchestrator
  • #11: But since I’m in the orchestration track I’d better try to define it so that I actually have a talk, right? Here is the definition I'll be using for the rest of the talk. And then I’m still going to tell you how and why that breaks down.
  • #12: This is a trivial example of orchestration. Last year I said I at least hope you’re doing your orchestration in code, if you’re doing orchestration, because this is pretty awful. And as you can see, it causes downtime because you need to wait for the previous thing to complete before you can proceed with the next one. You can add more fancy error checking and branching to orchestration to try and handle no-downtime deploys, but that orchestration gets really complicated – more complexity means more error conditions means more things that need to be handled.
  • #13: Resilience: Deployment, Operational. Scalability: Technical, Cognitive.
  • #14: Treating machines all connected via an unreliable network as an atomic unit to which updates must be applied in full, or not at all This *used* to work when you had a small fleet and/or your network was mostly reliable (e.g. on a LAN) - not so good in a cloud
  • #15: An atomic set that is assumed to succeed as a whole or not. What happens when it doesn't? A lot of complexity in failure conditions that need to be encapsulated and dealt with. Or more commonly, the approach is to drop this all off on the operator's lap and have them deal with it.
  • #16: Modern orchestration systems try to get around this fundamental issue by creating more disposability and just throwing away larger and larger parts of the infrastructure. The theory goes, let’s get the exact right “new” setup first, and then cut over to it. The problem is that while this mostly works, it is an incredibly complicated and slow way to make changes – you’re saying that for every config change or deployment I have to stand up a whole new production environment and cut over everything to it? For example, how do I do things like quiesce writes to a database? I think this creates more complexity even though the interfaces seem really attractive.
  • #17: Orchestration systems treat application components as dumb entities to be scheduled. Those entities don’t know about each other except through the orchestration system. This means that if components fail, they depend on the orchestration backplane (and here I’m picking on Kubernetes again) to manage their lifecycle. They also depend on the orchestration backplane to tell them where the other entities are (like where the database server is, if I’m the app server). The apps themselves are deliberately kept in the dark about their execution context.
  • #18: Now remember, we’re running in the cloud now – a place where machines and networks can go down at any time. And we’re trying to build reliable applications on top of that unreliable fabric.
  • #19: Now who does such a system design benefit? It only benefits the person or organization that is running the orchestration backplane – that is, if it’s external to the unreliable vagaries of the “cloud”. In other words, if it’s, say, a hosted service provided by your cloud vendor? Kubernetes and other orchestration systems soften you up for that approach so that when you run into the inherent resilience limitations, you outsource. Therefore I believe Google has never intended that you run a Kubernetes cluster on your own, but to buy it from someone (hopefully them) as a managed service. And don’t get me wrong, it’s an amazing business model, and, if you can offer your developers an experience on top of all this that’s just “push a container and it runs”, then that’s great. This is why there has been this Cambrian explosion of hosted Kubernetes solutions – because these vendors know that this architectural model locks you into building applications on their platform. When your app is operationally dumb and the backplane is operationally smart, they have your money forever.
  • #20: I don’t have that much to say about this one other than that orchestration systems or operations become really difficult to understand the more entities you’re trying to address. In particular because an orchestration activity (“play”) is intended to run to completion, atomically, trying to debug failures halfway through and figure out what to do is really hard. When things go wrong, it’s easier for the human brain to try and understand a small part of the system – where the fault is – rather than the entire global state. We know this with computer programming (“locality of reference”) and that’s why we have techniques like “information hiding” (i.e. abstracting logic).
  • #22: We used to show this slide as part of old Opscode training materials when I first started at Chef. I’m sure you’ve seen slides like this before, where we talk about the # of nodes running applications, etc, and how they grow over time. While this is all true, I think these graphs neglect one key thing, which is not that the *quantity* of machines increases over time, but the fact that systems as a whole tend towards becoming more *distributed*. By "distributed" I mean that more of the computing runs at the "edge" if you will and not in a centralized way.
  • #23: It’s not a straight line, though. <Talk through the build> Cloud: ML, databases, etc. – now starting to centralize more stuff into the cloud. The more that our systems become distributed, the less a centralized approach makes sense. This is true not only for data processing (why can’t it happen at the edge), but also to configuration updates and even software upgrades.
  • #24: https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3 TensorFlow, Keras, React Native. The first version was centralized – too much latency – so the final version runs an entire neural network on your phone.
  • #25: Nike HyperAdapt shoe. The number of devices continues to increase; machine learning, analytics, AI; latency becomes currency. At-scale problems will re-emerge just like they did with client/server and the web. Distributed devices need distributed management.
  • #26: Sounds a lot like where we started, with convergent configuration management and this guy, right? Everything old is new again.
  • #29: Using SWIM rather than something like RAFT, because SWIM is masterless
  • #30: This slide will be a build to show some of Habitat’s terminology, specifically:
    – Service group: contains one or more entities that share a configuration template and run the same workload; leaders and followers are in the same group; has a name.
    – Supervisors are responsible for [re-]writing configuration of the workload and restarting the process, possibly in coordination with other supervisors in that group.
    – Supervisors have a REST interface that allows you to modify their config: inject new configs as rumors into the network and they will be propagated. You can use any authorized supervisor as an entrypoint; it doesn’t have to be the group we care about.
    – External service groups can be subscribed to the configuration of this service group using binding.
    – Talk about the communication protocol across the fleet: the SWIM membership protocol/failure detector, with a gossip layer on top for distributed consensus. Because we get asked a lot of questions about the protocol: it is an implementation of SWIM+Infection+Suspicion for membership, and a ZeroMQ-based, newscast-inspired gossip protocol.
    – Goals: Eventually consistent – over a long enough time horizon, every living member will converge on the same state. Reasonably efficient – the protocol avoids any back-chatter; messages are sent but never confirmed. Reliable – as a building block, it should be safe and reliable to use.
  • #31: Config changes: injected into any peer, ACL is checked, and if accepted, gossiped around the network. No SPOF.