Brian Brazil
Founder
Evolution of Monitoring and Prometheus
Who am I?
Engineer passionate about running software reliably in production.
● One of the main developers of Prometheus
● Author of Prometheus: Up & Running
● Founder of Robust Perception
● Based in Dublin
Historical Roots
A lot of what we do today for monitoring is based on tools and techniques that
were awesome decades ago.
A lot of our practices still come from that time, and often we are still feeding the
machine with human blood.
Treating systems as pets rather than cattle.
A little history - MRTG and RRD
In 1994 Tobias Oetiker created a Perl script, which became MRTG 1.0 in 1995.
Used to graph metrics from SNMP or external programs. Stored data in a constantly
rewritten ASCII file.
MRTG 2.0 moved some code to C, released 1997.
RRD started in 1997 by Tobias Oetiker to further improve performance, released
1999.
Many tools use/used RRD, e.g. Graphite and Munin.
A little history - MRTG and RRD
Source: Wikipedia
A little history - Nagios
Initially written by Ethan Galstad in 1996 as an MS-DOS application to do pings.
Started more properly in 1998, first release in 1999 as NetSaint (renamed in 2002
for legal reasons).
Runs scripts on a regular basis, and sends alerts based on their exit code.
Many projects were inspired by or based on Nagios, such as Icinga, Sensu and Zmon.
A little history - Nagios
Source: Wikipedia
Historical Heritage
These very popular tools and their offspring left us with a world where graphing is
whitebox and alerting is blackbox, and they are separate concerns.
They come from a world where machines are pets, and services tend to live on
one machine.
They come from a world where even slight deviance would be immediately
jumped upon by heroic engineers in a NOC.
We need a new perspective in a cloud native environment.
What is Different Now?
It's no longer one service on one machine that will live there for years.
Services are dynamically assigned to machines, and can be moved around on an
hourly basis.
Microservices rather than monoliths mean more services created more often.
More dynamic, more churn, more to monitor.
So what should monitoring be?
We need a view that goes beyond alerting, graphing and jumping on every
potential problem.
We need to consider additional data sources, such as logs and browser events.
What about looking at the problem statement rather than what the tools can give
us?
What is the ultimate goal of all this "monitoring"?
Why do we monitor?
● Know when things go wrong
● Be able to debug and gain insight
● Trending to see changes over time
● Plumbing data to other systems/processes
Knowing when things go wrong
The first thing many people think of when you say monitoring is alerting.
What is the wrongness we want to detect and alert on?
A blip with no real consequence, or a latency issue affecting users?
Symptoms vs Causes
Humans are limited in what they can handle.
If you alert on every single thing that might be a problem, you'll get overwhelmed
and suffer from alert fatigue.
Key problem: you care about things like user-facing latency. There are hundreds
of things that could cause that.
Alerting on every possible cause is a Sisyphean task, but alerting on the symptom
of high latency is just one alert.
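As a concrete sketch in Python (the server address and the http_request_duration_seconds histogram are assumptions for illustration, and in practice this would be a Prometheus alerting rule rather than a hand-rolled script), one symptom check on user-facing latency can stand in for hundreds of cause checks:

import requests

PROM_URL = "http://prometheus.example.com:9090"  # hypothetical server address

# 95th percentile request latency over the last 5 minutes, across all instances.
SYMPTOM_QUERY = (
    "histogram_quantile(0.95, "
    "sum by (le) (rate(http_request_duration_seconds_bucket[5m])))"
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": SYMPTOM_QUERY})
result = resp.json()["data"]["result"]

# One alert on the symptom, instead of one alert per possible cause.
if result and float(result[0]["value"][1]) > 0.5:
    print("ALERT: p95 user-facing latency is above 500ms")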
Example: CPU usage
Some monitoring systems don't allow you to alert on the latency of your servers.
The closest you can get is CPU usage.
False positives due to e.g. logrotate running too long.
False negatives due to deadlocks.
End result: Spammy alerts which operators learn to ignore, missing real problems.
Human attention is limited
Alerts should require intelligent human action!
Alerts should relate to actual end-user problems!
Your users don't care if a machine has a load of 4.
They care if they can't view their cat videos.
Debugging to Gain Insight
After you receive an alert notification you need to investigate it.
How do you work from a high level symptom alert such as increased latency?
You methodically drill down through your stack with dashboards to find the
subsystem that's the cause.
Break out more tools as you drill down into the suspect process.
Complementary Debugging Tools
Trending and Reporting
Alerting and debugging is short term.
Trending is medium to long term.
How is cache hit rate changing over time?
Is anyone still using that obscure feature?
When will I need more machines?
Plumbing
When all you have is a hammer, everything starts to look like a nail.
Often it'd be really convenient to use a monitoring system as a data transport as
part of some other process (often a control loop of some form).
This isn't monitoring, but it's going to happen.
If it's ~free, it's not necessarily even a bad idea.
How do we monitor?
Now that we know the three general goals of monitoring (plus plumbing), how do we
go about it?
What data do we collect? How do we process it?
What tradeoffs do we make?
Monitoring resources aren't free, so collecting everything all the time is rarely an option.
The core is the Event
Events are what monitoring systems work off.
An event might be an HTTP request coming in, a packet being sent out, a library
call being made or a blackbox probe failing.
Events have context, such as the customer ultimately making the request, which
machines are involved, and how much data is being processed.
A single event might accumulate thousands of pieces of context as it goes through
the system, and there can be millions of events per second.
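As a small illustration in Python (all field names invented for the example), an event and its accumulated context might look like this:

# One event: an HTTP request coming in, with context attached along the way.
event = {
    "type": "http_request",
    "path": "/myendpoint",
    "customer": "mr_foo",       # who ultimately made the request
    "machine": "web-17",        # which machine is involved
    "bytes_processed": 7892,    # how much data is being processed
}
# Each subsystem the event passes through can add more context.
event["cache_hit"] = True
event["backend_latency_ms"] = 23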
How do we monitor?
We're going to look at 4 classes of approaches to monitoring.
● Profiling
● Metrics
● Logs
● Distributed Tracing
When someone says "monitoring" they often have one of these stuck in their
head, however they're all complementary with different tradeoffs.
Profiling
Profiling is using tools like tcpdump, gdb, strace, dtrace, BPF and pretty much
anything Brendan Gregg talks about.
They give you very detailed information about individual events.
So detailed that you can't keep it all, so their use must be targeted and temporary.
Great for debugging if you have an idea of what's wrong, not so much for anything else.
Metrics
Metrics make the tradeoff that we're going to ignore individual events, but track
how often particular contexts show up. Examples: Prometheus, Graphite.
For example you wouldn't track the exact time each HTTP request came in, but
you do track enough to tell that there were 3594 in the past minute.
And 14 of them failed. And 85 were for the login subsystem. And 5.4MB was
transferred. With 2023 cache hits.
And one of those requests hit a weird code path everyone has forgotten about.
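A minimal sketch using the Python prometheus_client library (metric and label names are chosen for the example): individual requests are not stored, only counts of how often each context shows up.

from prometheus_client import Counter, start_http_server

REQUESTS = Counter("http_requests_total", "HTTP requests handled.",
                   ["subsystem", "status"])

def handle_request(subsystem):
    # ... actually serve the request ...
    REQUESTS.labels(subsystem=subsystem, status="200").inc()

start_http_server(8000)   # expose /metrics for Prometheus to scrape
handle_request("login")   # only the running counts survive, not the event itself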
Metric Tradeoffs
Metrics give you breadth.
You can easily track tens of thousands of metrics, but you can't break them out by
too many dimensions. E.g. breaking out metrics by email address is rarely a good
idea.
Great for figuring out what's generally going on, writing meaningful alerts,
narrowing down the scope of what needs debugging.
Not so great for tracking individual events.
Logs
Logs take the opposite approach to metrics.
They track individual events. So you can tell that Mr. Foo visited /myendpoint
yesterday at 7pm and received a 7892 byte response with status code 200.
The downside is that you're limited in how many fields you can practically track,
likely fewer than 100.
The data volumes involved can also mean it takes some time for data to be available.
Examples: ELK stack, Graylog
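A minimal sketch of the logging approach in Python (field names are illustrative): each individual event is written out with its own handful of fields.

import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_request(user, path, status, resp_bytes):
    # One structured log line per individual event.
    logging.info(json.dumps({
        "ts": time.time(),
        "user": user,
        "path": path,
        "status": status,
        "bytes": resp_bytes,
    }))

log_request("Mr. Foo", "/myendpoint", 200, 7892)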
Distributed Tracing
Distributed tracing is really a special case of logging.
It gives each request a unique ID that is logged as the request is handled
through various services in your architecture.
It ties these logs back together to see how the request flowed through the system,
and where time was spent.
Essential for debugging large-scale distributed systems.
Examples: OpenTracing, OpenZipkin
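The core mechanism can be sketched in a few lines of Python (no real tracing library here, just the idea): an ID minted at the edge is passed downstream in a header and included in every log line, so the lines can be stitched back together afterwards.

import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")

def frontend(headers):
    # Mint an ID at the edge, or reuse one that was passed in.
    trace_id = headers.get("X-Trace-Id") or uuid.uuid4().hex
    logging.info(f"trace={trace_id} service=frontend event=request_received")
    backend({"X-Trace-Id": trace_id})  # propagate the ID downstream

def backend(headers):
    trace_id = headers["X-Trace-Id"]
    logging.info(f"trace={trace_id} service=backend event=query_handled")

frontend({})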
Distributed Tracing
Source: OpenZipkin
Where does Prometheus fit in?
Prometheus is a metrics-based monitoring system.
It tracks overall statistics over time, not individual events.
It has a Time Series DataBase (TSDB) at its core.
Powerful Data Model and Query Language
All metrics have arbitrary multi-dimensional labels.
Supports any double value with millisecond resolution timestamps.
Can multiply, add, aggregate, join, predict, take quantiles across many metrics in
the same query. Can evaluate right now, and graph back in time.
Can alert on any query.
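A few illustrative PromQL expressions (metric names follow common conventions rather than anything from these slides), each usable ad hoc, in a dashboard, or as an alert expression:

# PromQL expression strings, e.g. for the /api/v1/query HTTP API or the
# "expr" field of an alerting rule.
EXAMPLE_QUERIES = {
    # Aggregate: total request rate per job, summed across instances.
    "rate_by_job": "sum by (job) (rate(http_requests_total[5m]))",
    # Join two metrics: overall error ratio.
    "error_ratio": ("sum(rate(http_requests_failed_total[5m])) / "
                    "sum(rate(http_requests_total[5m]))"),
    # Predict: will any filesystem fill up within four hours?
    "disk_full_soon": "predict_linear(node_filesystem_avail_bytes[1h], 4 * 3600) < 0",
}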
Prometheus and the Cloud
Dynamic environments mean that new application instances continuously appear
and disappear.
Service Discovery can automatically detect these changes, and monitor all the
current instances.
Even better, because Prometheus is pull-based we can tell the difference between an
instance being down and an instance being turned off on purpose!
Heterogeneity
Not all Cloud VMs are equal.
Noisy neighbours mean different application instances have different performance.
Alerting on individual instance latency would be spammy.
But PromQL can aggregate latency across instances, allowing you to alert on
overall end-user visible latency rather than outliers.
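A sketch of that aggregation (assuming the usual _sum/_count naming convention for a latency metric): averaging across every instance gives the latency users actually see, while keeping the instance label would give the spammy per-instance version.

# Average request latency across all instances over the last 5 minutes.
OVERALL_LATENCY = (
    "sum(rate(http_request_duration_seconds_sum[5m])) / "
    "sum(rate(http_request_duration_seconds_count[5m]))"
)
# Aggregating "by (instance)" instead would alert on individual outliers.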
Reliability is Key
Core Prometheus server is a single binary.
Each Prometheus server is independent; it relies only on local SSD.
No clustering or attempts to backfill "missing" data when scrapes fail. Such
approaches are difficult/impossible to get right, and often cause the type of
outages you're trying to prevent.
Option for remote storage for long term storage.
Dashboards with Grafana
Prometheus History - 1
Prometheus was started in 2012 by Matt Proud and Julius Volz in Berlin.
In 2013 developed within SoundCloud, expanded to support Bazooka (cluster
manager/scheduler), Go, Java and Ruby clients.
In 2014 other companies start using it, including myself working at Boxever.
Project matures: new v2 storage by Beorn, new text format.
In January 2015 we "publicly release", adoption increases.
Prometheus History - 2
In May 2016 Prometheus is 2nd project to join the CNCF.
In July 2016 Prometheus releases 1.0.
In September 2016, first PromCon in Berlin.
In early 2017, Fabian starts work on a new TSDB.
In November 2017, Prometheus releases 2.0.
In August 2018, Prometheus is 2nd project to graduate within the CNCF. This is
announced at the 3rd PromCon.
Community Growth
2016: 100+ users, 250+ contributors, 35+ integrations
2017: 500+ users, 300+ contributors, 100+ integrations, 600+ on lists
2018: 10k+ users, 900+ contributors, 300+ integrations, 1.2k+ on lists
Prometheus: The Book
Released in August 2018 with O'Reilly, 386 pages.
Core content was written over the course of 61 days.
Copies to give out today!
Questions?
Book: https://www.amazon.co.uk/dp/1492034142
Prometheus: prometheus.io
Demo: demo.robustperception.io
My Company Website: www.robustperception.io