Scaling Workshop
Provisioning and Capacity Planning
Brian Brazil
Founder
Who am I?
Engineer passionate about running software reliably in production.
● TCD CS Degree
● Google SRE for 7 years, working on high-scale reliable systems such as
AdWords, AdSense, Ad Exchange, Billing, Database
● Boxever TL Systems & Infrastructure, applied processes and technology to
allow the company to scale and reduce operational load
● Contributor to many open source projects, including Prometheus, Ansible,
Python, Aurora and Zookeeper.
● Founder of Robust Perception, making scalability and efficiency available to
everyone
Goals
At the end of the workshop you will be able to:
● Estimate how much spare capacity you have in less than 5 minutes
● Estimate how much runway that capacity provides
● Determine how many machines you need
● Spot common potential problems as you scale
This should set you up for your first 1-2 years, if not more
Audience
This is an introductory workshop to teach you the basics.
Your company:
● Uses Unix in production
● Has a relatively simple setup/small number of machines
● Operations primarily performed by developers
● Performance has not been a primary consideration in your product
I’m also going to focus on webservices-type systems rather than offline processing
or batch.
Capacity
Estimate your capacity in 3 easy steps!
1. Measure bottleneck resource at peak traffic
2. Divide to get fraction of limit
3. Multiply by peak traffic
Estimate your capacity in 3 not so easy steps!
1. What’s your bottleneck? How do you measure it?
2. What’s your bottleneck’s limit?
3. What’s your peak traffic?
Step 1: What’s the bottleneck?
The most common bottlenecks:
1. CPU
2. Disk I/O
Less common: network, disk space, external resources, quotas, hardcoded limits,
contention/locking, memory, file descriptors, port numbers, humans
Step 1: Where’s the bottleneck?
Look at CPU % and Disk I/O Utilisation on each type of machine.
If you have monitoring, use that.
Failing that:
sudo apt-get install sysstat
iostat -x 5
Step 1: Iostat
avg-cpu: %user %nice %system %iowait %steal %idle
4.24 0.00 1.18 0.98 0.00 93.60
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 1.40 0.00 3.80 0.00 45.20 23.79 0.00 1.05 0.00 1.05 0.84 0.32
sdb 0.00 1.40 0.00 21.00 0.00 267.20 25.45 0.09 4.11 0.00 4.11 4.11 8.64
sdc 0.00 1.40 0.00 20.00 0.00 267.20 26.72 0.06 3.24 0.00 3.24 3.24 6.48
md0 0.00 0.00 0.00 2.00 0.00 8.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00
The numbers you care about are %idle and %util.
%idle is the fraction of CPU time not in use. %util is the fraction of time the disk
was busy doing I/O; take the highest %util across your disks.
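The extraction above can be automated. This is a minimal sketch (the helper name `parse_iostat` is my own, not part of sysstat) that pulls %idle and the busiest disk's %util out of `iostat -x` output in the column layout shown on the slide:

```python
# Sketch: extract %idle (CPU) and the busiest disk's %util from
# `iostat -x` output. Field positions assume the sysstat layout
# shown on the slide; other versions may differ.

SAMPLE = """\
avg-cpu:  %user %nice %system %iowait %steal %idle
           4.24  0.00    1.18   0.98   0.00 93.60

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 1.40 0.00 3.80 0.00 45.20 23.79 0.00 1.05 0.00 1.05 0.84 0.32
sdb 0.00 1.40 0.00 21.00 0.00 267.20 25.45 0.09 4.11 0.00 4.11 4.11 8.64
sdc 0.00 1.40 0.00 20.00 0.00 267.20 26.72 0.06 3.24 0.00 3.24 3.24 6.48
md0 0.00 0.00 0.00 2.00 0.00 8.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00
"""

def parse_iostat(text):
    """Return (%idle, max %util across disks) from iostat -x text."""
    lines = [l for l in text.splitlines() if l.strip()]
    idle, utils, in_devices = None, [], False
    for i, line in enumerate(lines):
        if line.startswith("avg-cpu:"):
            idle = float(lines[i + 1].split()[-1])  # %idle is the last CPU column
        elif line.startswith("Device:"):
            in_devices = True
        elif in_devices:
            utils.append(float(line.split()[-1]))   # %util is the last column
    return idle, max(utils)
```

On the sample above this reports 93.60% idle and 8.64% util (sdb), matching the reading on the slide.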
Step 2: What’s the limit?
We now know the CPU and disk I/O usage on each machine at peak.
Which is the bottleneck though?
Need to know the limit. Rules of thumb:
● 80% limit for CPU
● 50% limit for Disk I/O
Step 2: Division
Find how full each CPU and disk is.
Say we had a disk 10% utilised, and a CPU 20% utilised (80% idle).
0.1/0.5 = 0.2 => Disk IO is at 20% of limit
0.2/0.8 = 0.25 => CPU is at 25% of limit
CPU is our bottleneck, with 25% of capacity used.
Step 2: Utilisation Visualisation
Step 3: Peak traffic
Now that we know how full our bottleneck is, we need to know how much capacity
we have.
Figure out how much traffic you were handling around the time you measured
CPU and disk utilisation.
You might do this via monitoring, by parsing logs, or, if you’re really stuck, with tcpdump.
Step 3: The 2nd division
Let’s say our queries per second (qps) was 10 around peak.
Our CPU was our bottleneck, and about 25% of our limit.
10/0.25 = 40qps
So we can currently handle a maximum traffic of around 40qps
Step 3: Capacity Visualisation
Now you can estimate your capacity in 3 easy steps!
1. Measure bottleneck resource at peak traffic
○ Use monitoring or iostat to see how close you are to the limit, say 20% full
2. Divide to get fraction of limit
○ With a limit of 80% for CPU, you’re 20/80 = 25% full
3. Multiply by peak traffic
○ Traffic was 10qps, so 10/0.25 = 40qps capacity
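The three steps above are a couple of divisions, sketched here (the function name `spare_capacity` is mine, for illustration):

```python
def spare_capacity(bottleneck_pct, limit_pct, peak_qps):
    """Fraction of limit used, and total capacity at the bottleneck."""
    fraction = bottleneck_pct / limit_pct   # e.g. 20% CPU against an 80% limit
    capacity = peak_qps / fraction          # scale peak traffic up to the limit
    return fraction, capacity

fraction, capacity = spare_capacity(20, 80, 10)
# Worked example from the slides: 25% of the limit used, 40 qps capacity.
```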
Runway
How much runway do you have?
You now have a rough idea of how much capacity you have to spare.
In the example here, we’re using 10qps out of 40qps capacity.
How long will that 30qps last you?
The two main factors are new customers and organic growth.
New Customers
New customers/partners are your main source of traffic.
Look at your traffic graphs around the time a new customer started using your
system.
If the customer had say 1M users and you saw 10qps increased peak traffic, you
can now predict how much traffic future customers will need.
Based on sales predictions, you can tell how much capacity you’ll need for new
customers.
Organic growth
Over time your existing customers/partners will use the system more and more,
new employees are hired, they get new customers etc.
Look at your monitoring’s traffic graphs over a few months to see what the trend is
like. Do your best to ignore the impact of launches.
Calculate your % growth month on month.
Starting out, it’s likely that organic growth will not be your main consideration.
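As a sketch of that month-on-month calculation (the helper name and the sample figures are hypothetical, assuming you have a list of monthly peak qps readings from your monitoring):

```python
def mom_growth(monthly_peaks):
    """Average month-on-month growth rate from monthly peak qps figures."""
    ratios = [b / a for a, b in zip(monthly_peaks, monthly_peaks[1:])]
    return sum(ratios) / len(ratios) - 1

# Hypothetical readings: 10, 11, 12.1 qps over three months -> ~10% per month.
growth = mom_growth([10, 11, 12.1])
```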
Calculating runway
Once again in the example here, we’re using 10qps out of 40qps capacity.
Each 1M user customer generates 10qps of additional traffic.
You also expect a negligible amount of organic growth.
This means you can handle 3M more users worth of new customers.
If you’re signing up one 1M user customer per month, that gives you 3 months.
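That runway arithmetic, as a sketch (the function name `runway_months` is my own; organic growth is assumed negligible, as in the example):

```python
def runway_months(capacity_qps, current_qps, qps_per_customer, customers_per_month):
    """Months until new-customer traffic consumes your spare capacity."""
    spare = capacity_qps - current_qps
    return spare / (qps_per_customer * customers_per_month)

# Worked example: 40 qps capacity, 10 qps used, 10 qps per 1M-user customer,
# one such customer signed per month -> 3 months of runway.
months = runway_months(40, 10, 10, 1)
```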
Provisioning
Provisioning vs Capacity Planning
Capacity Planning:
In 6 months I will have 7 new customers, and need to be able to handle 100qps in
total
Provisioning:
To handle 100qps I need X frontends and Y databases
Provisioning: What can a machine handle?
Continuing our example, let’s say we had 4 machines and each reported being at
CPU 20% (25% of the 80% limit) while dealing with 10qps each.
The key metric is qps per machine.
10qps / 0.2 = 50qps/machine at full CPU
We can only safely use 80% of the machine, so 50 * 0.8 = 40qps.
So we can handle 40qps per machine.
Provisioning: How many machines do I need?
If we want to handle 100qps, we need 100/40 = 2.5 machines. So 3 machines.
For each type of machine, calculate the incoming external qps it can handle and
how many you need.
Don’t fret about $10/month worth of cost, it’s not worth your time.
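The per-machine and machine-count calculations above can be sketched like this (function names are mine, for illustration; the 80% CPU limit is the rule of thumb from earlier):

```python
import math

def qps_per_machine(qps_each, cpu_utilisation, cpu_limit=0.8):
    """Safe per-machine throughput: scale observed qps up to the CPU limit."""
    return qps_each / cpu_utilisation * cpu_limit   # 10 / 0.2 * 0.8 = 40

def machines_needed(target_qps, per_machine):
    """Round up: you can't run half a machine."""
    return math.ceil(target_qps / per_machine)

# Worked example: 10 qps per machine at 20% CPU -> 40 qps safely per machine;
# 100 qps target -> 3 machines.
per_machine = qps_per_machine(10, 0.2)
count = machines_needed(100, 40)
```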
Provisioning: Visualisation
Review: The Basics
● Estimating capacity:
○ Measure bottleneck at peak
○ Find how near bottleneck is to the limit
○ Calculate spare capacity based on peak traffic
● Keep an eye on new customers/partners and organic growth to track runway
● For provisioning, calculate qps/machine for each type of machine
Life is not Basic
A few wrinkles
I’ve glossed over a lot of detail so you can go away from today’s workshop with
something you can immediately use.
Some questions ye may have:
● Why measure at peak traffic?
● What if I don’t have much traffic?
● Why 80% limit on CPU and 50% on disk?
● What if a machine fails?
● What if things aren’t that simple?
● Doesn’t autoscaling take care of all this for me?
Why measure at peak traffic?
As your utilisation increases:
● Latency increases
● Performance decreases
In addition, skew from a constant background of CPU usage is reduced.
Measuring at peak helps allow for these factors.
Beware the knee.
What if I don’t have much traffic?
If you don’t have enough traffic to show up in top or iotop, then these techniques
won’t help you much.
You could loadtest, but that takes time. Or use rules of thumb.
Easier way: Use latency to estimate throughput.
If your queries take 10ms, then you can probably handle 100/s
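That rule of thumb is essentially Little's law: throughput ≈ concurrency / latency. A minimal sketch (the function name is mine, assuming one request in flight):

```python
def throughput_from_latency(latency_seconds, concurrency=1):
    """Rough throughput estimate from observed query latency.

    With one request in flight and 10 ms queries, roughly 100 qps.
    More in-flight requests scale the estimate linearly, until you
    hit an actual resource bottleneck.
    """
    return concurrency / latency_seconds
```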
Why 80% limit on CPU and 50% on disk?
For CPU, the utilisation/latency curve means you want to avoid running at too high
a utilisation.
If you have the CPU to yourself 90-95% is safe in a controlled environment with
good loadtesting. This is uncommon, so leave safety margin for OS processes etc.
For spinning disks the impact of utilisation tends to be more problematic, and
background tasks tend to use a lot of disk.
What if a machine fails?
You generally should add 2 extra machines beyond what you need to serve peak
qps. This is commonly known as “n+2”.
This is to allow for one machine failure, and to let you take down a machine to
push a new binary, perform maintenance or whatever.
This also gives you some slack in your capacity. As you grow, more sophisticated
math is required.
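Putting n+2 together with the earlier machine count (the function name `provision` is mine, for illustration):

```python
import math

def provision(peak_qps, per_machine_qps, redundancy=2):
    """n+2: enough machines for peak traffic, plus two spares
    to cover one failure and one machine down for maintenance."""
    return math.ceil(peak_qps / per_machine_qps) + redundancy

# Worked example: 100 qps at 40 qps/machine -> 3 serving + 2 spare = 5 machines.
total = provision(100, 40)
```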
What if things aren’t that simple?
Lots of other issues can throw a spanner in the works.
● Heterogeneous machines
● Varying machine performance
● Varying traffic mixes
● Multiple datacenters
● Multi-tiered services
As a general rule try to keep things simple. A perfect model is brittle and usually
takes more time than it’s worth.
Doesn’t autoscaling take care of all this for me?
Short answer Long answer
Doesn’t autoscaling take care of all this for me?
Short answer
No
Long answer
Doesn’t autoscaling take care of all this for me?
Short answer
No
Long answer
Haha, Haha.
No
Doesn’t autoscaling take care of all this for me?
EC2 Autoscaling can eliminate some of the day-to-day work in provisioning
servers.
There’s operational and complexity overhead, as you have to maintain images and
systems that can be spun up.
You have to wait for instances to spin up, so you can’t rely on it completely for
sudden spikes. You need to do the math to tune it to handle spikes.
You still have to tune everything. Control systems are hard.
Wrapping Up
Monitoring Matters
A common thread through this workshop is that monitoring is what should be
providing you the information you need to make operational decisions.
Make sure you have a good monitoring system.
Logs are not monitoring, though better than nothing.
I recommend Prometheus.io: If it didn’t exist I would have created it.
Production Matters
Provisioning and capacity planning is just one aspect of production. There are many
others involved in running your company:
Robust Perception can help you with all of this and more.
● Deployment
● Change Management
● Configuration Management
● Reliability
● Architecture
● Design Feasibility
● Cost Management
● Performance Tuning
● SLAs
● Contract Sanity Check
● Debugging
● Alerting
● Oncall
● Incident Management
Questions?
Blog: www.robustperception.io/blog
Twitter: @RobustPerceiver
Email: brian.brazil@robustperception.io
Linkedin: https://ie.linkedin.com/in/brianbrazil

Provisioning and Capacity Planning Workshop (Dogpatch Labs, September 2015)
