Autoscaling
Timothy Farkas
Senior Software Engineer @ Netflix
Problem Definition
Our Pain
● Thousands of stateless single source and single sink Flink routers.
● All operators are chained.
● When lag for a router exceeds a threshold we are paged.
(Operator chain: Kafka Consumer → Project → Filter → Sink)
Definitions
● Workload: Events being produced to a Kafka topic. Two main knobs to turn:
○ Message Rate
○ Message Size
● Lag: The time it would take for a router to process all the remaining
unprocessed events buffered in its Kafka topic.
● Healthy Router: A router is healthy if its lag is always under ~5 minutes.
● Autoscaling Solution: Adjust the number of nodes in the router dynamically
based on the workload to keep the router healthy. Attempt to use the
smallest number of nodes that are required to keep the pipeline healthy.
Solution Space
● Claim: There is no perfect solution. Any autoscaling algorithm can be defeated
by one or more workloads.
● Proof: Take any autoscaling algorithm A. Provide A with a workload W that
does the exact opposite of what A expects whenever A decides to resize the
cluster. =>
A will always make the wrong decision for W by definition. =>
Q.E.D.
Given that, we:
○ Understand our limitations.
○ Make assumptions about our workloads.
○ Build a solution that works well when taking both into account.
Limitations
● Rescaling introduces processing pauses.
● Scaling down a Flink job suspends processing for 1 - 3 minutes and possibly
more.
○ CHEAP: Graceful shutdown with savepoint.
○ EXPENSIVE: Remove TMs.
○ CHEAP: Restart from savepoint with reduced parallelism.
● Scaling up a Flink job suspends processing for less than 1 minute.
○ EXPENSIVE: Add TMs.
○ CHEAP: Graceful shutdown with savepoint.
○ CHEAP: Restart from savepoint with increased parallelism.
● There is a two minute delay for propagating metrics through Netflix’s metrics
infrastructure.
Assumptions
● Better to accidentally over-allocate than to under-allocate.
● Average message size changes infrequently.
● Large spikes in the workload happen, but not frequently.
● Workloads tend to increase or decrease smoothly when they don’t have a
large spike.
Solution
Desirable Characteristics
● Minimal amount of state
● Deterministic behavior
● Easy to unit test
● Easy to control
Approaches
● Historical Prediction
● Rule Based
● PID Controller
● Statistical Short Term Prediction + Policies
Autoscaling - High Level Steps
● Collect: Receive a batch of metrics for the current 1 minute timebucket.
● Pre-Decision Policy: Apply policies which decide whether the cluster can be
rescaled or whether performance information can be collected about the cluster.
● Decide: Based on the latest batch of metrics, decide whether to:
○ Scale up
○ Scale down
○ Stay the same
○ Also collects cluster performance information
● Calculate Size: If scaling up or down, decide how many nodes need to be
added or removed.
● Post-Decision Policy: Apply policies which can modify scale up and scale
down decisions. (A skeleton of this loop is sketched below.)
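To make the control flow concrete, here is a minimal, hypothetical skeleton of the loop in Python. It is not the actual Netflix implementation; the thresholds, the field names (lag_minutes, cpu_util, and so on), and the placeholder calculate_size are all invented for illustration.

```python
from collections import deque

class Autoscaler:
    SCALE_UP, SCALE_DOWN, STAY = "up", "down", "stay"

    def __init__(self, window_minutes=30):
        # Collect: rolling window of the last N one-minute metric batches.
        self.window = deque(maxlen=window_minutes)

    def tick(self, batch):
        self.window.append(batch)
        # Pre-Decision Policy: abort on recent task failures or a redeploy.
        if batch["recent_task_failures"] or batch["redeploying"]:
            return None
        # Decide: scale up, scale down, or stay the same.
        decision = self.decide(batch)
        if decision == self.STAY:
            return None
        # Calculate Size: how many nodes to add or remove.
        target = self.calculate_size(decision)
        # Post-Decision Policy: clamp or veto the decision.
        return self.post_decision(decision, target)

    def decide(self, b):
        # Hypothetical thresholds: ~5 min of lag or 80% utilization.
        if (b["lag_minutes"] > 5 or b["cpu_util"] > 0.8) and b["sink_healthy"]:
            return self.SCALE_UP
        if b["lag_minutes"] == 0 and not b["workload_increasing"]:
            return self.SCALE_DOWN
        return self.STAY

    def calculate_size(self, decision):
        return 10  # placeholder; see the Calculate Size sections below

    def post_decision(self, decision, target):
        return (decision, target)  # real policies clamp or veto here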
Metrics Collection
Each minute, collect the following:
● Kafka consumer lag
● Records processed per second
● CPU utilization
● Max message latency
● Kafka messages in per second
● Net in / out utilization
● Sink health metrics
(Diagram: Kafka → Kafka Consumer → Filter / Projection → Sink within the Router, with buffered events waiting in Kafka.)
Store the metrics for the past N minutes to inform scaling decisions and to run
regression to predict the workload. (One possible shape for a metrics batch is sketched below.)
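A sketch of what a one-minute batch might look like as a record; the field names below are illustrative, not Netflix's actual schema.

```python
from dataclasses import dataclass

@dataclass
class MetricsBatch:
    consumer_lag_seconds: float        # Kafka consumer lag
    records_per_second: float          # records processed per second
    messages_in_per_second: float      # Kafka messages in per second
    cpu_utilization: float             # 0.0 - 1.0
    net_utilization: float             # network in / out utilization
    max_message_latency_seconds: float
    sink_healthy: bool                 # sink health metrics, reduced to a flag
```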
Pre-Decision Policy
Abort autoscaling process if:
● The router has recent task failures
● The router is currently redeploying
Decide - Scale Up
Scale up if:
● There is significant lag AND sink is healthy
● Utilization exceeds the safe threshold AND sink is healthy
Key Insight - Collect cluster performance information:
● If the cluster needs to be scaled up that means the cluster is saturated.
● This is effectively a benchmark for the performance of the cluster at the current
size.
● Save this information in a Performance Table for future scaling decisions.
● More on this in the Performance Table section later.
Decide - Scale Down
Scale down if:
● There is no lag AND we do not anticipate an increase in incoming messages
● More on how we anticipate the incoming message rate in the Predict Workload
section later. (Both decision predicates are sketched below.)
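Both predicates, plus the key insight of recording a benchmark on scale up, might look roughly like the sketch below. The thresholds and field names are assumptions for illustration, not the production values.

```python
def decide(batch, perf_table, lag_threshold_s=300, util_threshold=0.8):
    saturated = (batch["lag_seconds"] > lag_threshold_s
                 or batch["utilization"] > util_threshold)
    if saturated and batch["sink_healthy"]:
        # Key insight: a saturated cluster is a free benchmark. Record
        # (cluster size -> observed max rate) for future sizing decisions.
        perf_table[batch["num_nodes"]] = batch["records_per_second"]
        return "SCALE_UP"
    if batch["lag_seconds"] == 0 and not batch["workload_increasing"]:
        return "SCALE_DOWN"
    return "STAY"
```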
Calculate Size
● Predict Workload: Predict the future workload (messages in per second),
while taking spikes into account.
● Target Events Per Second: Compute target events / sec that the pipeline will
need to handle X minutes from now.
● Cluster Size Lookup: Use the target events / sec to estimate the desired
cluster size, which can handle the workload up to X minutes from now.
Predict Workload
● Quadratic regression for troughs: ax^2 + bx + c
● Linear regression for everything else: ax + b
Spike Detection
● Assume the error for the regression is normally distributed and centered at 0.
● Find the standard deviation of the error.
● Any error greater than 3 * sigma is an outlier.
● After enough consecutive outliers are observed, the baseline is reset.
(Figure sequence: Baseline → First Outlier → Outliers → Baseline Reset. Worked example from the First Outlier figure: Var = (1 + 1 + 1 + 1) / 6 ≈ 0.667, std = sqrt(4 / 6) ≈ 0.816, 3 * std ≈ 2.45. A sketch of the detection loop follows.)
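A sketch of the detection loop under the stated assumptions, showing only the linear case (the quadratic trough case is analogous); numpy's polyfit stands in for whatever regression the real system uses, and the sample data is invented.

```python
import numpy as np

def fit_baseline(minutes, rates):
    a, b = np.polyfit(minutes, rates, deg=1)   # rate ~ a * minute + b
    sigma = (rates - (a * minutes + b)).std()  # assume error ~ N(0, sigma)
    return (lambda t: a * t + b), sigma

def is_outlier(predict, sigma, t, rate):
    return abs(rate - predict(t)) > 3 * sigma

# Establish a baseline on 8 quiet minutes...
minutes = np.arange(8.0)
rates = 1000 + 50 * minutes + np.array([1.0, -1, 1, -1, 1, -1, 1, -1])
predict, sigma = fit_baseline(minutes, rates)

# ...then watch new points; enough consecutive outliers => reset the baseline.
streak = 0
for t, rate in [(8, 1405), (9, 2000), (10, 2100), (11, 2150)]:
    streak = streak + 1 if is_outlier(predict, sigma, t, rate) else 0
    if streak >= 3:
        print(f"baseline reset at minute {t}")  # spike confirmed: refit here
        break
```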
Compute Target Processing Rate
(Timeline figure: a rescale at t1 takes recoveryTime = 15 min, split into restartTime = 2 minutes and catchUpTime = 13 minutes, so processing catches up by t2 = t1 + 15 min; the new size must stay valid until t3 = t1 + 20 min, the validTime. A plausible reconstruction is sketched below.)
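The slide's numbered steps are elided, so the following is only a plausible reconstruction under stated assumptions: the resized cluster must drain the existing backlog plus everything that arrives during recovery within catchUpTime, and remain adequate for the predicted peak through validTime.

```python
RESTART_MIN, CATCH_UP_MIN, VALID_MIN = 2, 13, 20  # from the timeline above

def target_rate(lag_events, predict, t1):
    # Events that arrive while the job restarts and catches up (t1 .. t2),
    # plus the existing backlog, must be drained within catchUpTime.
    incoming = sum(predict(t1 + m) * 60 for m in range(RESTART_MIN + CATCH_UP_MIN))
    drain_rate = (lag_events + incoming) / (CATCH_UP_MIN * 60)
    # The chosen size must also handle the predicted peak until t3 (validTime).
    future_peak = max(predict(t1 + m) for m in range(VALID_MIN + 1))
    return max(drain_rate, future_peak)

# e.g. 2M events of lag with the workload predicted flat at 10,000 msg/s:
print(target_rate(2_000_000, lambda t: 10_000, t1=0))  # ~14,103 events/sec
```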
Cluster Size Lookup - The Performance Table
● Lag and resource usage are high =>
● Pipeline is saturated =>
● We decide to scale up =>
● We know the maximum throughput of the cluster at its current size =>
● Record the performance in a lookup table
● Given a target rate, find the performance records above and below it.
● Do linear interpolation to find a suitable cluster size.

Performance Table
Num Nodes | Max Rate
4         | 10,000
10        | 20,000
18        | 35,000

targetRate = 15,000
ratio = (15,000 - 10,000) / (20,000 - 10,000) = 0.5
clusterSize = 0.5 * 4 + 0.5 * 10 = 7
Cluster Size Lookup - Corner Case
When the target rate exceeds every record, extrapolate from the per-node rate of the largest entry:

Performance Table
Num Nodes | Max Rate
4         | 10,000
10        | 20,000
18        | 35,000

targetRate = 40,000
clusterSize = 40,000 / (35,000 / 18) = 20.57
ceil(clusterSize) = 21
(Both cases are sketched below.)
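Both the interpolation and the above-the-table corner case fit in a small lookup function. The behavior below the smallest record is my assumption (fall back to the smallest recorded size); the rest follows the two worked examples.

```python
import math

PERF_TABLE = [(4, 10_000), (10, 20_000), (18, 35_000)]  # (num_nodes, max_rate)

def cluster_size(target_rate, table=PERF_TABLE):
    table = sorted(table, key=lambda rec: rec[1])
    if target_rate > table[-1][1]:
        # Corner case: above every record -> extrapolate from the largest
        # entry's per-node rate, as in the 40,000 example above.
        nodes, rate = table[-1]
        return math.ceil(target_rate / (rate / nodes))
    if target_rate <= table[0][1]:
        return table[0][0]  # assumption: fall back to smallest recorded size
    # Linear interpolation between the records below and above the target.
    for (n_lo, r_lo), (n_hi, r_hi) in zip(table, table[1:]):
        if r_lo < target_rate <= r_hi:
            ratio = (target_rate - r_lo) / (r_hi - r_lo)
            return math.ceil((1 - ratio) * n_lo + ratio * n_hi)

print(cluster_size(15_000))  # ratio 0.5 -> 0.5 * 4 + 0.5 * 10 = 7
print(cluster_size(40_000))  # 40,000 / (35,000 / 18) = 20.57 -> 21
```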
Cluster Size Lookup - Complexities
● A few more corner cases
● Utilization also needs to be taken into account
● Want the new cluster size to have reasonable resource utilization (60% or less)
Calculate Size - Scale Up vs Scale Down
● The flow and logic are the same
● Minor differences in implementation details
Post-Decision Policy
● Minimum cluster size based on partition count of Kafka topic
● Maximum cluster size based on partition count of Kafka topic
● Cooldown period for scale ups
● Cooldown period for scale downs
● Disable scale downs during region failover (see Region Failover section)
● Safety limit for max scale up, e.g. cannot add more than 50 nodes during a
single scale up
● Safety limit for max cluster size
(Illustrative versions of these clamps are sketched below.)
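An illustrative version of these clamps; every limit below (the floor derived from partition count, the 50-node step, the 10-minute cooldown) is an example value, not Netflix's configuration.

```python
import time

def post_decision(decision, current_size, target_size, partitions, state,
                  max_step=50, cooldown_s=600):
    min_size = max(1, partitions // 16)  # example floor from partition count
    max_size = partitions                # example ceiling: one partition per node
    now = time.time()
    if decision == "SCALE_DOWN":
        # Veto during region failover or inside the scale-down cooldown.
        if state.get("region_failover") or now - state.get("last_scale_down", 0) < cooldown_s:
            return current_size
    if decision == "SCALE_UP":
        if now - state.get("last_scale_up", 0) < cooldown_s:
            return current_size
        target_size = min(target_size, current_size + max_step)  # per-scale-up limit
    return max(min_size, min(max_size, target_size))  # clamp to [min, max]
```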
Running In Production
Architecture Options
● Embed autoscaling in Flink
○ Pros:
■ Lower latency for retrieving metrics.
○ Cons:
■ Complex resource manager interactions get pushed down into Flink.
■ Rescale operation not easily integrated into operations history for the job.
■ Autoscaling changes require a redeploy of the job.
● Run autoscaling as a Mantis pipeline
○ Pros:
■ The Flink service control plane already handles all resource manager interactions, and it can be
re-used for rescaling the job.
■ Flink service control plane keeps history of all rescale actions.
■ Autoscaling can be changed without redeploying jobs.
○ Cons:
■ 2 minute latency for getting metrics.
Autoscaling Architecture
(Diagram: the Autoscaler (Mantis Job) connected to the User Routers, Flink as a Service, Spinnaker, and Atlas, all running on Titus.)
Results
(Result graphs: Sine Wave workload, Linear Spikey workload, Production Router, Fleet Resource Usage.)
Additional Considerations
Memory Requirements
● Direct memory has to be reserved for the Kafka Consumer and Kafka
Producer
● Direct memory cannot be changed for TMs that are already running
● Smaller clusters require more Direct memory (each node handles more
partitions)
● Larger clusters require less Direct memory (each node handles fewer
partitions)
● Deploy cluster with Direct memory that works for the minimum cluster
size
Partition Balancing
(Diagram: 3 TMs with 2 Task Slots each, consuming a topic with 8 partitions; two subtasks get 2 partitions while the other four get 1.)
● TMs with N Task Slots can, in the worst case, be assigned N more partitions than other TMs.
● Aggregate consumer lag and average message latency may be low.
● A few partitions may still have high latency due to the unbalanced distribution of partitions.
● Round up the cluster size so that the maximum possible partitions per subtask is reduced by 1 (see the sketch below).
● Note this is a much looser requirement on cluster size than requiring equal distribution of partitions to subtasks, which allows finer grained cluster size control.
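The "reduce the worst case by one" rounding can be computed directly from the partition count. A hedged sketch, with parallelism meaning total subtasks (TMs × task slots):

```python
import math

def round_up_for_balance(partitions, parallelism):
    worst = math.ceil(partitions / parallelism)  # worst-case partitions per subtask
    if worst <= 1:
        return parallelism
    # Smallest parallelism whose worst case is one partition fewer.
    return math.ceil(partitions / (worst - 1))

print(round_up_for_balance(8, 6))     # worst case 2 -> 8 subtasks give 1 each
print(round_up_for_balance(100, 30))  # worst case 4 -> 34 subtasks give 3
```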
Outlier Containers
(Kernel EDAC log from an outlier host with correctable memory errors:)
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: HANDLING MCE MEMORY ERROR
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: CPU 24: Machine Check Event: 0 Bank 13: cc063f80000800c0
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: TSC 0
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: ADDR 57c24fb4c0
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: MISC 908511010100086
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: PROCESSOR 0:50654 TIME 1567619410 SOCKET 1 APIC 40
[Wed Sep 4 17:50:03 2019] EDAC MC2: 6398 CE memory scrubbing error on CPU_SrcID#1_MC#0_Chan#0_DIMM#0 (channel:0 slot:0 page:0x57c24fb offset:0x4c0 grain:32 syndrome:0x0 - OVERFLOW err_code:0008:00c0 socket:1 imc:0 rank:1 bg:3 ba:2 row:1aacf col:338)
Region Failover
(Diagram: Netflix regions us-west, us-east, and eu-west, each serving Users and running Flink Routers.)
Disable scale downs in the evacuated region until traffic comes back.
Future Work
Eager Scale Up
● Current Scale Up Decision:
○ There is significant consumer lag AND sink is healthy
○ Utilization exceeds the safe threshold AND sink is healthy
This is not ideal, since in most cases latency builds up in the job before a scale up is
triggered.
● Eager Scale Up Decision:
○ Use the performance table to determine the maximum processing rate of the
current cluster.
○ Use regression to determine if the workload will exceed the processing rate
of the cluster in the near future.
○ If this is the case, do a scale up before any lag builds up (see the sketch below).
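A sketch of the proposed eager check: compare the regression's near-term forecast against the current cluster's benchmarked maximum rate. The performance-table shape and the horizon are assumptions for illustration.

```python
def should_scale_eagerly(predict, perf_table, current_nodes, horizon_min=10):
    max_rate = perf_table.get(current_nodes)
    if max_rate is None:
        return False  # no benchmark recorded for this cluster size yet
    forecast = max(predict(m) for m in range(1, horizon_min + 1))
    return forecast > max_rate  # scale up before any lag builds

perf_table = {10: 20_000}  # hypothetical: 10 nodes benchmarked at 20,000 msg/s
print(should_scale_eagerly(lambda m: 18_000 + 300 * m, perf_table, 10))  # True
```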
Downscale Optimization
● Current downscale operation
○ CHEAP: Graceful shutdown with savepoint.
○ EXPENSIVE: Remove TMs.
○ CHEAP: Restart from savepoint with reduced parallelism.
● Optimized downscale operation
○ CHEAP: Graceful shutdown with savepoint.
○ CHEAP: Blacklist TMs that will be removed.
○ CHEAP: Restart from savepoint with reduced parallelism.
○ EXPENSIVE: Remove TMs.
Complex DAGs
● Extend the algorithm to support multiple sources and sinks.
● Handle jobs where not all operators are chained.
Acknowledgements
● Steven Wu
● Andrew Nguonly
● Neeraj Joshi
● Mark Cho
● Netflix Flink Team
● Netflix Mantis Team
● Netflix Data Pipeline Team
● Netflix RTDI Team
● https://www.spinnaker.io/
● https://github.com/Netflix/mantis