Best Practices for Enabling
Speculative Execution on
Large Scale Platforms
Ron Hu, Venkata Sowrirajan
LinkedIn
Agenda
▪ Motivation
▪ Enhancements
▪ Configuration
▪ Metrics and analysis
▪ User guidance
▪ Future work
Speculative Execution
• A stage consists of many parallel tasks, and the stage can only run as fast as its slowest task.
• If one task in a stage is running very slowly, the Spark driver re-launches a speculation task for it on a different host.
• Between the regular task and the speculation task, whichever finishes first is used. The slower task is killed.
(Diagram: a slow regular task ❌ and the speculation task ✅ launched for it on a timeline; the speculation task finishes first and the regular task is killed.)
Default Speculation Parameters

| Configuration Parameter | Default value | Meaning |
| spark.speculation | false | If set to "true", performs speculative execution of tasks. |
| spark.speculation.interval | 100ms | How often Spark will check for tasks to speculate. |
| spark.speculation.multiplier | 1.5 | How many times slower a task must be than the median to be considered for speculation. |
| spark.speculation.quantile | 0.75 | Fraction of tasks which must be complete before speculation is enabled for a particular stage. |
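The check these parameters drive can be sketched in plain Python. This is a simplified model of the driver's per-stage logic, not Spark's actual implementation; the function name and event shapes are illustrative:

```python
import statistics

def tasks_to_speculate(finished_runtimes, running, total_tasks,
                       multiplier=1.5, quantile=0.75):
    """Simplified model of Spark's per-stage speculation check.

    finished_runtimes: runtimes (seconds) of successfully finished tasks
    running: {task_id: elapsed_seconds} for tasks still running
    """
    # Speculation only kicks in once `quantile` of the stage's tasks finished.
    if len(finished_runtimes) < quantile * total_tasks:
        return []
    # A running task becomes a candidate once it is `multiplier` times
    # slower than the median runtime of the finished tasks.
    threshold = multiplier * statistics.median(finished_runtimes)
    return [tid for tid, elapsed in running.items() if elapsed > threshold]

# 8 of 10 tasks finished (>= 0.75 quantile); median 10 s, threshold 15 s.
candidates = tasks_to_speculate([9, 10, 10, 10, 11, 10, 9, 12],
                                {"t8": 40.0, "t9": 12.0}, total_tasks=10)
print(candidates)  # -> ['t8']
```

Note how both knobs interact: lowering the quantile or the multiplier makes the driver speculate earlier and more often.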
Motivation
• Speculation speeds up straggler tasks, but at an additional overhead.
• The default configs are too aggressive in most cases.
• Speculating tasks that run for only a few seconds is mostly wasteful.
• Investigate the impact of data skew, overloaded shuffle services, etc. with speculation enabled.
• What is the impact if we enable speculation by default in a large-scale multi-tenant cluster?
Speculative Execution improvements
• Tasks that run for only a few seconds get speculated, wasting resources unnecessarily.
• Solution: prevent tasks that run for only a few seconds from being speculated.
• Internally, we introduced a new Spark configuration (spark.speculation.minRuntimeThreshold) that prevents a task from being speculated if it has run for less than the minimum threshold time.
• A similar feature was later added to Apache Spark in SPARK-33741.
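The effect of the threshold can be sketched by adding one more filter after the slowness check (illustrative Python, not the actual patch; the 30-second value mirrors the LinkedIn default shown later in this deck):

```python
def filter_by_min_runtime(candidates, elapsed, min_runtime_threshold=30.0):
    """Drop speculation candidates that have not yet run for the minimum
    threshold. Speculating tasks that run only a few seconds mostly wastes
    resources: by the time the copy is scheduled, the original is often done.
    """
    return [tid for tid in candidates
            if elapsed[tid] >= min_runtime_threshold]

# Both tasks were flagged as slow by the multiplier check, but only the
# long-running one is worth a speculative copy.
elapsed = {"t1": 8.0, "t2": 95.0}
print(filter_by_min_runtime(["t1", "t2"], elapsed))  # -> ['t2']
```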
Speculative execution metrics
• Additional metrics are required to understand both the usefulness of, and the overhead introduced by, speculative execution.
• The existing onTaskEnd and onTaskStart events in AppStatusListener are enriched to produce speculation summary metrics for a stage.
Speculative execution metrics
• Added additional metrics for a stage with speculative execution:
• Number of speculated tasks
• Number of successful speculated tasks
• Number of killed speculated tasks
• Number of failed speculated tasks
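A minimal sketch of how such a per-stage summary could be accumulated from task-end events. Plain Python stands in for the enriched AppStatusListener here; the event shape and names are assumptions, not Spark's actual types:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class TaskEnd:                 # assumed, simplified stand-in for onTaskEnd
    stage_id: int
    speculative: bool
    status: str                # "SUCCESS", "KILLED", or "FAILED"

def speculation_summary(events):
    """Per-stage counts of speculated / success / killed / failed tasks."""
    summary = {}
    for e in events:
        if not e.speculative:
            continue           # only speculative copies contribute
        c = summary.setdefault(e.stage_id, Counter())
        c["speculated"] += 1
        c[e.status.lower()] += 1
    return summary

events = [TaskEnd(0, True, "SUCCESS"), TaskEnd(0, True, "KILLED"),
          TaskEnd(0, False, "SUCCESS"), TaskEnd(1, True, "FAILED")]
print(speculation_summary(events)[0]["speculated"])  # -> 2
```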
Speculative execution metrics
• Speculation summary for a stage, with the additional metrics produced from the existing events. (Screenshot of the per-stage speculation summary in the UI.)
Updated Speculation Parameter Values
• Upstream Spark’s default speculation parameter values do not work well for us.
• LinkedIn’s Spark workload is mainly off-line batch jobs plus some interactive analytics.
• We set the speculation parameters to the following default values for most LinkedIn applications. Users can still override them per their individual needs.
| Configuration Parameter | Upstream Default | LinkedIn Default |
| spark.speculation | false | true |
| spark.speculation.interval | 100ms | 1 sec |
| spark.speculation.multiplier | 1.5 | 4.0 |
| spark.speculation.quantile | 0.75 | 0.90 |
| spark.speculation.min.threshold | N/A | 30 sec |
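Expressed as Spark configuration, the LinkedIn defaults look roughly like this. Note that spark.speculation.min.threshold is the LinkedIn-internal key; upstream Spark's equivalent, added by SPARK-33741, is named differently (spark.speculation.minTaskRuntime, to the best of our knowledge):

```python
# Sketch: the LinkedIn defaults as a plain config dict, e.g. to feed into
# SparkSession.builder.config(...) calls.
linkedin_speculation_defaults = {
    "spark.speculation": "true",
    "spark.speculation.interval": "1s",
    "spark.speculation.multiplier": "4.0",
    "spark.speculation.quantile": "0.90",
    # LinkedIn-internal key; on upstream Spark (SPARK-33741) use
    # spark.speculation.minTaskRuntime instead.
    "spark.speculation.min.threshold": "30s",
}

# Usage sketch (assuming pyspark is available):
# builder = SparkSession.builder
# for k, v in linkedin_speculation_defaults.items():
#     builder = builder.config(k, v)
print(linkedin_speculation_defaults["spark.speculation.quantile"])  # -> 0.90
```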
Metrics and Analysis
• We care about ROI (Return On Investment).
• We analyzed
  • the return: the performance gain, and
  • the investment: the overhead, or additional cost.
• We measured various metrics for one week on a large cluster with 10K+ machines.
  • A multi-tenant environment with 40K+ Spark applications running daily.
  • Dynamic allocation enabled.
  • With resource sharing and contention, performance varies due to transient delays/congestion.
Task Level Statistics
• Speculated tasks: 2.73M — total number of launched speculation tasks.
• Fruitful tasks: 1.65M — total number of fruitful speculation tasks.
• Additional tasks: 1.24% — ratio of all launched speculation tasks over all tasks.
• Duration delta: 0.32% — ratio of the duration of all speculation tasks over the duration of all regular tasks.
• Success rate: 60% — speculated tasks’ success rate.
● A speculation task is fruitful if it finishes earlier than the corresponding regular task.
● The conservative values of the config parameters lead to a high success rate.
Stage Level Statistics
• Total eligible stages: 447K
• Stages with speculation tasks: 184K
• Stages with fruitful speculation tasks: 140K
● A stage is eligible for speculation if its duration > 30 seconds and it has at least 10 tasks.
● 41% of eligible stages launched speculation tasks.
● Among the stages that launched speculation tasks, 76% received performance benefits.
Application Level Statistics
• Total applications: 157K
• Applications with speculation tasks: 59K
• Applications with fruitful speculation tasks: 51K
● 38% of all Spark applications launched speculation tasks.
● 87% of them benefited from speculative execution.
● Overall, 32% of all Spark applications benefited from speculative execution.
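The stage- and application-level percentages follow directly from the counts. A quick arithmetic check (on the rounded "K" figures, so the last digit can differ slightly from the slides):

```python
def pct(part, whole):
    """Percentage of `part` in `whole`, rounded to the nearest integer."""
    return round(100 * part / whole)

# Stage level: 447K eligible, 184K speculated, 140K fruitful.
print(pct(184, 447))   # -> 41  (eligible stages that launched speculation)
print(pct(140, 184))   # -> 76  (speculated stages that benefited)

# Application level: 157K total, 59K speculated, 51K fruitful.
print(pct(59, 157))    # -> 38
print(pct(51, 59))     # -> 86  (the exact, unrounded counts give 87%)
print(pct(51, 157))    # -> 32
```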
Case Study
• We analyzed the impact on a mission-critical application.
  • It has a total of 29 Spark application flows.
  • Some flows run daily; some run hourly.
  • Each flow has a well-defined SLA.
• We measured all the flows for
  • two weeks before enabling speculation, and
  • two weeks after enabling speculation.
| Number in Minutes | BEFORE enabling | AFTER enabling | After/Before ratio |
| Geometric mean of the average elapsed times of all flows | 7.44 | 6.47 | 87% (decreased by 13%) |
| Geometric mean of the standard deviation of elapsed times for all flows | 2.91 | 1.71 | 59% (decreased by 41%) |
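The aggregation used above can be sketched as follows. The per-flow numbers in the example are made up for illustration; only the method matches, and the final line reproduces the measured ratio from the table:

```python
import math

def geomean(values):
    """Geometric mean: the n-th root of the product, computed via logs
    for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical mean elapsed times (minutes) for a few flows.
before = [5.1, 9.8, 7.2, 8.3]
after  = [4.4, 8.1, 6.5, 7.4]
print(f"{geomean(after) / geomean(before):.0%}")

# With the measured deck numbers: 6.47 / 7.44, i.e. a 13% decrease.
print(f"{6.47 / 7.44:.0%}")  # -> 87%
```

The geometric mean is a reasonable choice here because flow durations span different scales, so it keeps one long flow from dominating the aggregate.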
Resource Consumption Impact
• (Chart) Resource consumption decreased by 24% after enabling speculation.
User Guidance: Where speculation can help
• A mapper task is slow because its executor is too busy and/or the system hangs due to hardware/software issues.
  • We used to see occasional ‘run-away’ tasks due to system hang issues.
  • After enabling speculation, we rarely see ‘run-away’ tasks.
  • The ‘run-away’ tasks are killed once their corresponding speculation tasks finish earlier.
• The network route is congested somewhere.
• Another copy of the data exists.
  • The regular task normally reads the ‘NODE_LOCAL’/‘RACK_LOCAL’ copy; the speculation task usually reads the ‘ANY’ copy.
  • If the initial task was launched with suboptimal locality, its speculative task can have better locality.
User Guidance: Where speculation cannot help
• Data skew.
• Overloaded shuffle services causing reducer task delays.
• Insufficient memory causing tasks to spill.
• The Spark driver does not know the root cause of a task’s slowness when it launches a speculation task.
Summary
• At LinkedIn, we further enhanced the Spark engine to monitor speculation statistics.
• We shared our configuration settings for effectively managing speculative execution.
  • Depending on your performance goals, you need to decide how much overhead you can tolerate.
• ROI if the speculation parameters are properly set:
  • Investment: a small increase in network messages.
  • Investment: a small overhead in the Spark driver.
  • Return: a good saving in executor resources.
  • Return: a good reduction in job elapsed times.
  • Return: a significant reduction in the variation of elapsed times, leading to more predictable/consistent performance.
Future Work
• Add intelligence to the Spark driver to decide whether or not to launch speculation tasks.
  • Distinguish between manageable and unmanageable causes.
• On the cloud, we may have unlimited resources; however, we may need to factor in the monetary cost.
  • What is the cost of launching additional executors?
Acknowledgement
We want to thank
▪ Eric Baldeschweiler
▪ Sunitha Beeram
▪ LinkedIn Spark Team
for their enlightening discussions
and insightful comments.
Feedback
Your feedback is important to us.
Don’t forget to rate and review the sessions.