Emma Tang, Neustar
Optimal Strategies for
Large-Scale Batch ETL
Jobs
#EUdev3 October 2017
2#EUdev3
https://www.neustar.biz/marketing
Neustar
• Help the world’s most valuable brands
understand and target their consumers both
online and offline
• Maximize ROI on ad spend
• Billions of user events per day, petabytes of data
3#EUdev3
Architecture (simplified view)
4#EUdev3
Batch ETL
• Runs on a schedule or is triggered programmatically
• Aims for complete utilization of cluster resources,
especially memory and CPU
5#EUdev3
Why Batch?
• We care about historical state
• We don't have an SLA other than 1-3x daily delivery
• Efficient, tuned use of resources; cost efficiency
6#EUdev3
What we will talk about today
• Issues at scale
• Skew
• Optimizations
• Ganglia
7#EUdev3
The attribution problem
• At Neustar, we process large quantities of ad
events
• Familiar events like: impressions, clicks,
conversions
• Which impression/click contributed to
conversion?
8#EUdev3
Example attribution
• Alice goes to her favorite news site, and sees 3
ads – impressions
• She clicks on one of them that leads to Macy’s –
click
• She buys something on Macy’s – conversion
• Her purchase can be attributed to the click and
impression events
9#EUdev3
The approach
• Join conversions with impressions and clicks on
userId
• Go through each user and attribute conversions
to the correct target events (impressions/clicks)
• The most recent target events are valued more, so
timestamps matter
10#EUdev3
The scale
• Impressions: 250 billion
• Clicks: 20 billion
• Conversions: 50 billion
• Join 50 billion x 250 billion
11#EUdev3
[Venn diagram: relative sizes of the impressions, clicks, and conversions datasets]
What we will talk about today
• Issues at scale
• Skew
• Optimizations
• Ganglia
12#EUdev3
Driver OOM
Exception in thread "map-output-dispatcher-12" java.lang.OutOfMemoryError
at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:253)
at java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:211)
at java.util.zip.GZIPOutputStream.write(GZIPOutputStream.java:145)
at java.io.ObjectOutputStream$BlockDataOutputStream.writeBlockHeader(ObjectOutputStream.java:1894)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1875)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply$mcV$sp(MapOutputTracker.scala:615)
at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:614)
at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:614)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1287)
at org.apache.spark.MapOutputTracker$.serializeMapStatuses(MapOutputTracker.scala:617)
13#EUdev3
Driver OOM
• The driver holds an array of MapStatuses of size m, one
per map task; each status records how that task's output
block is consumed by each of the n reducers
• Memory grows as m (mappers) x n (reducers): e.g. the
300k mappers x 75k reducers on the next slide is roughly
22.5 billion entries to track
14#EUdev3
[Diagram: Status1 and Status2 each hold one entry per reducer (reducer1, reducer2)]
Driver OOM
• 2 types of MapStatus: CompressedMapStatus vs
HighlyCompressedMapStatus
• HighlyCompressedMapStatus tracks only the average
reduce-partition size, plus a bitmap tracking
which blocks are empty for each reducer
15#EUdev3
Driver OOM
• Reduce the number of partitions on either side
• 300k x 75k -> 100k x 75k
16#EUdev3
Disable unnecessary GC
• spark.cleaner.periodicGC.interval
• GC cycles “stop the world”
• Large heaps mean longer GC pauses
• Set it to a long period, e.g. twice the expected length
of your job (sketch below)
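A minimal sketch of wiring this up (Java, matching the deck's later snippets; the app name is hypothetical, and 600min is the value from the final configuration table). The same pattern applies to the timeout and maxFailures settings on the next slides:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Push the ContextCleaner's periodic GC far past the job's expected runtime,
// so no "stop the world" cycle fires mid-job.
SparkConf conf = new SparkConf()
    .setAppName("attribution-batch") // hypothetical app name
    .set("spark.cleaner.periodicGC.interval", "600min");
JavaSparkContext jsc = new JavaSparkContext(conf);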
17#EUdev3
Disable unnecessary GC
• ContextCleaner uses weak references to keep
track of every RDD, ShuffleDependency, and
Broadcast, and registers when the objects go
out of scope
• periodicGCService is a single-thread executor
service that calls the JVM garbage collector
periodically
18#EUdev3
Allow extra time
• spark.rpc.askTimeout
• spark.network.timeout
• When GC does hit, our heap is so large that pauses can
exceed the default timeouts
19#EUdev3
Spurious failures
• Reading from S3 can be flaky, especially when
reading millions of files
• Set spark.task.maxFailures higher than the default
of 3
• We keep it below 10 so that true errors still propagate
quickly
20#EUdev3
What we will talk about today
• Issues at scale
• Skew
• Optimizations
• Ganglia
21#EUdev3
The skew
• Extreme skew in data
• A few users have 100k+ events over 90 days; the
average user has < 50
• Executors were dying due to a handful of extremely
large partitions
22#EUdev3
The skew
• Out of 20.5B users, 20.2B have < 50 events
23#EUdev3
[Chart: count of users with number of events, bucketed by 1000s; x-axis: # of events (0 to 504,000), y-axis: # of users (0 to 2.5E+10)]
The skew: zoom
24#EUdev3
[Chart (zoom, > 75k events): # of users with # of events bucketed by 100s; x-axis: # of events (75,000 to 504,000), y-axis: # of users (0 to 35)]
Strategy: increase # of partitions
• First line of defense: increase the number of
partitions so skewed data is more spread out (sketch below)
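A hedged sketch of the idea (Java; conversions and targetEvents are hypothetical JavaPairRDDs keyed by userId, Conversion/Event are hypothetical event types, and the partition count is illustrative):

import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;

// Pass an explicit (larger) partition count into the join so skewed keys
// are spread across more, smaller reduce partitions.
int numPartitions = 100_000; // tune to your data volume
JavaPairRDD<String, Tuple2<Conversion, Event>> joined =
    conversions.join(targetEvents, numPartitions);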
25#EUdev3
Strategy: Nest
• Group conversions by userId and target events by
userId, then join the grouped lists (cogroup sketch below)
• Avoids Cartesian joins
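A hedged sketch of the nesting via cogroup (Java; the RDDs and event types are hypothetical, and attribute(...) is a hypothetical per-user helper returning a List<Attribution>):

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import scala.Tuple2;

// cogroup hands each userId its full list of conversions and target events
// in one pass, so attribution walks two small lists per user instead of
// joining into a per-user Cartesian product of rows.
JavaPairRDD<String, Tuple2<Iterable<Conversion>, Iterable<Event>>> nested =
    conversions.cogroup(targetEvents, 100_000); // explicit partition count, as above

JavaRDD<Attribution> attributed = nested.flatMap(entry ->
    attribute(entry._2()._1(), entry._2()._2()).iterator()); // hypothetical helper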
26#EUdev3
Long tail: Ganglia
27#EUdev3
Long tail: Spark UI
• 50 min long tail, median 24 s
28#EUdev3
Long tail: what else to do?
• If you have domain-specific knowledge of your
data, use it to filter “bad” data out
• Salt your data and shuffle twice (but shuffling is
expensive)
• Use a Bloom filter if one side of your join is much
smaller than the other
29#EUdev3
Bloom Filter
• Space-efficient probabilistic data structure for testing
whether an element is a member of a set
• Size is mainly determined by the number of items in
the filter and the desired false-positive probability
• No false negatives!
• Broadcast the filter out to the executors (sketch below)
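A hedged sketch using Guava's BloomFilter (putAll requires Guava 23+; the RDDs, types, jsc, and expectedUsers sizing are hypothetical, and 5% matches the false-positive rate discussed on the next slide):

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.broadcast.Broadcast;

// Build one filter per partition of the smaller side, then merge on the driver.
long expectedUsers = 1_000_000_000L; // hypothetical sizing
BloomFilter<CharSequence> merged = conversions.keys()
    .mapPartitions(it -> {
        BloomFilter<CharSequence> bf = BloomFilter.create(
            Funnels.stringFunnel(StandardCharsets.UTF_8), expectedUsers, 0.05);
        it.forEachRemaining(bf::put); // add every userId in this partition
        return Collections.singletonList(bf).iterator();
    })
    .reduce((a, b) -> { a.putAll(b); return a; }); // merge partition-local filters

// Broadcast the filter; drop impressions whose userId cannot possibly join.
Broadcast<BloomFilter<CharSequence>> bcast = jsc.broadcast(merged);
JavaPairRDD<String, Impression> candidates =
    impressions.filter(t -> bcast.value().mightContain(t._1()));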
30#EUdev3
Bloom Filter
• Even with a high false-positive rate, it is still a very
effective filter
• P = 5% -> 80% of events filtered out
• The subsequent join is much faster
31#EUdev3
Bloom Filter
• Tradeoff between accuracy & size
• We've had great success with Bloom filters under
5 GB in size
• Experiment with the size/accuracy parameters for your data
32#EUdev3
Bloom Filter Applied
• For 50 billion conversions at a 0.1% false-positive
rate, the filter size is 80 GB
• At a 5% false-positive rate, the filter size is 35 GB
• Still too big
33#EUdev3
Long tail: what else to do?
• If you have domain-specific knowledge of your
data, use it to filter “bad” data out
• Salt your data and shuffle twice (but shuffling is
expensive)
• Use a Bloom filter if one side of your join is much
smaller than the other
34#EUdev3
Long tail: Ganglia
35#EUdev3
Long tail: what is it doing?
• Look at executor threads during long tail
com.esotericsoftware.kryo.util.IdentityObjectIntMap.clear(IdentityObjectIntMap.java:382)
com.esotericsoftware.kryo.util.MapReferenceResolver.reset(MapReferenceResolver.java:65)
com.esotericsoftware.kryo.Kryo.reset(Kryo.java:865)
com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:630)
org.apache.spark.serializer.KryoSerializationStream.writeObject(KryoSerializer.scala:209)
org.apache.spark.serializer.SerializationStream.writeValue(Serializer.scala:134)
org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:239)
org.apache.spark.util.collection.WritablePartitionedPairCollection$$anon$1.writeNext(WritablePartitionedPairCollection.scala:56)
org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:699)
36#EUdev3
Long tail: what is it doing?
• Mappers are taking a long time writing to shuffle space
• We need to reduce the data size before it goes into the
shuffle
37#EUdev3
Long tail: what is it doing?
• Events in the long tail carried almost identical
information, spread over time
• For each user, retaining just 1 event per hour caps
90 days at around 2k events (90 x 24 = 2,160 hours)
• However, this means grouping by user first, which
requires a shuffle, which defeats the whole purpose
of this exercise, right?
38#EUdev3
Strategy: Filter during map-side combine
• Use combineByKey and maximize the map-side
combine
• Thin the collection out during the map-side combine ->
less is written to shuffle space (sketch below)
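A hedged sketch of the map-side thinning (Java; eventsByUser, Event, and the hourBucket helper, e.g. event timestamp / 3_600_000L, are hypothetical):

import java.util.HashMap;
import java.util.Map;
import org.apache.spark.api.java.JavaPairRDD;

// Keep at most one event per (user, hour) bucket while combining on the map
// side, so far less data is ever written to shuffle space.
JavaPairRDD<String, Map<Long, Event>> thinned = eventsByUser.combineByKey(
    e -> { Map<Long, Event> m = new HashMap<>(); m.put(hourBucket(e), e); return m; },
    (m, e) -> { m.putIfAbsent(hourBucket(e), e); return m; },
    (m1, m2) -> { m2.forEach(m1::putIfAbsent); return m1; });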
39#EUdev3
40#EUdev3
Still slow…
• What else can I do?
41#EUdev3
What we will talk about today
• Issues at scale
• Skew
• Optimizations
• Ganglia
42#EUdev3
Avoid shuffles
• Reuse the same partitioner instance across dependent RDDs (sketch below)
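A hedged sketch (Java; the RDD names and partition count are hypothetical):

import org.apache.spark.HashPartitioner;
import org.apache.spark.api.java.JavaPairRDD;

// Partition both sides with the SAME partitioner instance; the join then
// sees matching partitioning and does not re-shuffle either side.
HashPartitioner partitioner = new HashPartitioner(100_000);
JavaPairRDD<String, Conversion> convByUser =
    conversions.partitionBy(partitioner).cache();
JavaPairRDD<String, Event> targetsByUser =
    targetEvents.partitionBy(partitioner).cache();
convByUser.join(targetsByUser); // co-partitioned: no extra shuffle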
43#EUdev3
The DAG
44#EUdev3
Avoid shuffles
• Denormalize or union data to minimize shuffles
• Rely on the fact that we reduce into a highly
compressed key space
• For example: we want the count of events by
campaign, and also the count of events by site (sketch below)
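A hedged sketch of the union trick (Java; events and the campaignId/siteId accessors are hypothetical):

import java.util.Arrays;
import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;

// Emit one tagged key per dimension for each event and reduce once, instead
// of shuffling the full event data twice for two separate counts. The reduced
// key space (campaigns + sites) is tiny compared to the raw events.
JavaPairRDD<String, Long> counts = events.flatMapToPair(e -> Arrays.asList(
        new Tuple2<>("campaign:" + e.campaignId(), 1L),
        new Tuple2<>("site:" + e.siteId(), 1L)).iterator())
    .reduceByKey(Long::sum);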
45#EUdev3
Avoid shuffles
46#EUdev3
Coalesce partitions when loading
• Loading many small files: coalesce down the # of
partitions
• No shuffle
• Reduces task overhead, greatly improving speed
• Going from 300k partitions to 60k cut the time in
half
47#EUdev3
Coalesce partitions when loading
final JavaRDD<Event> eventRDD = loadDataFromS3();           // load data
final int loadingPartitions = eventRDD.getNumPartitions();  // inspect how many partitions
final int coalescePartitions = loadingPartitions / 5;       // use a heuristic to calculate the new #
eventRDD
    .coalesce(coalescePartitions)                           // coalesce to a smaller # (no shuffle)
    .map(e -> transform(e));                                // faster subsequent operations
48#EUdev3
Materialize data
• A large chunk of data is persisted in memory
• The large RDD is used to calculate small RDDs
• Use an Action to materialize the smaller
calculated results so the larger data can be
unpersisted
49#EUdev3
Materialize data
parent.cache()                          // persist large parent PairRDD to memory
child1 = parent.reduceByKey(a).cache()  // calculate child1 from parent
child2 = parent.reduceByKey(b).cache()  // calculate child2 from parent
child1.count()                          // perform an Action
child2.count()                          // perform an Action
parent.unpersist()                      // safe to mark parent as unpersisted;
                                        // the rest of the code can use the memory
50#EUdev3
What we will talk about today
• Issues at scale
• Skew
• Optimizations
• Ganglia
51#EUdev3
Ganglia
• Ganglia is an extremely useful tool for
understanding performance bottlenecks and
tuning for the highest cluster utilization
52#EUdev3
Ganglia: CPU wave
53#EUdev3
Ganglia: CPU wave
• Executors are going into GC multiple times in
the same stage
• They are running out of execution memory
• Persist to StorageLevel.DISK_ONLY() instead (sketch below)
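A one-line sketch (eventRDD is a hypothetical large RDD reused within the stage):

import org.apache.spark.storage.StorageLevel;

// Keep execution memory free for the stage by persisting to local disk
// instead of competing for heap.
eventRDD.persist(StorageLevel.DISK_ONLY());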
54#EUdev3
Ganglia: inefficient use
55#EUdev3
Ganglia: inefficient use
• Decrease the # of partitions of the RDDs used in this
stage
56#EUdev3
Ganglia: much better
57#EUdev3
Final Configuration
• Master: 1 r3.4xl
• Executors: 110 r3.4xl
• Configurations:
58#EUdev3
classification | property                           | value
spark          | maximizeResourceAllocation         | TRUE
spark-defaults | spark.executor.cores               | 16
spark-defaults | spark.dynamicAllocation.enabled    | FALSE
spark-defaults | spark.driver.maxResultSize         | 8g
spark-defaults | spark.rpc.message.maxSize          | 2047
spark-defaults | spark.rpc.askTimeout               | 300
spark-defaults | spark.network.timeout              | 300s
spark-defaults | spark.executor.heartbeatInterval   | 20s
spark-defaults | spark.executor.memory              | 92540m
spark-defaults | spark.yarn.executor.memoryOverhead | 23300
spark-defaults | spark.task.maxFailures             | 10
spark-defaults | spark.executor.extraJavaOptions    | -XX:+UseG1GC
spark-defaults | spark.cleaner.periodicGC.interval  | 600min
Summary
• Large jobs are special; use special settings
• Outsmart the skew
• Use Ganglia!
59#EUdev3
Thank you
60#EUdev3
Emma Tang
@emmayolotang
@Neustar