Innovations In Apache Hadoop MapReduce, Pig and Hive for improving query performance

gopalv@apache.org
vinodkv@apache.org




© Hortonworks Inc. 2013
Operation Stinger




Performance at any cost




• Scalability
   – Already works great, just don’t break it for performance gains
• Isolation + Security
   – Queries between different users run as different users
• Fault tolerance
   – Keep all of MR’s safety nets to work around bad nodes in clusters
• UDFs
   – Make sure they are “User” defined and not “Admin” defined




First things first
• How far can we push Hive as it exists today?




Benchmark spec
• The TPC-DS benchmark data+query set
• Query 27 (big joins small)
  – For all items sold in stores located in specified states during a given
    year, find the average quantity, average list price, average list sales
    price, average coupon amount for a given gender, marital status,
    education and customer demographic.
• Query 82 (big joins big)
  – List all items and current prices sold through the store channel from
    certain manufacturers in a given price range and consistently had a
    quantity between 100 and 500 on hand in a 60-day period.
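The "big joins small" shape of Query 27 can be sketched in a few lines: a large fact table joined against a small dimension, then grouped averages. The table and column names below are illustrative stand-ins, not the TPC-DS schema verbatim.

```python
# Hedged sketch of the Query 27 shape: big fact table, small dimension,
# filter on the dimension, then per-group averages.
from statistics import mean

store_sales = [  # (item_id, store_id, quantity, list_price) - illustrative
    (1, 10, 5, 9.99), (2, 10, 3, 19.99), (1, 20, 7, 9.99),
]
stores = {10: "TN", 20: "TN"}   # small dimension table: store -> state
wanted_states = {"TN"}

# Join the big table against the small one (the map-side/broadcast join
# case), keep only the wanted states, then aggregate per item.
by_item = {}
for item, store, qty, price in store_sales:
    if stores.get(store) in wanted_states:
        by_item.setdefault(item, []).append((qty, price))

averages = {item: (mean(q for q, _ in vals), mean(p for _, p in vals))
            for item, vals in by_item.items()}
# averages[1] -> (6, 9.99)
```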




TL;DR
• TPC-DS Query 27, Scale=200, 10 EC2 nodes (40 disks)




TL;DR - II
• TPC-DS Query 82, Scale=200, 10 EC2 nodes (40 disks)




Forget the actual benchmark
• First of all, YMMV
  – Software
  – Hardware
  – Setup
  – Tuning
• Text formats seem to be the staple of all comparisons
  – Really?
  – Everybody’s using it but only for benchmarks!




What did the trick?
• MapReduce?
• HDFS?
• Or is it just Hive?




Optional Advice




RCFile
• Binary RCFiles
• Hive pushes down column projections
• Less I/O, Less CPU
• Smaller files
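Why projection pushdown means less I/O can be sketched with a hypothetical column-wise row group (this is not the actual RCFile reader, just the idea):

```python
# Hypothetical sketch: storing a row group column-wise means a column
# projection can skip whole columns on disk instead of parsing every row.
row_group = {
    "item_id":  [1, 2, 3, 4],
    "state":    ["CA", "NY", "CA", "TX"],
    "quantity": [10, 20, 30, 40],
    "price":    [9.99, 19.99, 4.99, 2.49],
}

def project(row_group, columns):
    """Read only the requested columns; the others are never touched."""
    return {c: row_group[c] for c in columns}

# A query touching 2 of 4 columns reads roughly half the bytes.
subset = project(row_group, ["state", "quantity"])
print(sorted(subset))  # ['quantity', 'state']
```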




Data organization
• No data system at scale is loaded once & left alone
• Partitions are essential
• Data flows into new partitions every day




A closer look
• Now revisiting the benchmark and its results




Query 27 - Before




Before




Query 27 - After




After




Query 82 - Before




Query 82 - After




What changed?
• Job Count/Correct plan
• Correct data formats
• Correct data organization
• Correct configuration




What changed?
• Data Formats
• Data Organization
• Query Plan




Is that all?
• NO!
• In Hive
   – Metastore
   – RCFile issues
   – CPU intensive code
• In YARN+MR
   – Parallelism
   – Spin-up times
   – Data locality
• In HDFS
   – Bad disks/deteriorating nodes



In Hive
Hive Metastore
• 1+N Select problem
  – SELECT partitions FROM tables;
  – /* for each needed partition */ SELECT * FROM Partition ..
  – For Query 27, generates > 5000 queries! 4-5 seconds lost on each call!
  – Lazy loading or Include/Join are general solutions
• Datanucleus/ORM issues
  – 100K NPEs: try.. catch.. ignore..
• Metastore DB Schema revisit
  – Denormalize some/all of it?
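The 1+N select pattern above can be sketched as follows; `fetch_partition` and `fetch_partitions_bulk` are hypothetical stand-ins for the metastore/ORM round trips, not real Hive APIs:

```python
# Sketch of the 1+N select problem: one listing query plus one query per
# partition, versus a single batched fetch.
calls = []

def fetch_partition(name):            # one round trip per partition
    calls.append(name)
    return {"name": name}

def fetch_partitions_bulk(names):     # one round trip for all of them
    calls.append(tuple(names))
    return [{"name": n} for n in names]

partitions = [f"ds={i}" for i in range(5000)]

# 1+N style: 5000 round trips for 5000 partitions.
calls.clear()
parts = [fetch_partition(n) for n in partitions]
assert len(calls) == 5000

# Batched style (the "Include/Join" fix): one round trip.
calls.clear()
parts = fetch_partitions_bulk(partitions)
assert len(calls) == 1
```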




RCFile issues
• RCFiles do not split well
   – Row groups and row group boundaries
• Small row groups vs big row groups
   – Sync() vs min split
   – Storage packing
• Run-length information is lost
   – Unnecessary deserialization costs
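The run-length point can be sketched as follows: once a reader must hand back one deserialized row at a time, the run structure is gone and every value is touched again. `rle_encode` is illustrative, not Hive code:

```python
# Sketch: run-length encoding keeps repeated values as (value, count) pairs;
# row-at-a-time decoding re-expands the runs and pays per-value cost anyway.
def rle_encode(values):
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

column = ["CA"] * 1000 + ["NY"] * 1000
runs = rle_encode(column)
print(len(runs))  # 2 runs instead of 2000 values

# Row-at-a-time decoding touches every value again:
decoded = [v for v, n in runs for _ in range(n)]
assert decoded == column
```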




ORC file format
• A single file as output of each task.
  – Dramatically simplifies integration with Hive
  – Lowers pressure on the NameNode
• Support for the Hive type model
  – Complex types (struct, list, map, union)
  – New types (datetime, decimal)
  – Encoding specific to the column type
• Split files without scanning for markers
• Bound the amount of memory required for
  reading or writing.


CPU intensive code




CPU intensive code
• Hive query engine processes one row at a time
   – Very inefficient in terms of CPU usage
• Lazy deserialization: layers
• Object inspector calls
• Lots of virtual method calls




Tighten your loops




Vectorization to the rescue
• Process a row batch at a time instead of a single row
• Row batch to consist of column vectors
   – The column vector will consist of array(s) of primitive types as far as
     possible
• Each operator will process the whole column vector at a
  time
• File formats to give out vectorized batches for processing
• Underlying research promises
   – Better instruction pipelines and cache usage
   – Mechanical sympathy
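A rough sketch of the difference, in plain Python rather than Hive's JVM operators (in the real engine the win comes from tight loops over arrays of primitives, better branch prediction, and cache locality):

```python
# Sketch: row-at-a-time filtering vs processing a batch of column vectors.
rows = [(i, i * 2.0) for i in range(1024)]   # (quantity, price) pairs

def filter_row_at_a_time(rows, threshold):
    out = []
    for qty, price in rows:                  # per-row dispatch overhead
        if price > threshold:
            out.append((qty, price))
    return out

def filter_vectorized(qty_vec, price_vec, threshold, batch=256):
    out_q, out_p = [], []
    for start in range(0, len(qty_vec), batch):
        q = qty_vec[start:start + batch]     # one tight loop per batch
        p = price_vec[start:start + batch]
        for i in range(len(q)):
            if p[i] > threshold:
                out_q.append(q[i])
                out_p.append(p[i])
    return out_q, out_p

qty = [r[0] for r in rows]
price = [r[1] for r in rows]
a = filter_row_at_a_time(rows, 100.0)
b = filter_vectorized(qty, price, 100.0)
assert len(a) == len(b[0])                   # same answer either way
```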




Vectorization: Prelim results
• Functionality
   – Some arithmetic operators and filters using primitive type columns
   – Have a basic integration benchmark to prove that the whole setup
     works
• Performance
   – Micro benchmark
   – More than 30x improvement in the CPU time
   – Disclaimer:
       – Micro benchmark!
       – Does not include I/O or deserialization costs, or complex and string datatypes




In YARN+MR
Data Locality
• CombineInputFormat
• AM interaction with locality
• Short-circuit reads!
• Delay scheduling
   – Good for throughput
   – Bad for latency
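The throughput/latency trade-off of delay scheduling can be sketched as a bounded wait for a node-local slot; the scheduler below is a toy model, not the YARN implementation:

```python
# Sketch of delay scheduling: skip up to max_delay scheduling opportunities
# waiting for a node-local slot before accepting any slot.
def schedule(task_hosts, free_slots, max_delay):
    """free_slots: hostnames offered over successive scheduling rounds."""
    for attempt, host in enumerate(free_slots):
        if host in task_hosts:
            return host, attempt      # local slot: best for throughput
        if attempt >= max_delay:
            return host, attempt      # give up: bounds the latency cost
    return None, None

# Replica lives on h3; slot offers arrive h1, h2, h3, ...
host, waited = schedule({"h3"}, ["h1", "h2", "h3", "h4"], max_delay=5)
assert (host, waited) == ("h3", 2)

# With no delay allowed, take the first (non-local) slot immediately.
host, waited = schedule({"h3"}, ["h1", "h2", "h3"], max_delay=0)
assert (host, waited) == ("h1", 0)
```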




Parallelism
• Can tune it (to some extent)
   – Controlling splits/reducer count
• Hive doesn’t know dynamic cluster status
   – Benchmarks max out clusters, real jobs may or may not
• Hive does not let you control parallelism
   – particularly in case of multiple jobs in a query
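The tunable part is roughly input size divided by a per-reducer byte target, capped by a maximum; `hive.exec.reducers.bytes.per.reducer` and `hive.exec.reducers.max` are the real knobs, though defaults and exact behavior vary by Hive version:

```python
# Sketch of Hive's reducer-count heuristic (defaults are illustrative).
import math

def estimate_reducers(input_bytes, bytes_per_reducer=1 << 30, max_reducers=999):
    """ceil(input / per-reducer target), clamped to [1, max_reducers]."""
    return max(1, min(max_reducers, math.ceil(input_bytes / bytes_per_reducer)))

assert estimate_reducers(10 * (1 << 30)) == 10        # 10 GB -> 10 reducers
assert estimate_reducers(100) == 1                    # tiny input -> 1
assert estimate_reducers(10_000 * (1 << 30)) == 999   # capped by the max
```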




Spin-up times
• AM startup costs
• Task startup costs
• Multiple waves of map tasks




Apache Tez
• Generic DAG workflow
• Container re-use
• AM pool service




AM Pool Service
• Pre-launches a pool of AMs
• Jobs submitted to these pre-launched AMs
  – Saves 3-5 seconds
• Pre-launched AMs can pre-allocate containers
• Tasks can be started as soon as the job is submitted
  – Saves 2-3 seconds




Container reuse
• Tez MapReduce AM supports Container reuse
• Launched JVMs are re-used between tasks
  – about 4-5 seconds saved in case of multiple waves
• Allows future enhancements
  – re-using task data structures across splits
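The arithmetic behind container reuse is simple amortization of a fixed startup cost; the numbers below are illustrative only:

```python
# Sketch: pay JVM/container startup once per container, not once per task.
STARTUP = 4.0   # seconds to launch a JVM/container (illustrative)
TASK = 2.0      # seconds of useful work per task (illustrative)

def total_time(tasks, reuse):
    if reuse:
        return STARTUP + tasks * TASK        # startup paid once
    return tasks * (STARTUP + TASK)          # startup paid per task

assert total_time(10, reuse=False) == 60.0   # multiple waves, no reuse
assert total_time(10, reuse=True) == 24.0    # same work, reused containers
```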




In HDFS
Speculation/bad disks
• No cluster remains at 100% health forever
• Bad disks cause latency issues
  – Speculation is one defense, but it is not enough
  – Fault tolerance is a safety net
• Possible solutions:
  – More feedback from HDFS about stale nodes, bad/slow disks
  – Volume scheduling




General guidelines
• Benchmarking
  – Be wary of benchmarks! Including ours!
  – Algebra with X




General guidelines contd.
• Benchmarks: To repeat, YMMV.
• Benchmark *your* use-case.
• Decide your problem size
   – if (smallData) {
         use MySQL/Postgres/your smartphone
     } else {
         make it work
         make it scale
         make it faster
     }
• If it is (seems to be) slow, file a bug, spend a little time!
• Replacing systems without understanding them
   – Is an easy way to have an illusion of progress



Related talks
• “Optimizing Hive Queries” by Owen O’Malley
• “What’s New and What’s Next in Apache Hive” by Gunther
  Hagleitner




Credits
• Arun C Murthy
• Bikas Saha
• Gopal Vijayaraghavan
• Hitesh Shah
• Siddharth Seth
• Vinod Kumar Vavilapalli
• Alan Gates
• Ashutosh Chauhan
• Vikram Dixit
• Gunther Hagleitner
• Owen O’Malley
• Jitendra Nath Pandey
• Yahoo!, Facebook, Twitter, SAP and Microsoft all contributing.

Q&A
• Thanks!






Editor's Notes

  • #11: Since the time we started this, we’ve seen multiple people benchmark hive comparing its text format processors against alternatives
  • #12: Not mapreduce, not hdfs, just plain hive
  • #35: Layers of inspectors that identify column type, de-serialize data and determine appropriate expression routines in the inner loop
  • #38: I wrote all of the code and Jitendra was just consulting :P