Experimentation Platform
on Hadoop
Tony Ng, Director, Data Services
Padma Gopal, Manager, Experimentation
Agenda
 Experimentation 101
 Reporting Workflow
 Why Hadoop?
 Framework Architecture
 Challenges & Learnings
 Q & A
Experimentation 101
• What is A/B Testing?
• Why is it important?
• Intuition vs. Reality
• eBay Wins
What is A/B Testing?
• A/B Testing is comparing two versions of a page or process to see which one performs better
• Variations could be: UI Components, Content, Algorithms etc.
• Measures: Financial metrics, Click rate, Conversion rate etc.
Control - Current design; Treatment - Variations of current design
How is A/B Testing done?
Why is it important?
• Intuition vs. Reality
–Intuition especially on novel ideas should be backed up by data.
–Demographics and preferences vary
• Data Driven; not based on opinion
• Reduce risk
Increased prominence of the BIN (Buy It Now) button compared to Watch leads to faster checkouts.
Merch placements perform much better when title and price information is provided upfront.
New sign-in design effectively pushed more new users to use guest checkout.
What do we support?
Experimentation Reporting
• How does EP work?
• Workflow
• DW Challenges
Experiment Lifecycle
Detail → Intermediate → Summaries

(The original slide is a diagram.) Inputs: user behavior & transactional data (~4 billion rows, ~4 TB) plus experiment metadata.

Detail (raw events): User1 Homepage; User1 Search for iPhone6; User1 View Item1; User2 Search for Coach bag; User2 View Item2; User2 Bid.

Intermediate (events tagged with the treatments the user belongs to): Treatment 2 User1 Homepage; Treatment 1 User1 Search for iPhone6; Treatment 2 User1 Search for iPhone6; Treatment 1 User1 View Item 1; Treatment 2 User1 View Item 1; Treatment 1 User2 Search for Coach bag; Treatment 2 User2 Search for Coach bag.

Summaries (per treatment): 100+ metrics, 20 dimensions, 10 data insights.
Metric categories: Transactional, Activity, Acquisition, Ad, Email, Seller, Engagement.

Statistics reported per metric:
• Absolute - actual numbers/counts
• Normalized - weighted mean (by GUID/UID)
• Lift - difference between treatment and control
• Standard deviation - weighted standard deviation
• Confidence interval - range within which the treatment effect is likely to lie
• P-value - statistical significance
• Outlier capped - trimmed tail values
• Post-stratified - adjustment method to reduce variance

Data insights: Daily, Weekly, Cumulative; broken down by Browser, OS, Device, Site/Country, Category, Segment, Geo.
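As a sketch of how two of these statistics could be computed from the plain definitions above (illustrative Scala; not eBay's production formulas, which the deck does not show):

  // Lift, per the slide: difference between treatment and control
  def lift(treatment: Double, control: Double): Double =
    treatment - control

  // Weighted mean over (value, weight) pairs, e.g. weighting by GUID/UID
  def weightedMean(values: Seq[(Double, Double)]): Double = {
    val (num, den) = values.foldLeft((0.0, 0.0)) {
      case ((n, d), (v, w)) => (n + v * w, d + w)
    }
    num / den
  }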
Hadoop Migration
• Why Hadoop
• Tech Stack
• Architecture Overview
Why Hadoop?
• Design & Development flexibility
• Store large amounts of data without schema constraints
• System to support complex data transformation logic
• Code base reduction
• Configurability
• Code not tied to environment & easier to share
• Support for complex structures
Physical Architecture

(The original slide is a diagram.) A scheduler/client drives job workflows on the Hadoop cluster, which runs Hive, Scoobi, and Spark (PoC) over AVRO and ORC data. User behavior data flows into the cluster; an ETL bridge agent moves results into an RDBMS (MySQL DW) that backs the BI & presentation layer.
Tech Stack - Scoobi
•Scoobi
– Written in Scala, a functional programming language
– Supports Object Oriented Designs
– MR framework code is abstracted to lower levels, leaving developers to focus on business logic
– Portability of typical dataset operations like map, flatMap, filter, groupBy, sort, orderBy, partition
– DList (Distributed Lists): Jobs are submitted as a series of “steps” representing granular MR jobs.
– Enables developers to write more concise code than Java MR code.
Word Count in Java M/R

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

  // Mapper: emit (word, 1) for every token in the input line
  public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String line = value.toString();
      StringTokenizer tokenizer = new StringTokenizer(line);
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sum the counts emitted for each word
  public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
  }
}
Word Count in Scoobi
import Scoobi._, Reduction._

val lines = fromTextFile("hdfs://in/...")
val counts = lines.mapFlatten(_.split(" "))
                  .map(word => (word, 1))
                  .groupByKey
                  .combine(Sum.int)
counts.toTextFile("hdfs://out/...", overwrite = true).persist(ScoobiConfiguration())
Tech Stack - File Format
• Avro
– Supports rich and complex data structures such as Maps, Unions
– Self-Describing data files enabling portability (Schema co-exists with data)
– Supports schema dynamicity using Generic Records
– Supports backward compatibility for data files w.r.t schema changes
• ORC (Optimized Row Columnar)
– A single file as the output of each task, which reduces the NameNode's load
– Metadata stored using Protocol Buffers, which allows addition and removal of fields
– Better performance of queries (bound the amount of memory needed for reading or writing)
– Light-weight indexes stored within the file
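For illustration, a minimal sketch of Avro's schema dynamicity via Generic Records (the two-field event schema here is hypothetical, not from the deck):

  import org.apache.avro.Schema
  import org.apache.avro.generic.{GenericData, GenericRecord}

  // Parse a schema known only at runtime, then build a record against it.
  val schema = new Schema.Parser().parse(
    """{"type":"record","name":"Event","fields":[
        {"name":"guid","type":"string"},
        {"name":"treatment","type":"int"}]}""")
  val rec: GenericRecord = new GenericData.Record(schema)
  rec.put("guid", "user1")
  rec.put("treatment", 2)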
Tech Stack - Hive
• Efficient Joins for large datasets.
• UDF for use cases like median and percentile calculations.
• Hive Optimizer Joins:
- The smaller table is loaded into memory as a hash table and the larger one is scanned
- Map joins are picked automatically by the optimizer
• Ad-hoc Analysis, Data Reconciliation use-cases and Testing.
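Conceptually, a map join behaves like the in-memory sketch below (plain Scala, illustrative only; Hive does the equivalent inside each map task, avoiding a shuffle for the join itself):

  // The small table becomes a hash map; the large table is streamed against it.
  val small = Map(1 -> "electronics", 2 -> "fashion")          // small dimension table
  val large = Seq((1, "item-a"), (2, "item-b"), (1, "item-c")) // large fact table
  val joined = large.flatMap { case (k, item) =>
    small.get(k).map(category => (item, category))             // hash lookup per row
  }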
Fun Facts of EP Processing
• We read more than 200 TB of data for processing daily.
• We run 350 M/R jobs daily.
• We perform more than 30 joins using M/R & Hive, including the ones with heavy data skew.
• We use 40 TB of YARN memory at peak time on a 170 TB Hadoop cluster.
• We can run 150+ concurrent experiments daily.
• Report generation takes around 18 hours.
Logical Architecture
(The original slide is a diagram.) EP Reporting Services move data through the stages Detail → Intermediate 1 → Intermediate 2 → Summary. Configuration: filters, data providers, processors, calculators, metric providers, and output (columns, metrics, dimensions). Framework components: reporting context, cache, util/helpers, command line, input/output conduit. Ancillary services: alerts, shell scripts, processed data store, tools, logging & monitoring.
Challenges & Learnings
• Joins
• Job Optimization
• Data Skew
Key Challenges
•Performance
– Job runtimes are subject to SLA & heavily tied to resources
•Data Skew (long-tail data distribution)
– May cause unrecoverable runtime failures
– Poor performance
•Joins, Combiner
•Job Resiliency
– Auto remediation
– Alerts and Monitoring
Solution to Key Challenge - Performance
– Tuned Hadoop job parameters; a few are listed below
• -Dmapreduce.input.fileinputformat.split.minsize and -Dmapreduce.input.fileinputformat.split.maxsize
– Job run times were reduced in the range of 9% to 35%
• -Dscoobi.mapreduce.reducers.bytesperreducer
– Adjusting this parameter helped optimize the number of reducers to use; job run times were reduced by up to 50% in some cases
• -Dscoobi.concurrentjobs
– Setting this parameter to true enables multiple steps of a Scoobi job to run concurrently
• -Dmapreduce.reduce.memory.mb
– Tuning this parameter helped relieve memory pressure
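As a sketch, the same knobs can also be set programmatically on the Hadoop Configuration rather than via -D flags (the values below are illustrative, not the tuned production settings; whether Scoobi reads its parameters from the Hadoop configuration this way is an assumption):

  import org.apache.hadoop.conf.Configuration

  val conf = new Configuration()
  // Larger splits -> fewer, bigger map tasks.
  conf.setLong("mapreduce.input.fileinputformat.split.minsize", 256L << 20)
  conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 512L << 20)
  // More reducer memory relieves pressure during heavy aggregations.
  conf.setInt("mapreduce.reduce.memory.mb", 4096)
  // Scoobi-specific parameter from the slide, passed the same way.
  conf.setBoolean("scoobi.concurrentjobs", true)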
Solution to Key Challenge - Performance
– Implement a data cache for objects
• Achieved a cache hit ratio of over 99% per job
• Runtime performance improved in the range of 18% to 39%, depending on the job
– Redesign/refactor jobs and job schedules
• Extracted logic from existing jobs into their own jobs
• Optimized the job workflow for better parallelism
– Dedicated Hadoop queue with more than 50 TB of YARN memory
• The shared Hadoop cluster resulted in long wait times; a dedicated queue solved the resource crunch
Joins
– Data skew in one or both datasets
 Scoobi block join divides the skewed data into blocks and joins the data one block at a time.
– Multiple joins in a process
 Rewrote a process that joins 11 datasets ranging from a few megabytes to 49 TB: it took 6+ hours in Scoobi and now takes 3 hours in Hive.
– Other join solutions
 Also looked into Hive's bucket join, but the cost of sorting and bucketing the datasets was higher than a regular join.
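The editor's notes at the end describe the block join as a salted join; a runnable toy version of that idea (plain Scala, in-memory, for illustration only — in MapReduce the join on the pseudo-key would be a shuffle):

  import scala.util.Random

  // Replicate the small side n times with the replica id in the key; salt the
  // skewed side's key with a random id in 0..n-1; join on the pseudo-key.
  val n = 4
  val small  = Seq(("item1", "meta-a"), ("item2", "meta-b"))
  val skewed = Seq(("item1", "click"), ("item1", "bid"), ("item2", "view"))
  val replicated = small.flatMap { case (k, v) => (0 until n).map(i => ((k, i), v)) }
  val salted = skewed.map { case (k, v) => ((k, Random.nextInt(n)), v) }
  val lookup = replicated.toMap
  val joined = salted.flatMap { case (key @ (k, _), v) =>
    lookup.get(key).map(meta => (k, (v, meta))) // strip the salt after joining
  }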
Combiner
To relieve reducer memory pressure and prevent OOM
Solution – Emit part-values of the complete operation for the same key using combiners
– Calculating the mean
• Mean = (X1 + X2 + … + Xn) / (1 + 1 + … + 1)
• The formula is composed of two parts, and the mapper emits two part-values, combining records for the same key.
• The reducer receives far fewer records after combining and applies the two parts from each mapper to the actual mean formula.
• The concept can be applied to other complex formulas, such as variance, as long as the formula can be decomposed into parts that are commutative and associative.
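A minimal sketch of the part-value idea: a (sum, count) pair merges commutatively and associatively, so combiners can aggregate it safely and the reducer divides once at the end (illustrative Scala, not the production code):

  // Each record contributes PartialMean(x, 1); combiners and reducers merge pairs.
  case class PartialMean(sum: Double, count: Long) {
    def merge(other: PartialMean): PartialMean =
      PartialMean(sum + other.sum, count + other.count)
    def mean: Double = sum / count
  }
  // e.g. PartialMean(3.0, 1).merge(PartialMean(5.0, 1)).mean == 4.0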
Job Resiliency
– Auto-remediation
• Auto-restart in case of job failure due to intermittent cluster issues
– Monitoring & alerting for Hadoop jobs
• Continuous monitoring, with an email alert generated when a long-running job or a failure is detected
– Monitoring & alerting for data quality
• Daily monitoring of data trends for key metrics, with an email alert on any anomaly or violation detected
– Recon scripts
• Checks and alerts set up for intermediate data
– Daily data backup
• Daily backup with distcp to a secondary cluster, with the ability to restore
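A sketch of the auto-restart idea, assuming a bounded retry wrapper around a job launch (runReportingJob is hypothetical; real remediation would distinguish transient failures and add backoff and alerting):

  def withRetries[T](attempts: Int)(job: => T): T =
    try job
    catch {
      case e: Exception if attempts > 1 =>
        withRetries(attempts - 1)(job) // rerun on intermittent cluster issues
    }
  // usage: withRetries(3) { runReportingJob() }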
Next - Evaluate Spark
Current Problems
- Data processing through MapReduce is slow for a complex DAG because data is persisted to disk at each step, and it is not designed for fast joins. Multiple stages chained together in a pipeline make the overall process very complex.
- Massive joins against very large datasets are slow.
- Expressing every piece of complicated business logic in Hadoop MapReduce is a problem.
Alternatives
- Apache Spark is expressive and has wide adoption, industry backing, and thriving community support.
- Apache Spark claims 10x to 100x speed improvements over traditional M/R jobs.
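For comparison with the Scoobi word count earlier, the same job in the 2015-era Spark RDD API (a sketch, not from the deck); intermediate results stay in memory instead of being persisted to disk at each step:

  import org.apache.spark.{SparkConf, SparkContext}

  val sc = new SparkContext(new SparkConf().setAppName("wordcount"))
  val counts = sc.textFile("hdfs://in/...")
    .flatMap(_.split(" "))
    .map(word => (word, 1))
    .reduceByKey(_ + _)   // shuffle once; no per-step disk persistence
  counts.saveAsTextFile("hdfs://out/...")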
Summary
• Hadoop is ideal for large-scale data processing and provides a highly scalable storage platform.
• The Hadoop ecosystem is still evolving, so expect issues with software that is still under development.
• Moving to Hadoop freed up huge capacity in the DW for deep-dive analysis.
• Huge cost reduction for businesses like ours with exploding data sets.
Q & A
Editor's Notes
• #19: Scoobi advantages compared to Java MR: Written in Scala, a functional programming language, making Scoobi well suited to writing MR code. Supports object-oriented designs (and legacy Java object data models). The MR framework code is completely abstracted to lower levels, leaving application developers to worry only about business logic. Typical dataset operations (map, flatMap, filter, groupBy, sort, orderBy, partition) are ported over to the MR paradigm. Large datasets are abstracted into a data type called DList (Distributed List); DLists represent delayed computations (a.k.a. the Scoobi plan), through which jobs are submitted as a series of "steps" representing granular MR jobs, so developers do not need to create workflows for individual jobs. Any MR operation can be executed on a DList, enabling more concise code than Java MR. Multiple similar Scala-based libraries exist, such as Scalding and Scrunch.
• #30: Scoobi block join, used where one of the datasets was heavily skewed. The join key was item_id, and one of the datasets had over a million records for the same key, which was causing the job to fail. Block join divides the skewed data into blocks and joins the data one block at a time: replicate the small (left) side n times, including the id of the replica in the key; on the right side, add a random integer from 0 to n-1 to the key; join using the pseudo-key and strip out the extra fields. Useful for skewed join keys and large datasets.
• #31: To relieve reducer memory pressure and prevent OOM, a combiner performs a map-local aggregation so reducers do not receive a huge number of input records. In Scoobi, a combiner takes the form of a function that may be invoked on a DList. A combiner represents operations that have the commutative and associative properties. Further, two records must be combined across all of the records' attributes to produce a combined record; the problem becomes more compounded in real-world cases where the rules of combining may not apply directly to every attribute of a record.
• #33: Current problems: Data processing through MapReduce is slow for a complex DAG, as data is persisted to disk at each step, and it is not designed for fast joins. Multiple stages chained together in a pipeline make the overall process very complex. Massive joins against very large datasets are slow. There is an overwhelming need to make data more interactive/responsive, and Hadoop is not built for it. Expressing every piece of complicated business logic in Hadoop MapReduce is a problem.