A look under the hood at Apache
Spark's API and engine evolutions
Reynold Xin @rxin
2017-02-08, Amsterdam Meetup
About Databricks
Founded by creators of Spark
Cloud data platform
- Spark
- Interactive analysis
- Cluster management
- Production pipelines
- Data governance & security
Databricks Amsterdam R&D Center
Started in January
Hiring distributed systems &
database engineers!
Email me: rxin@databricks.com
Spark stack diagram
SQL | Streaming | MLlib | GraphX
Spark Core (RDD)
Spark stack diagram (a different take)
Frontend (user-facing APIs)
Backend (execution)
Spark stack diagram (a different take)
Frontend (RDD, DataFrame, ML pipelines, …)
Backend (scheduler, shuffle, operators, …)
Today’s Talk
Some archaeology
- IMS, relational databases
- MapReduce
- data frames
Last 6 years of Spark evolution
Databases
IBM IMS hierarchical database (1966)
Image from https://stratechery.com/2016/oracles-cloudy-future/
Hierarchical Database
- Improvement over file system: query language & catalog
- Lack of flexibility
- Difficult to query items in different parts of the hierarchy
- Relationships are pre-determined and difficult to change
“Future users of large data banks must be protected from having to
know how the data is organized in the machine. …
most application programs should remain unaffected when the
internal representation of data is changed and even when some
aspects of the external representation are changed.”
-- E. F. Codd, “A Relational Model of Data for Large Shared Data Banks” (1970)
Era of relational databases (late 60s)
Two “new” important ideas
Physical Data Independence: The ability to change the physical data
layout without having to change the logical schema.
Declarative Query Language: Programmer specifies “what” rather than
“how”.
Why?
Business applications outlive the environments they were created in:
- New requirements might surface
- Underlying hardware might change
- Both may require physical layout changes (indexing, a different storage medium, etc.)
Enabled a tremendous amount of innovation:
- Indexes, compression, column stores, etc.
Relational Database Pros vs Cons
Pros:
- Declarative and data independent
- SQL is the universal interface everybody knows
Cons:
- Difficult to compose & build complex applications
- Too opinionated and inflexible
- Require data modeling before putting any data in
- SQL is the only programming language
Big Data, MapReduce, Hadoop
Challenges Google faced
Data size growing (volume & velocity)
- Processing has to scale out over large clusters
Complexity of analysis increasing (variety)
- Massive ETL (web crawling)
- Machine learning, graph processing
The Big Data Problem
Semi-/Un-structured data doesn’t fit well with databases
Single machine can no longer process or even store all the data!
Only solution is to distribute general storage & processing over
clusters.
Google Datacenter
How do we program this thing?
Data-Parallel Models
Restrict the programming interface so that the system can do more
automatically
“Here’s an operation, run it on all of the data”
- I don’t care where it runs (you schedule that)
- In fact, feel free to run it twice on different nodes
- Similar to “declarative programming” in databases
MapReduce Pros vs Cons
Pros:
- Massively parallel
- Very flexible programming model & schema-on-read
Cons:
- Extremely verbose & difficult to learn
- Most real applications require multiple MR steps
  - 21 MR steps -> 21 mapper and reducer classes
  - Lots of boilerplate code per step
- Bad performance
R, Python, data frame
Data frames in R / Python
> head(filter(df, df$waiting < 50)) # an example in R
## eruptions waiting
##1 1.750 47
##2 1.750 47
##3 1.867 48
Developed by the stats community; concise syntax for ad-hoc analysis
Procedural (not declarative)
R data frames Pros and Cons
Pros:
- Easy to learn
- Pretty fast on a laptop (or one server)
Cons:
- No parallelism & doesn’t work well on big data
- Lacks sophisticated query optimization
“Are you going to talk
about Spark at all
tonight!?”
Which one is better?
Databases, R, MapReduce?
Declarative, procedural, data independence?
Spark’s initial focus: a better MapReduce
Language-integrated API (RDD): similar to Scala’s collection library
using functional programming; incredibly powerful and composable
lines = spark.textFile("hdfs://...") // RDD[String]
points = lines.map(line => parsePoint(line)) // RDD[Point]
points.filter(p => p.x > 100).count()
Better performance: through a more general DAG abstraction, faster
scheduling, and in-memory caching
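Caching is a one-line addition to the snippet above; a minimal sketch (assuming the hypothetical Point also has a y field):
points.cache() // keep the parsed RDD in memory after the first use
points.filter(p => p.x > 100).count() // first action materializes the cache
points.filter(p => p.y < 10).count() // second action reads from memory, skipping the re-parse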
Programmability
WordCount in 50+ lines of Java MR
WordCount in 3 lines of Spark
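The canonical Spark WordCount, in roughly those three lines (a sketch reusing the spark handle from the earlier snippet; paths are placeholders):
val counts = spark.textFile("hdfs://input")
  .flatMap(line => line.split(" ")) // split each line into words
  .map(word => (word, 1)) // pair each word with a count of 1
  .reduceByKey(_ + _) // sum the counts per word
counts.saveAsTextFile("hdfs://output")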
Challenge with Functional API
Looks high-level, but hides many semantics of computation
• Functions are arbitrary blocks of Java bytecode
• Data stored is arbitrary Java objects
Users can mix APIs in suboptimal ways
Which Operator Causes Most Tickets?
map, filter, groupBy, sort, union, join, leftOuterJoin, rightOuterJoin,
reduce, count, fold, reduceByKey, cogroup, cross, zip, sample, take,
first, partitionBy, mapWith, pipe, save, ...
Answer: groupByKey
Example Problem
pairs = data.map(word => (word, 1))
groups = pairs.groupByKey()
groups.map((k, vs) => (k, vs.sum))
Physical API: materializes all groups as Seq[Int] objects, then
promptly aggregates them.
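Expressing the same logical intent directly avoids the problem; a minimal sketch:
pairs = data.map(word => (word, 1))
counts = pairs.reduceByKey(_ + _) // combines per key map-side before the shuffle, so the per-key Seq[Int] is never materialized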
Challenge: Data Representation
Java objects are often many times larger than their underlying fields
class User(name: String, friends: Array[Int])
new User("Bobby", Array(1, 2))
[Diagram: JVM memory layout of this User object: a header plus pointers
to a String (with its own header, length field, and char[] holding
"Bobby") and an int[] holding the friends; a few bytes of real data
carry several object headers and pointers of overhead.]
Recap: two primary issues
1. Many APIs specify the “physical” behavior rather than the “logical”
intent, i.e. they are not declarative enough.
2. Closures (user-defined functions and types) are opaque to the
engine, and as a result there is little room for the engine to optimize.
Sort Benchmark
Originally sponsored by Jim Gray in 1987 to measure advancements in
software and hardware
Participants often used purpose-built hardware/software to compete
• Large companies: IBM, Microsoft, Yahoo, …
• Academia: UC Berkeley, UCSD, MIT, …
Sort Benchmark
• Past winners: Microsoft, Yahoo, Samsung, UCSD, …
1MB -> 100MB -> 1TB (1998) -> 100TB (2009)
Winning Attempt
Built on the low-level Spark API:
- Put all data in off-heap memory using sun.misc.Unsafe
- Use tight, low-level while loops rather than iterators
~3,000 lines of low-level code on Spark, written by Reynold
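The off-heap pattern in miniature; a hedged sketch of the technique, not the actual record-setting code:
import sun.misc.Unsafe

// Unsafe is not public API; grab the singleton reflectively.
val f = classOf[Unsafe].getDeclaredField("theUnsafe")
f.setAccessible(true)
val unsafe = f.get(null).asInstanceOf[Unsafe]

val size = 1024L * 1024L // 1 MB outside the JVM heap
val addr = unsafe.allocateMemory(size) // raw address: no object headers, no GC
unsafe.putLong(addr, 42L) // write directly to memory
val x = unsafe.getLong(addr) // read it back
unsafe.freeMemory(addr) // manual cleanup; forgetting this leaks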
On-Disk Sort Record: Time to sort 100TB
2013 Record (Hadoop): 2100 machines, 72 minutes
2014 Record (Spark): 207 machines, 23 minutes
Spark also sorted 1PB in 4 hours
Source: Daytona GraySort benchmark, sortbenchmark.org
“How do we enable the average user to win a world record, using a few
lines of code?”
Goals of the last two years’ API evolution
1. Simpler APIs bridging the gap between big data engineering and
data science.
2. Higher level, declarative APIs that are future proof (engine can
optimize programs automatically).
Taking the best ideas from databases, big data, and data science
Structured APIs:
DataFrames + Spark SQL
DataFrames and Spark SQL
Efficient library for structured data (data with a known schema)
• Two interfaces: SQL for analysts + apps, DataFrames for programmers
Optimized computation and storage, similar to RDBMS
SIGMOD 2015
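Both interfaces meet at the same logical plan; a minimal sketch (the events table and its columns are illustrative):
events.createOrReplaceTempView("events") // expose the DataFrame to SQL
val viaSQL = spark.sql("SELECT loc, avg(duration) FROM events GROUP BY loc")
val viaDF = events.groupBy("loc").avg("duration") // same plan, same optimizations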
Execution Steps
[Diagram: SQL and DataFrames are parsed into a Logical Plan, resolved
against the Catalog; the Optimizer rewrites it into a Physical Plan;
the Code Generator emits code that runs over RDDs, reading through the
Data Source API.]
DataFrame API
DataFrames hold rows with a known schema and offer relational
operations on them through a DSL
val users = spark.sql("select * from users")
val massUsers = users(users("country") === "NL")
massUsers.count()
massUsers.groupBy("name").avg("age")
Note: users("country") === "NL" builds an expression AST, not a boolean.
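Since DataFrame operations are lazy, the plans Catalyst produces can be inspected before anything runs; a sketch:
massUsers.explain(true) // prints the parsed, analyzed, optimized, and physical plans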
Spark RDD Execution
[Diagram: the Java/Scala frontend runs on a JVM backend; the Python
frontend needs its own Python backend; user-defined functions are
opaque closures to both.]
Spark DataFrame Execution
[Diagram: the DataFrame frontend builds a Logical Plan, an intermediate
representation for the computation, which the Catalyst optimizer
compiles into physical execution.]
Spark DataFrame Execution (across languages)
[Diagram: Python, Java/Scala, and R DataFrames are simple wrappers that
create the same Logical Plan, the shared intermediate representation,
which the Catalyst optimizer compiles into physical execution.]
Structured API Example
DataFrame API:
events = spark.read.json("/logs")
stats = events.join(users)
  .groupBy("loc", "status")
  .avg("duration")
errors = stats.where(stats.status == "ERR")
Optimized Plan: SCAN logs + SCAN users -> JOIN -> AGG -> FILTER
Specialized Code:
while(logs.hasNext) {
  e = logs.next
  if(e.status == "ERR") {
    u = users.get(e.uid)
    key = (u.loc, e.status)
    sum(key) += e.duration
    count(key) += 1
  }
}
...
Benefit of Logical Plan: Simpler Frontend
Python: ~2,000 lines of code (built over a weekend)
R: ~1,000 lines of code
i.e. much easier to add new language bindings (Julia, Clojure, …)
Performance
[Bar chart: runtime for an example aggregation workload on the RDD API
(secs, 0–10 scale); Python is far slower than Java/Scala.]
Benefit of Logical Plan: Performance Parity Across Languages
[Bar chart: runtime for the same aggregation workload (secs, 0–10
scale); with the DataFrame API, Java/Scala, Python, R, and SQL all
perform alike, and all beat the RDD versions.]
What are Spark’s structured APIs?
Combination of:
- data frame from R as the “interface” – easy to learn
- declarativity & data independence from databases -- easy to optimize &
future-proof
- flexibility & parallelism from MapReduce -- massively scalable & flexible
Future possibilities
Spark as a fast, multi-core data collection library
Spark as a performant streaming engine
Spark as a GPU computation framework
All using the same API
Unified API, One Engine, Automatically Optimized
[Diagram: language frontends (Python, Java/Scala, R, SQL, …) all build
DataFrames on a shared Logical Plan; the Tungsten backend targets the
JVM, LLVM, SIMD, GPUs, …]
Recap
We learn from previous-generation systems to understand what works and
what can be improved on, and evolve Spark accordingly
Latest APIs take the best ideas out of earlier systems
- data frame from R as the “interface” – easy to learn
- declarativity & data independence from databases -- easy to optimize &
future-proof
- flexibility & parallelism from MapReduce -- massively scalable & flexible
Dank je wel (Thank you!)
@rxin