Productionizing Spark

and the Spark REST Job Server
Evan Chan
Distinguished Engineer
@TupleJump
Who am I
• Distinguished Engineer, Tuplejump
• @evanfchan
• http://github.com/velvia
• User and contributor to Spark since 0.9
• Co-creator and maintainer of Spark Job Server
2
TupleJump
✤ Tuplejump is a big data technology leader providing solutions and
development partnership.
✤ FiloDB - Spark-based analytics database for time series and event
data (github.com/tuplejump/FiloDB)
✤ Calliope - the first Spark-Cassandra integration
✤ Stargate - an open source Lucene indexer for Cassandra
✤ SnackFS - open source HDFS for Cassandra
3
TupleJump - Big Data Dev Partners 4
Deploying Spark
5
Choices, choices, choices
• YARN, Mesos, Standalone?
• With a distribution?
• What environment?
• How should I deploy?
• Hosted options?
• What about dependencies?
6
Basic Terminology
• The Spark documentation is really quite good.
7
What all the clusters have in common
• YARN, Mesos, and Standalone all support the following
features:
–Running the Spark driver app in cluster mode
–Restarts of the driver app upon failure
–UI to examine state of workers and apps
8
Spark Standalone Mode
• The easiest clustering mode to deploy**
–Use make-distribution.sh to package, copy to all nodes
–sbin/start-master.sh on master node, then start slaves
–Test with spark-shell
• HA Master through Zookeeper election
• Must dedicate whole cluster to Spark
• In latest survey, used by almost half of Spark users
9
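The standalone startup sequence above can be scripted; a minimal hedged sketch, assuming Spark 1.4+ (where start-slave.sh takes the master URL) and with host names and paths as placeholders:

# On a build machine: package a distribution and copy it to every node
./make-distribution.sh --tgz
scp spark-*-bin-*.tgz deploy@node1:/opt/spark/        # repeat per node; path is illustrative

# On the master node
sbin/start-master.sh

# On each worker node, pointing at the master
sbin/start-slave.sh spark://master-host:7077

# Smoke test from any node
bin/spark-shell --master spark://master-host:7077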
Apache Mesos
• Was started by Matei in 2007 before he worked on Spark!
• Can run your entire company on Mesos, not just big data
–Great support for micro services - Docker, Marathon
–Can run non-JVM workloads like MPI
• Commercial backing from Mesosphere
• Heavily used at Twitter and AirBNB
• The Mesosphere DCOS will revolutionize Spark et al deployment - “dcos package
install spark” !!
10
Mesos vs YARN
• Mesos is a two-level resource manager, with pluggable schedulers
–You can run YARN on Mesos, with YARN delegating resource offers
to Mesos (Project Myriad)
–You can run multiple schedulers within Mesos, and write your own
• If you’re already a Hadoop / Cloudera etc shop, YARN is easy choice
• If you’re starting out, go 100% Mesos
11
Mesos Coarse vs Fine-Grained
• Spark offers two modes to run Mesos Spark apps in (and you can
choose per driver app):
–coarse-grained: Spark allocates fixed number of workers for
duration of driver app
–fine-grained (default): Dynamic executor allocation per task,
but higher overhead per task
• Use coarse-grained if you run low-latency jobs
12
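A hedged sketch of choosing coarse-grained mode for a single driver app in Spark 1.x; the ZooKeeper address and core counts are illustrative:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("LowLatencyQueries")
  .setMaster("mesos://zk://zk1:2181,zk2:2181/mesos")   // placeholder Mesos master URL
  .set("spark.mesos.coarse", "true")                   // fixed executors for the app's lifetime
  .set("spark.cores.max", "8")                         // cap total cores claimed in coarse mode
val sc = new SparkContext(conf)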
What about Datastax DSE?
• Cassandra, Hadoop, Spark all bundled in one distribution, collocated
• Custom cluster manager and HA/failover logic for Spark Master,
using Cassandra gossip
• Can use CFS (Cassandra-based HDFS), SnackFS, or plain Cassandra
tables for storage
–or use Tachyon to cache, then no need to collocate (use Mesosphere
DCOS)
13
Hosted Apache Spark
• Spark on Amazon EMR - first class citizen now
–Direct S3 access!
• Google Compute Engine - “Click to Deploy” Hadoop+Spark
• Databricks Cloud
• Many more coming
• What you notice about the different environments:
–Everybody has their own way of starting: spark-submit vs dse spark vs aws
emr … vs dcos spark …
14
Mesosphere DCOS
• Automates deployment to AWS,
Google, etc.
• Common API and UI, better cost
and control, cloud
• Load balancing and routing,
Mesos for resource sharing
• dcos package install spark
15
Configuring Spark
16
Building Spark
• Make sure you build for the right Hadoop version
• eg mvn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package
• Make sure you build for the right Scala version - Spark supports
both 2.10 and 2.11
17
Jars schmars
• Dependency conflicts are the worst part of Spark dev
• Every distro has slightly different jars - eg CDH < 5.4 packaged a different version of
Akka
• Leave out Hive if you don’t need it
• Use the Spark UI “Environment” tab to check jars and how they got there
• spark-submit --jars / --packages forwards jars to every executor (unless it's an HDFS / HTTP path)
• spark-env.sh SPARK_CLASSPATH - include dep jars you've deployed to every node
18
Jars schmars
• You don’t need to package every dependency with your Spark application!
• spark-streaming is included in the distribution
• spark-streaming includes some Kafka jars already
• etc.
19
ClassPath Configuration Options
• spark.driver.userClassPathFirst, spark.executor.userClassPathFirst
• One way to solve dependency conflicts - make sure YOUR jars are loaded first, ahead of
Spark’s jars
• Client mode: use spark-submit options
• --driver-class-path, --driver-library-path
• Spark Classloader order of resolution
• Classes specified via --jars, --packages first (if above flag is set)
• Everything else in SPARK_CLASSPATH
20
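For example, a hedged client-mode submit that pushes your own jars ahead of Spark's; the class name and jar paths are placeholders:

spark-submit --class com.example.MyApp \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --driver-class-path /opt/libs/my-akka.jar \
  --jars /opt/libs/my-akka.jar \
  target/myapp-assembly.jar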
Some useful config options
21
spark.serializer: org.apache.spark.serializer.KryoSerializer
spark.default.parallelism: or pass # partitions for shuffle/reduce tasks as second arg
spark.scheduler.mode: FAIR - enable parallelism within apps (multi-tenant or low-latency apps like SQL server)
spark.shuffle.memoryFraction, spark.storage.memoryFraction: fraction of Java heap to allocate for shuffle and RDD caching, respectively, before spilling to disk
spark.cleaner.ttl: enables periodic cleanup of cached RDDs, good for long-lived jobs
spark.akka.frameSize: increase the default of 10 (MB) to send back very large results to the driver app (code smell)
spark.task.maxFailures: # of retries for task failure is this - 1
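A hedged sketch of setting these options programmatically; the values shown are illustrative, not recommendations. The same keys can equally go into conf/spark-defaults.conf.

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.default.parallelism", "200")
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.shuffle.memoryFraction", "0.3")
  .set("spark.storage.memoryFraction", "0.5")
  .set("spark.cleaner.ttl", "3600")      // seconds
  .set("spark.akka.frameSize", "64")     // MB
  .set("spark.task.maxFailures", "8")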
Control Spark SQL Shuffles
• By default, Spark SQL / DataFrames will use 200
partitions when doing any groupBy / distinct operations
• sqlContext.setConf("spark.sql.shuffle.partitions", "16")
22
Prevent temp files from filling disks
• (Spark Standalone mode only)
• spark.worker.cleanup.enabled = true
• spark.worker.cleanup.interval
• Configuring executor log file retention/rotation
spark.executor.logs.rolling.maxRetainedFiles = 90
spark.executor.logs.rolling.strategy = time
23
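A hedged sketch of where these settings typically live: the worker cleanup options are standalone daemon settings (e.g. via SPARK_WORKER_OPTS in spark-env.sh on each worker), while log rolling is a per-application setting; the intervals below are illustrative:

# conf/spark-env.sh on each standalone worker
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=1800"

# conf/spark-defaults.conf (or --conf) for the application
spark.executor.logs.rolling.strategy           time
spark.executor.logs.rolling.time.interval      daily
spark.executor.logs.rolling.maxRetainedFiles   90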
Tuning Spark GC
• Lots of cached RDDs = huge old gen GC cycles, stop-the-world GC
• Know which operations consume more memory (sorts, shuffles)
• Try the new G1GC … avoids whole-heap scans
• -XX:+UseG1GC
• https://databricks.com/blog/2015/05/28/tuning-java-garbage-collection-for-spark-applications.html
24
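A hedged example of enabling G1GC on executors and driver via spark-submit; the pause-time target and class name are illustrative:

spark-submit --class com.example.MyApp \
  --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:MaxGCPauseMillis=200" \
  --conf "spark.driver.extraJavaOptions=-XX:+UseG1GC" \
  target/myapp-assembly.jar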
Running Spark Applications
25
Run your apps in the cluster
• spark-submit: --deploy-mode cluster
• Spark Job Server: deploy SJS to the cluster
• Drivers and executors are very chatty - want to reduce latency and decrease
chance of networking timeouts
• Want to avoid running jobs on your local machine
26
Automatic Driver Restarts
• Standalone: --deploy-mode cluster --supervise
• YARN: --deploy-mode cluster
• Mesos: use Marathon to restart dead slaves
• Periodic checkpointing: important for recovering data
• RDD checkpointing helps reduce long RDD lineages
27
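Hedged examples of the restart options above; master URLs and class names are placeholders:

# Standalone: run the driver in the cluster and restart it if it dies
spark-submit --master spark://master-host:7077 --deploy-mode cluster --supervise \
  --class com.example.MyStreamingApp target/myapp-assembly.jar

# YARN: cluster mode lets YARN re-attempt a failed application master
spark-submit --master yarn --deploy-mode cluster \
  --class com.example.MyStreamingApp target/myapp-assembly.jar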
Speeding up application startup
• Spark-submit's --packages option is super convenient for downloading
dependencies, but avoid it in production
• Downloads tons of jars from Maven when driver starts up, then executors
copy all the jars from driver
• Deploy frequently used dependencies to worker nodes yourself
• For really fast Spark jobs, use the Spark Job Server and share a SparkContext
amongst jobs!
28
Spark(Context) Metrics
• Spark’s built in MetricsSystem has sources (Spark info, JVM, etc.) and sinks
(Graphite, etc.)
• Configure metrics.properties (template in spark conf/ dir) and use these
params to spark-submit
--files=/path/to/metrics.properties 
--conf spark.metrics.conf=metrics.properties
• See http://www.hammerlab.org/2015/02/27/monitoring-spark-with-graphite-and-grafana/
29
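A hedged metrics.properties sketch with a Graphite sink, adapted from the template in conf/; host, port, and period are placeholders:

*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds

driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource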
Application Metrics
• Missing Hadoop counters? Use Spark Accumulators
• https://gist.github.com/ibuenros/9b94736c2bad2f4b8e23
• Above registers accumulators as a source to Spark’s
MetricsSystem
30
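A hedged sketch of the accumulator-as-counter pattern (this is not the gist above verbatim; rawLines is a placeholder RDD[String]):

val recordsProcessed = sc.accumulator(0L, "records.processed")
val parseErrors      = sc.accumulator(0L, "parse.errors")

val parsed = rawLines.map { line =>
  recordsProcessed += 1L
  val fields = line.split(",")
  if (fields.length < 3) parseErrors += 1L
  fields
}
parsed.count()   // accumulator values are only reliable after an action has run
println(s"processed=${recordsProcessed.value} errors=${parseErrors.value}")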
Watch how RDDs are cached
• RDDs cached to disk could slow down computation
31
Are your jobs stuck?
• First check cluster resources - does a job have enough CPU/mem?
• Take a thread dump of executors:
32
The Worst Killer - Classpath
• Classpath / jar versioning issues may cause Spark to hang silently. Debug
using the Environment tab of the UI:
33
Spark Job Server
34
35
http://github.com/spark-jobserver/spark-jobserver
Open Source!!
Also find it on spark-packages.org
Spark Job Server - What
• REST Interface for your Spark jobs
• Streaming, SQL, extendable
• Job history and configuration logged to a database
• Enable interactive low-latency queries (SQL/Dataframes
works too) of cached RDDs and tables
36
Spark Job Server - Where
37
[Architecture diagram: Kafka feeds Spark Streaming; the Spark Job Server fronts Spark and the datastore, serving internal users and the Internet over HTTP/HTTPS.]
Spark Job Server - Why
• Spark as a service
• Share Spark across the Enterprise
• HTTPS and LDAP Authentication
• Enterprises - easy integration with other teams, any language
• Share in-memory RDDs across logical jobs
• Low-latency queries
38
Used in Production - Worldwide
39
Used in Production
• As of last month, officially included in Datastax Enterprise 4.8!
40
Active Community
• Large number of contributions from community
• HTTPS/LDAP contributed by team at KNIME
• Multiple committers
• Gitter IM channel, active Google group
41
Platform Independent
• Spark Standalone
• Mesos
• Yarn
• Docker
• Example: platform-independent LDAP auth, HTTPS, can be
used as a portal
42
Example Workflow
43
Creating a Job Server Project
• sbt assembly -> fat jar -> upload to job server
• "provided" is used. Don’t want SBT assembly to include
the whole job server jar.
• Java projects should be possible too
44
resolvers += "Job Server Bintray" at "https://dl.bintray.com/spark-
jobserver/maven"
libraryDependencies += "spark.jobserver" % "job-server-api" %
"0.5.0" % "provided"
• In your build.sbt, add this
// Imports assume spark-jobserver's job-server-api (0.5.x)
import scala.util.Try
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobInvalid, SparkJobValid, SparkJobValidation}

/**
 * A super-simple Spark job example that implements the SparkJob trait and
 * can be submitted to the job server.
 */
object WordCountExample extends SparkJob {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    Try(config.getString("input.string"))
      .map(x => SparkJobValid)
      .getOrElse(SparkJobInvalid("No input.string"))
  }

  override def runJob(sc: SparkContext, config: Config): Any = {
    val dd = sc.parallelize(config.getString("input.string").split(" ").toSeq)
    dd.map((_, 1)).reduceByKey(_ + _).collect().toMap
  }
}
Example Job Server Job
45
What’s Different?
• Job does not create Context, Job Server does
• Decide when I run the job: in own context, or in pre-created context
• Allows for very modular Spark development
• Break up a giant Spark app into multiple logical jobs
• Example:
• One job to load DataFrames tables
• One job to query them
• One job to run diagnostics and report debugging information
46
Submitting and Running a Job
47
✦ curl --data-binary @../target/mydemo.jar localhost:8090/jars/demo
OK
✦ curl -d "input.string = A lazy dog jumped mean dog" \
  'localhost:8090/jobs?appName=demo&classPath=WordCountExample&sync=true'
{
"status": "OK",
"RESULT": {
"lazy": 1,
"jumped": 1,
"A": 1,
"mean": 1,
"dog": 2
}
}
Retrieve Job Statuses
48
~/s/jobserver (evan-working-1 ↩=) curl 'localhost:8090/jobs?limit=2'
[{
"duration": "77.744 secs",
"classPath": "ooyala.cnd.CreateMaterializedView",
"startTime": "2013-11-26T20:13:09.071Z",
"context": "8b7059dd-ooyala.cnd.CreateMaterializedView",
"status": "FINISHED",
"jobId": "9982f961-aaaa-4195-88c2-962eae9b08d9"
}, {
"duration": "58.067 secs",
"classPath": "ooyala.cnd.CreateMaterializedView",
"startTime": "2013-11-26T20:22:03.257Z",
"context": "d0a5ebdc-ooyala.cnd.CreateMaterializedView",
"status": "FINISHED",
"jobId": "e9317383-6a67-41c4-8291-9c140b6d8459"
}]
Use Case: Fast Query Jobs
49
Spark as a Query Engine
• Goal: Spark jobs that run in under a second and answer queries on shared RDD data
• Query params passed in as job config
• Need to minimize context creation overhead
–Thus many jobs sharing the same SparkContext
• On-heap RDD caching means no serialization loss
• Need to consider concurrent jobs (fair scheduling)
50
51
[Architecture diagram: the REST Job Server creates a query context (new SparkContext) on the Spark executors; a Load Data job pulls data from Cassandra into a cached RDD, and subsequent Query Jobs answer queries against that RDD and return results over REST.]
Sharing Data Between Jobs
• RDD Caching
–Benefit: no need to serialize data. Especially useful for indexes etc.
–Job server provides a NamedRdds trait for thread-safe CRUD of cached
RDDs by name
• (Compare to SparkContext’s API which uses an integer ID and is not
thread safe)
• For example, at Ooyala a number of fields are multiplexed into the RDD
name: timestamp:customerID:granularity
52
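A hedged sketch of that pattern using the job server's NamedRddSupport mixin (method names follow the 0.5.x API; the RDD name and data are placeholders, and imports are as in the WordCountExample above plus spark.jobserver.NamedRddSupport):

object CacheByNameJob extends SparkJob with NamedRddSupport {
  def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  def runJob(sc: SparkContext, config: Config): Any = {
    val rddName = "1447050000:customer42:hourly"          // timestamp:customerID:granularity
    val byUser = namedRdds.getOrElseCreate(rddName, {
      // Placeholder load; in practice this would read from Cassandra, Kafka, etc.
      sc.parallelize(1 to 1000).map(i => (s"user${i % 10}", 1)).reduceByKey(_ + _).cache()
    })
    byUser.count()
  }
}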
Data Concurrency
• With fair scheduler, multiple Job Server jobs can run simultaneously on one
SparkContext
• Managing multiple updates to RDDs
–Cache keeps track of which RDDs are being updated
–Example: thread A spark job creates RDD “A” at t0
–thread B fetches RDD “A” at t1 > t0
–Both threads A and B, using NamedRdds, will get the RDD at time t2 when
thread A finishes creating the RDD “A”
53
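A hedged sketch of wiring up the fair scheduler so concurrent jobs share one SparkContext; the allocation file path and pool name are illustrative:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.scheduler.allocation.file", "/opt/spark/conf/fairscheduler.xml")
val sc = new SparkContext(conf)

// Each job (running on its own thread) can pick a pool before submitting work:
sc.setLocalProperty("spark.scheduler.pool", "low-latency-queries")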
Spark SQL/Hive Query Server
✤ Start a context based on SQLContext:
curl -d "" '127.0.0.1:8090/contexts/sql-context?context-factory=spark.jobserver.context.SQLContextFactory'
✤ Run a job for loading and caching tables in DataFrames:
curl -d "" '127.0.0.1:8090/jobs?appName=test&classPath=spark.jobserver.SqlLoaderJob&context=sql-context&sync=true'
✤ Supply a query to a Query Job. All queries are logged in the database by Spark Job Server:
curl -d 'sql="SELECT count(*) FROM footable"' '127.0.0.1:8090/jobs?appName=test&classPath=spark.jobserver.SqlQueryJob&context=sql-context&sync=true'
54
Example: Combining Streaming And Spark SQL
55
[Diagram: a SparkSQLStreamingContext inside the Spark Job Server hosts a Kafka Streaming Job that populates DataFrames, which a SQL Query Job then serves in response to SQL queries.]
SparkSQLStreamingJob
56
trait SparkSqlStreamingJob extends SparkJobBase {
type C = SQLStreamingContext
}
class SQLStreamingContext(c: SparkContext) {
val streamingContext = new StreamingContext(c, ...)
val sqlContext = new SQLContext(c)
}
Now you have access to both the StreamingContext and the SQLContext, and they can be shared across jobs!
SparkSQLStreamingContext
57
To start this context:
curl -d "" "localhost:8090/contexts/stream_sqltest?context-factory=com.abc.SQLStreamingContextFactory"
class SQLStreamingContextFactory extends SparkContextFactory {
import SparkJobUtils._
type C = SQLStreamingContext with ContextLike
def makeContext(config: Config, contextConfig: Config, contextName: String): C = {
val batchInterval = contextConfig.getInt("batch_interval")
val conf = configToSparkConf(config, contextConfig, contextName)
new SQLStreamingContext(new SparkContext(conf), Seconds(batchInterval)) with ContextLike {
def sparkContext: SparkContext = this.streamingContext.sparkContext
def isValidJob(job: SparkJobBase): Boolean = job.isInstanceOf[SparkSqlStreamingJob]
// Stop the streaming context, but not the SparkContext so that it can be re-used
// to create another streaming context if required:
def stop() { this.streamingContext.stop(false) }
}
}
}
Future Work
58
Future Plans
• PR: Forked JVMs for supporting many concurrent
contexts
• True HA operation
• Swagger API documentation
59
HA for Job Server
60
[Diagram: Job Server 1 and Job Server 2 sit behind a load balancer and gossip with each other; both share a common database, the Active Job Context runs on one server, and clients issue GET /jobs/<id> through the load balancer.]
HA and Hot Failover for Jobs
[Diagram: Job Server 1 runs the Active Job Context and Job Server 2 holds a Standby Job Context; the servers gossip, and the active context checkpoints to HDFS so the standby can take over on failure.]
61
Thanks for your contributions!
• All of these were community contributed:
–HTTPS and Auth
–saving and retrieving job configuration
–forked JVM per context
• Your contributions are very welcome on Github!
62
63
And Everybody is Hiring!!
Thank you!
Why We Needed a Job Server
• Our vision for Spark is as a multi-team big data service
• What gets repeated by every team:
• Bastion box for running Hadoop/Spark jobs
• Deploys and process monitoring
• Tracking and serializing job status, progress, and job results
• Job validation
• No easy way to kill jobs
• Polyglot technology stack - Ruby scripts run jobs, Go services
Architecture
Completely Async Design
✤ http://spray.io - probably the fastest JVM HTTP microframework
✤ Akka Actor based, non blocking
✤ Futures used to manage individual jobs. (Note that
Spark is using Scala futures to manage job stages now)
✤ Single JVM for now, but easy to distribute later via
remote Actors / Akka Cluster
Async Actor Flow
[Diagram: the Spray web API hands requests to a Request actor, which routes through a Local Supervisor to the Job Manager; each job runs as a Future (Job 1 Future, Job 2 Future) and reports to the Job Status and Job Result actors.]
Message flow fully documented
Production Usage
Metadata Store
✤ JarInfo, JobInfo, ConfigInfo
✤ JobSqlDAO stores metadata in a SQL database via a JDBC interface.
✤ Easily configured by spark.sqldao.jdbc.url
✤ jdbc:mysql://dbserver:3306/jobserverdb
✤ Multiple Job Servers can share the same MySQL.
✤ Jars uploaded once but accessible by all servers.
✤ The default will be JobSqlDAO and H2.
✤ Single H2 DB file. Serialization and deserialization are handled by H2.
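A hedged sketch of the relevant job server configuration (key names follow this slide and the 0.5.x conventions; verify against your version's config template, and the JDBC URL is the slide's example):

spark {
  jobserver {
    jobdao = spark.jobserver.io.JobSqlDAO   # store jar/job/config metadata in SQL
  }
  sqldao {
    jdbc {
      url = "jdbc:mysql://dbserver:3306/jobserverdb"
    }
  }
}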
Deployment and Metrics
✤ spark-jobserver repo comes with a full suite of tests and
deploy scripts:
✤ server_deploy.sh for regular server pushes
✤ server_package.sh for Mesos and Chronos .tar.gz
✤ /metricz route for codahale-metrics monitoring
✤ /healthz route for health check
Challenges and Lessons
• Spark is based around contexts - we need a Job Server oriented around logical
jobs
• Running multiple SparkContexts in the same process
• Better long term solution is forked JVM per SparkContext
• Workaround: spark.driver.allowMultipleContexts = true
• Dynamic jar and class loading is tricky
• Manage threads carefully - each context uses lots of threads
