MILAN - 08TH OF MAY - 2015
PARTNERS
Scala in increasingly demanding
environments
Stefano Rocco – Roberto Bentivoglio
DATABIZ
Agenda
Introduction
Command Query Responsibility Segregation
Event Sourcing
Akka persistence
Apache Spark
Real-time “bidding”
Live demo (hopefully)
FAQ
1. Introduction
The picture
Highly demanding environments
- Data is increasing dramatically
- Applications are needed faster than ever
- Customers are more demanding
- Customers are becoming more sophisticated
- Services are becoming more sophisticated and complex
- Performance & quality are becoming a must
- Rate of business change is ever increasing
- And more…
Reactive Manifesto
Introduction – The way we see it
[Diagram: the Reactive Manifesto traits: Responsive, Elastic, Resilient, Message Driven]
We need to embrace change!
Introduction – The world is changing…
Introduction - Real Time “Bidding”
High level architecture
[Architecture diagram: Input → Akka Persistence (journaling / store to Cassandra) → publish to Kafka → Spark (batch training, real-time scoring and prediction) → dispatch → Output / Action]
2. Command Query
Responsibility Segregation
Stereotypical multi-tier architecture + CRUD
CQRS
[Diagram: Client Systems → Presentation Tier → Business Logic Tier → Data Tier (RDBMS), plus an Integration Tier towards External Systems; DTOs/VOs are passed between tiers]
Stereotypical multi-tier architecture + CRUD
CQRS
- Pros
- Simplicity
- Tooling
- Cons
- Difficult to scale (RDBMS is usually the bottleneck)
- Domain-Driven Design not applicable (when using CRUD)
Think different!
CQRS
- Can we adopt a different architectural model that does not rely heavily on:
- CRUD
- RDBMS transactions
- J2EE/Spring technologies stack
Command and Query Responsibility Segregation
Originated with Bertrand Meyer’s Command and Query Separation Principle
“It states that every method should either be a command that performs an action, or a query that returns
data to the caller, but not both. In other words, asking a question should not change the answer.
More formally, methods should return a value only if they are referentially transparent and hence
possess no side effects” (Wikipedia)
CQRS
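As a rough Scala sketch of Meyer's principle (the Account class and its methods are illustrative, not from the talk): a command mutates state and returns nothing, a query returns a value and changes nothing.

// Hypothetical Account class illustrating Command/Query Separation
class Account {
  private var balance: BigDecimal = 0

  // Command: performs an action, returns nothing
  def deposit(amount: BigDecimal): Unit = balance += amount

  // Query: returns data, has no side effects
  // ("asking a question should not change the answer")
  def currentBalance: BigDecimal = balance
}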
Command and Query Responsibility Segregation (Greg Young)
CQRS
Available Services
- The service has been split into:
- Command → Write side service
- Query → Read side service
CQRS
Command side: “Change status” → “Status changed”
Query side: “Get status” → “Status retrieved”
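A minimal Scala sketch of the split, with hypothetical message types (the names mirror the slide above but are not part of the original deck): commands go to the write-side service, queries to the read-side service.

// Hypothetical message protocol for the two sides of the service
case class ChangeStatus(id: String, newStatus: String)   // command, handled by the write side
case class StatusChanged(id: String, status: String)     // event emitted by the write side
case class GetStatus(id: String)                         // query, handled by the read side
case class StatusRetrieved(id: String, status: String)   // eventually consistent answer

trait CommandService { def handle(cmd: ChangeStatus): StatusChanged }
trait QueryService   { def handle(query: GetStatus): StatusRetrieved }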
Main architectural properties
- Consistency
- Command → consistent by definition
- Query → eventually consistent
- Data Storage
- Command → normalized form
- Query → denormalized form
- Scalability
- Command → low transaction rate
- Query → high transaction rate
CQRS
3. Event Sourcing
Storing Events…
Event Sourcing
Systems today usually rely on
- Storing of current state
- Usage of RDBMS as storage solution
Architectural choices are often “RDBMS centric”
Many systems need to store all the events that occurred, rather than only the latest state
Commands vs Events
Event Sourcing
- Commands
- Ask to perform an operation (imperative tense)
- Can be rejected
- Events
- Something happened in the past (past tense)
- Cannot be undone
Command received → command validation → event persisted → state mutation
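A small Scala sketch of the distinction, using hypothetical bank-account messages: the command is phrased imperatively and may be rejected during validation, the event is a past-tense fact once persisted.

object Protocol {
  // Commands: imperative requests that may be rejected
  case class Withdraw(amount: BigDecimal)

  // Events: past-tense facts that cannot be undone
  case class Withdrawn(amount: BigDecimal)

  // Command received -> validation -> event persisted -> state mutation
  def validate(cmd: Withdraw, balance: BigDecimal): Either[String, Withdrawn] =
    if (cmd.amount <= balance) Right(Withdrawn(cmd.amount))
    else Left("Insufficient funds: command rejected")
}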
Command and Event sourcing
Event Sourcing
An informal and short definition...
Append every command (or event) received (or generated) to a journal, instead of storing only the current state of the application!
CRUD vs Event sourcing
Event Sourcing
Event stream: Account created → Deposited 100 EUR → Withdrawn 40 EUR → Deposited 200 EUR
- CRUD
- The account table keeps the current available amount (260)
- Occurred events are stored in a separate table
- Event Sourcing
- The current state is kept in memory and rebuilt by replaying all events
- 100 – 40 + 200 => 260
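A short Scala sketch of the replay: folding over the journaled events reproduces the 100 – 40 + 200 example above.

object AccountReplay extends App {
  sealed trait AccountEvent
  case object AccountCreated extends AccountEvent
  case class Deposited(amount: Int) extends AccountEvent
  case class Withdrawn(amount: Int) extends AccountEvent

  // The append-only journal for the example above
  val journal: List[AccountEvent] =
    List(AccountCreated, Deposited(100), Withdrawn(40), Deposited(200))

  // Replay the journal to derive the current state: 100 - 40 + 200 = 260
  val balance = journal.foldLeft(0) {
    case (acc, Deposited(a)) => acc + a
    case (acc, Withdrawn(a)) => acc - a
    case (acc, _)            => acc
  }

  println(balance) // 260
}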
Main properties
- There is no delete
- Performance and Scalability
- “Append only” models are easier to scale
- Horizontal Partitioning (Sharding)
- Rolling Snapshots
- No Impedance Mismatch
- Event Log can bring great business value
Event Sourcing
4. Akka persistence
Introduction
We can think about it as
AKKA PERSISTENCE = CQRS + EVENT SOURCING
Akka Persistence
Main properties
- Akka persistence enables stateful actors to persist their internal state
- Recover state after
- Actor start
- Actor restart
- JVM crash
- By supervisor
- Cluster migration
Akka Persistence
Main properties
- Changes are appended to storage
- Nothing is mutated
- High transaction rates
- Efficient replication
- Stateful actors are recovered by replaying stored changes
- From the beginning or from a snapshot
- Also provides P2P communication with at-least-once message delivery semantics
Akka Persistence
Components
- PersistentActor → persistent stateful actor
- Command or event sourced actor
- Persist commands/events to a journal
- PersistentView → Receives journaled messages written by another persistent actor
- AtLeastOnceDelivery → at-least-once message delivery, even if the sender or receiver JVM crashes
- Journal → stores the sequence of messages sent to a persistent actor
- Snapshot store → used to optimize recovery times
Akka Persistence
Code example
import akka.persistence.PersistentActor

class BookActor extends PersistentActor {
  override val persistenceId: String = "book-persistence"

  // Journaled messages are replayed here during recovery
  override def receiveRecover: Receive = {
    case _ => // RECOVER AFTER A CRASH HERE...
  }

  // Incoming commands are handled here
  override def receiveCommand: Receive = {
    case _ => // VALIDATE COMMANDS AND PERSIST EVENTS HERE...
  }
}

// Receive is Akka's message-handler type:
// type Receive = PartialFunction[Any, Unit]
Akka Persistence
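A slightly fuller sketch of the same idea (AddBook, BookAdded and the count field are hypothetical names, not part of the original example): the command is validated in receiveCommand, the resulting event is persisted and then applied to in-memory state, and the same state mutation is reused when events are replayed during recovery.

import akka.persistence.PersistentActor

// Hypothetical command and event, for illustration only
case class AddBook(title: String)
case class BookAdded(title: String)

class BookShelfActor extends PersistentActor {
  override val persistenceId: String = "bookshelf-persistence"

  private var count = 0 // in-memory state

  private def update(evt: BookAdded): Unit = count += 1

  // Recovery: replay journaled events through the same state mutation
  override def receiveRecover: Receive = {
    case evt: BookAdded => update(evt)
  }

  // Commands: validate, persist the resulting event, then mutate state
  override def receiveCommand: Receive = {
    case AddBook(title) if title.nonEmpty =>
      persist(BookAdded(title)) { evt =>
        update(evt)
        sender() ! evt // acknowledge with the persisted event
      }
    case AddBook(_) =>
      sender() ! "Command rejected: empty title"
  }
}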
5. Apache Spark
Apache Spark is a cluster computing platform designed to be fast and general-purpose
Spark SQL (structured data) | Spark Streaming (real time) | MLlib (machine learning) | GraphX (graph processing)
Spark Core
Standalone Scheduler | YARN | Mesos
Apache Spark
The Stack
Apache Spark
The Stack
- Spark SQL: allows querying data via SQL as well as the Apache Hive variant of SQL (HQL), and supports
many sources of data, including Hive tables, Parquet and JSON
- Spark Streaming: component that enables processing of live data streams in an elegant, fault-tolerant,
scalable and fast way
- MLlib: library containing common machine learning (ML) functionality, including algorithms such as
classification, regression, clustering and collaborative filtering, designed to scale out across a cluster
- GraphX: library for manipulating graphs and performing graph-parallel computation
- Cluster Managers: Spark is designed to efficiently scale up from one to many thousands of compute
nodes. It can run over a variety of cluster managers, including Hadoop YARN and Apache Mesos; Spark
also ships with a simple cluster manager of its own, the Standalone Scheduler
Apache Spark
Core Concepts
[Diagram: a Driver Program holding a SparkContext coordinates several Worker Nodes, each running an Executor that executes Tasks]
Apache Spark
Core Concepts
- Every Spark application consists of a driver program that launches various parallel operations on
the cluster. The driver program contains your application’s main function and defines distributed
datasets on the cluster, then applies operations to them
- Driver programs access Spark through the SparkContext object, which represents a connection to
a computing cluster
- The SparkContext can be used to build RDDs (Resilient Distributed Datasets) on which you can
run a series of operations
- To run these operations, driver programs typically manage a number of worker processes called executors
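A minimal driver-program sketch (the object name and app name are illustrative): the driver creates a SparkContext, and the operations it defines are distributed to the executors.

import org.apache.spark.{SparkConf, SparkContext}

object DemoDriver {
  def main(args: Array[String]): Unit = {
    // The driver connects to the cluster through a SparkContext
    val conf = new SparkConf().setMaster("local[*]").setAppName("demo-driver")
    val sc = new SparkContext(conf)

    // Operations defined here are executed in parallel on the executors
    val total = sc.parallelize(1 to 1000).sum()
    println(total)

    sc.stop()
  }
}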
Apache Spark
RDD (Resilient Distributed Dataset)
It is an immutable distributed collection of data, which is partitioned across
machines in a cluster.
It facilitates two types of operations: transformation and action
- Resilient: it can be recreated when data in memory is lost
- Distributed: stored in memory across the cluster
- Dataset: data that comes from a file or is created programmatically
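A quick sketch of the two ways to create an RDD and the two types of operations (it assumes an existing SparkContext named sc; the file path is hypothetical).

// Assuming an existing SparkContext `sc`; the file path is hypothetical
val fromFile = sc.textFile("hdfs:///data/bids.log")   // dataset from a file
val inMemory = sc.parallelize(Seq("a", "b", "c"))     // created programmatically

// The two types of operations:
val upper = inMemory.map(_.toUpperCase)   // transformation -> new RDD
val n     = upper.count()                 // action -> value returned to the driver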
Apache Spark
Transformations
- A transformation is an operation such as map(), filter() or union() on an RDD that yields
another RDD.
- Transformations are lazily evaluated, in that they don’t run until an action is executed.
- The Spark driver remembers the transformations applied to an RDD, so if a partition is lost,
that partition can easily be reconstructed on some other machine in the cluster.
(Resilient)
- Resiliency is achieved via a lineage graph.
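A sketch of the lazy behaviour (assuming an existing RDD[String] named lines): nothing runs until the final action, and lost partitions can be recomputed from the lineage.

// Assuming an existing RDD[String] called `lines`
val errors  = lines.filter(_.contains("ERROR"))  // RDD -> RDD, nothing runs yet
val lengths = errors.map(_.length)               // RDD -> RDD, still lazy

// Only the action triggers execution; a lost partition can be recomputed
// from the lineage graph (lines -> filter -> map)
val firstTen = lengths.take(10)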
Apache Spark
Actions
- Actions compute a result based on an RDD and either return it to the driver program
or save it to an external storage system.
- Typical RDD actions are count(), first(), take(n)
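For example (assuming an existing RDD[String] named words; the output path is hypothetical):

// Assuming an existing RDD[String] called `words`
val total  = words.count()   // number of elements
val first  = words.first()   // first element
val sample = words.take(5)   // first five elements, returned to the driver

// Or save the whole RDD to external storage
words.saveAsTextFile("hdfs:///out/words")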
Apache Spark
Transformations vs Actions
RDD → RDD (transformation); RDD → value (action)
Transformations: define new RDDs based on the current one, e.g. map, filter, union, etc.
Actions: return values to the driver, e.g. count, sum, collect, etc.
Apache Spark
Benefits
- Scalable: can be deployed on very large clusters
- Fast: in-memory processing for speed
- Resilient: recovers in case of data loss
- Written in Scala… with a simple high-level API for Scala, Java and Python
Apache Spark
Lambda Architecture – one-size-fits-all technology!
[Diagram: new data feeds both the Batch Layer and the Speed Layer (both implemented with Spark); results are exposed through the Serving Layer, which data consumers query]
- Spark Streaming receives streaming input, and divides the data into batches which are then
processed by the Spark Core
[Diagram: an input data stream is divided by Spark Streaming into batches of input data, which Spark Core turns into batches of processed data]
Apache Spark
Speed Layer
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val numThreads = 1
val group = "test"
val topics = "test"
val topicMap = topics.split(",").map((_, numThreads)).toMap  // topic -> #threads

val conf = new SparkConf().setMaster("local[*]").setAppName("KafkaWordCount")
val sc = new SparkContext(conf)
val ssc = new StreamingContext(sc, Seconds(2))  // 2-second micro-batches

// Consume from Kafka (via ZooKeeper) and keep only the message payload
val lines = KafkaUtils.createStream(ssc, "localhost:2181", group, topicMap).map(_._2)
val words = lines.flatMap(_.split(","))
val wordCounts = words.map { x => (x, 1L) }.reduceByKey(_ + _)
....
ssc.start()
ssc.awaitTermination()
Apache Spark – Streaming word count example
Streaming with Spark and Kafka
6. Real-time “bidding”
Real Time “Bidding”
High level architecture
[Architecture diagram: Input → Akka Persistence (journaling / store to Cassandra) → publish to Kafka → Spark (batch training, real-time scoring and prediction) → dispatch → Output / Action]
Apache Kafka
Distributed messaging system
- Fast: high throughput for both publishing and subscribing
- Scalable: very easy to scale out
- Durable: supports persistence of messages
- Consumers are responsible for tracking their own position (offset) in each log
[Diagram: Producer 1 and Producer 2 write to Partition 1, Partition 2 and Partition 3; Consumer A, Consumer B and Consumer C read from those partitions]
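A minimal sketch of publishing to Kafka from Scala using the standard Kafka Java client (the topic name and payload are illustrative; "test" simply matches the streaming example earlier).

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object BidProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // Topic name and payload are illustrative
    producer.send(new ProducerRecord[String, String]("test", "bid-1", "bid,100,EUR"))
    producer.close()
  }
}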
Apache Cassandra
Massively scalable NoSQL datastore
- Elastic Scalability
- No single point of failure
- Fast linear scale performance
1 Clients write to any Cassandra node
2 Coordinator node replicates to nodes and zones
3 Nodes return an ack to the client
4 Data written to internal commit log disk
5 If a node goes offline, hinted handoff completes the write
when the node comes back up
- Regions = Datacenters
- Zones = Racks
[Diagram: a Cassandra cluster of peer nodes]
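A minimal sketch of connecting to Cassandra from Scala with the DataStax Java driver (the keyspace and table names are hypothetical, chosen only to fit the bidding example): any contact node can coordinate requests across the cluster.

import com.datastax.driver.core.Cluster

object CassandraSetup {
  def main(args: Array[String]): Unit = {
    // Any node can be contacted; it coordinates the request across the cluster
    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    val session = cluster.connect()

    // Hypothetical keyspace and table, for illustration only
    session.execute(
      "CREATE KEYSPACE IF NOT EXISTS bidding " +
        "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}")
    session.execute(
      "CREATE TABLE IF NOT EXISTS bidding.events (id uuid PRIMARY KEY, payload text)")

    cluster.close()
  }
}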
7. Live demo
MILAN - 08TH OF MAY - 2015
PARTNERS
THANK YOU!
Stefano Rocco - @whispurr_it
Roberto Bentivoglio - @robbenti
@DATABIZit
PARTNERS
FAQ
We’re hiring!