1© Cloudera, Inc. All rights reserved.
IoT with Spark Streaming
Anand Iyer, Senior Product Manager
2© Cloudera, Inc. All rights reserved.
Spark Streaming
• Incoming data stream is represented as DStreams (Discretized Streams)
• Stream is broken down into micro-batches
• Each micro-batch is an RDD – process using RDD operations
• Micro-batches usually 0.5 sec in size
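The discretization idea can be illustrated in plain Python, with no Spark required (the `micro_batches` function and batch size below are illustrative, not Spark API):

```python
from itertools import islice

def micro_batches(stream, batch_size):
    """Break a (possibly unbounded) event stream into micro-batches."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch  # in Spark Streaming, each micro-batch becomes one RDD

# Each micro-batch is then processed with ordinary (RDD-style) operations:
batches = list(micro_batches(range(10), 4))
totals = [sum(b) for b in batches]
```

In real Spark Streaming the batches are cut by time (e.g. every 0.5 seconds) rather than by count, but the processing model is the same: one RDD per micro-batch.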
3© Cloudera, Inc. All rights reserved.
Cloudera customer use case examples – Streaming
• Financial Services: online fraud detection
• Retail: online recommender systems, inventory management
• Health: incident prediction (sepsis)
• Ad tech: real-time analysis of ad performance
4© Cloudera, Inc. All rights reserved.
Concrete end-to-end IoT Use Case
Using Spark Streaming with Kafka, HBase & Solr
5© Cloudera, Inc. All rights reserved.
Proactive maintenance and accident prevention in Railways
• Sensor information continuously streaming in from railway carriages
• Goal: Early detection of damage to rail carriage wheels or to railway tracks
• Proactively fix issues before they become severe
• Prevent derailments, save money and lives
• Based on real-world use case, modified to fit the talk
6© Cloudera, Inc. All rights reserved.
Locomotive Wheel Axle Sensors
Each Sensor Reading Contains:
- Unique ID
- Locomotive ID
- Speed
- Temperature
- Pressure
- Acoustic signals
- GPS Co-ordinates
- Timestamp
- etc
7© Cloudera, Inc. All rights reserved.
Identify Damage to locomotive axle or wheels
Manifests as sustained increase
in sensor readings like temperature,
pressure, acoustic noise, etc.
8© Cloudera, Inc. All rights reserved.
Identify Damage on railway tracks
Manifests as a sudden spike
in sensor readings for
pressure or acoustic noise.
9© Cloudera, Inc. All rights reserved.
Real-Time Detection of Locomotive Wheel Damage
- Enrich incoming events with relevant metadata:
  - Locomotive information from the locomotive ID: type, weight, cargo, etc.
  - Sensor information from the sensor ID: precise location, type, etc.
  - GPS co-ordinates mapped to location characteristics, such as the gradient of the track
- HBase is recommended as the metadata store; use the hbase-spark module to fetch the data
- Apply application logic to determine if sensor readings indicate damage:
  - Simple rule-based logic
  - A complex predictive machine learning model
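A rule-based version of this enrich-then-check step can be sketched as follows (the lookup tables, field names, and thresholds are hypothetical; in the deck's architecture the metadata would come from HBase via the hbase-spark module rather than in-memory dicts):

```python
# Hypothetical metadata tables standing in for HBase lookups.
LOCOMOTIVES = {"loco-7": {"type": "freight", "cargo": "chemicals"}}
SENSORS = {"s-42": {"position": "right-wheel"}}

def enrich(reading):
    """Attach locomotive and sensor metadata to a raw sensor reading."""
    out = dict(reading)
    out["locomotive"] = LOCOMOTIVES.get(reading["locomotive_id"], {})
    out["sensor"] = SENSORS.get(reading["sensor_id"], {})
    return out

def looks_damaged(reading, temp_limit=90.0, pressure_limit=8.0):
    """Simple rule: high temperature or pressure suggests wheel damage."""
    return (reading["temperature"] > temp_limit
            or reading["pressure"] > pressure_limit)

r = enrich({"sensor_id": "s-42", "locomotive_id": "loco-7",
            "temperature": 95.2, "pressure": 6.1})
alert = looks_damaged(r)
```

In a streaming job these two functions would run inside the per-batch transformation, with the rule possibly replaced by a trained model.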
10© Cloudera, Inc. All rights reserved.
Real-Time Detection of Locomotive Wheel Damage
[Architecture diagram: Kafka → Spark Streaming → Kafka (alerts) and HDFS (raw data)]
Kafka output connector: https://github.com/harishreedharan/spark-streaming-kafka-output
11© Cloudera, Inc. All rights reserved.
Real-Time Detection of Locomotive Wheel Damage
Visualize Time-Series Sensor Data
- When an alert is thrown, a technician will need to diagnose the event
- Requires visualizing sensor data as a time series:
  - Over arbitrary windows of time
  - Compared with values from prior trips
- Software for visualization: http://grafana.org/
- The technician can take appropriate action based on the analysis:
  - Send the rail carriage for maintenance
  - Stop the train immediately to prevent an accident
12© Cloudera, Inc. All rights reserved.
Data Store for Time-Series Data
Ideal solution: Kudu
- Time-series data entails sequential scans for writes and reads, interspersed with random seeks
Until Kudu is GA:
- Use HBase and model tables for time-series data
- OpenTSDB:
  - Built on top of HBase
  - Uses an HBase table schema optimized for time-series data
  - Simple HTTP API
13© Cloudera, Inc. All rights reserved.
Real-Time Detection of Locomotive Wheel Damage
[Architecture diagram: Kafka → Spark Streaming → Kafka and HDFS]
14© Cloudera, Inc. All rights reserved.
Detecting damage to rail tracks
• Damage manifests as a sharp spike in sensor readings (pressure, acoustic noise)
• Multiple sensors will show the same spike at the same location (GPS co-ordinates)
• Multiple sensors from multiple trains will give similar readings at the same location
How to detect?
• Index each sensor reading in Solr, such that readings can be queried by GPS co-ordinates
• When a “spike” is observed and the corresponding alert event is fired, trigger a search
15© Cloudera, Inc. All rights reserved.
Detecting damage to rail tracks
• Index each sensor reading with the Morphlines library
  • Embed the call to Morphlines in your Spark Streaming application
  • Values can be kept in the index for a specified period of time, such as a month; Solr can automatically purge old documents from the index
• When a “spike” is observed and the corresponding alert event is fired, trigger a search (manually or programmatically)
  • Search for sensor readings at the same GPS co-ordinates as the latest spike
  • Filter out irrelevant readings (e.g. readings on the left track, if the spike was observed on the right track)
  • Sort results by time, latest to oldest
  • If the majority of recent readings show a “spike”, that is indicative of track damage
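The final majority check above can be sketched like this (the function name, window size, and threshold are illustrative, not part of the deck's architecture):

```python
def track_damage_suspected(readings, threshold, recent=5, majority=0.5):
    """readings are sorted latest-to-oldest; track damage is suspected when
    most of the recent readings at this location exceed the spike threshold."""
    window = readings[:recent]
    spikes = sum(1 for r in window if r > threshold)
    return spikes / len(window) > majority

# Four of the five most recent readings at this location spike above 8.0.
suspect = track_damage_suspected([9.1, 8.7, 9.4, 3.0, 8.9, 2.1], threshold=8.0)
```

In practice the `readings` list would come back from the Solr query, already filtered by GPS co-ordinates and track side, and sorted by time.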
16© Cloudera, Inc. All rights reserved.
Final Architecture
[Final architecture diagram: Kafka → Spark Streaming → Kafka and HDFS]
17© Cloudera, Inc. All rights reserved.
Noteworthy Streaming Constructs
18© Cloudera, Inc. All rights reserved.
Sliding Window Operations
Define operations on data within a sliding window.
Window parameters:
- window length
- sliding interval
Example usages:
- Compute counts of items in the latest window of time, such as occurrences of exceptions in a log or trending hashtags in a tweet stream
- Join two streams by matching keys within the same window
Note: Provide adequate memory to hold a window’s worth of data
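The windowing semantics can be illustrated without Spark; here both parameters are measured in micro-batches, and the function is a sketch rather than the DStream API:

```python
def sliding_windows(batches, window_length, slide_interval):
    """Yield the union of the last `window_length` micro-batches,
    advancing by `slide_interval` batches each step."""
    for end in range(window_length, len(batches) + 1, slide_interval):
        yield [x for b in batches[end - window_length:end] for x in b]

# Count occurrences of "a" in each window of 2 batches, sliding by 1 batch.
batches = [["a"], ["b", "a"], ["c"], ["a"]]
counts = [w.count("a")
          for w in sliding_windows(batches, window_length=2, slide_interval=1)]
```

Note how each element is counted in every window it falls into, which is why Spark's windowed operators exist as first-class constructs (and why a window's worth of data must fit in memory).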
19© Cloudera, Inc. All rights reserved.
Maintain and update arbitrary state
updateStateByKey(...)
• Define initial state
• Provide state update function
• Continuously update with new information
Examples:
• Running count of words seen in text stream
• Per user session state from activity stream
Note: Requires periodic check-pointing to fault-tolerant storage.
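The contract of updateStateByKey can be mimicked in plain Python: a per-key update function receives the new values for that key plus its prior state. The driver loop below is a simulation of what Spark does per micro-batch, not Spark code:

```python
def update_state(state, new_values):
    """Update function: combine prior state (None on first sight) with
    the new values that arrived for this key in the current micro-batch."""
    return (state or 0) + sum(new_values)

def run_batches(batches):
    state = {}
    for batch in batches:               # each batch: list of (word, 1) pairs
        per_key = {}
        for k, v in batch:
            per_key.setdefault(k, []).append(v)
        for k, vs in per_key.items():   # Spark calls the update fn per key
            state[k] = update_state(state.get(k), vs)
    return state

# Running word count across two micro-batches.
state = run_batches([[("spark", 1), ("kafka", 1)], [("spark", 1)]])
```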
20© Cloudera, Inc. All rights reserved.
Lessons from Production
21© Cloudera, Inc. All rights reserved.
Use Kafka Direct Connector whenever possible
• Better efficiency and performance than receiver-based connectors
• Automatic back-pressure: steady performance
[Diagram: receiver-based connector (Kafka → receivers on executors → Spark driver and executors) vs. direct connector (executors read from Kafka directly, no receivers)]
22© Cloudera, Inc. All rights reserved.
The challenge with Checkpoints
• Spark checkpoints are Java-serialized
• Upgradeability can be an issue – upgrading the version of Spark or of your application can make checkpointed data unreadable
But long-running applications need updates and upgrades!
23© Cloudera, Inc. All rights reserved.
Upgrades with Checkpoints
• Most often, all you need to pick up is some previous state – maybe an RDD, some “state” (updateStateByKey), or the last processed Kafka offsets
• The solution: disable Spark checkpoints
• Use foreachRDD to persist state yourself, to HDFS, in a format your application can understand
  • E.g. Avro, Protobuf, Parquet…
• For updateStateByKey, generate the new state – then persist it
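A hedged sketch of "persist state yourself", using local JSON files rather than Avro/Parquet on HDFS for brevity (the path and field names are illustrative; the point is that the format is one your application controls across upgrades):

```python
import json
import os
import tempfile

def persist_state(path, state, offsets):
    """Write application state plus the last processed Kafka offsets,
    in a self-describing format, via an atomic rename."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"state": state, "offsets": offsets}, f)
    os.replace(tmp, path)  # atomic on POSIX: readers never see a partial file

def restore_state(path):
    """Read back state and offsets on restart; empty on first run."""
    if not os.path.exists(path):
        return {}, {}
    with open(path) as f:
        doc = json.load(f)
    return doc["state"], doc["offsets"]

path = os.path.join(tempfile.mkdtemp(), "checkpoint.json")
persist_state(path, {"spark": 2}, {"topic-0": 1841})
state, offsets = restore_state(path)
```

In a Spark Streaming job the `persist_state` call would live inside `foreachRDD`, after the new state for the batch has been generated.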
24© Cloudera, Inc. All rights reserved.
updateStateByKey(…) upcoming improvements
• Time-out: Automatically delete data after a preset number of micro-batches
• Efficient Updates: Only update a subset of the keys
• Callback to persist state during graceful shutdown
25© Cloudera, Inc. All rights reserved.
Exactly Once Semantics
What is it?
Given a stream of incoming data, each operator is applied exactly once to every item.
Why is it important?
It prevents erroneous processing of the data stream, e.g. double counting of aggregations or the throwing of redundant alerts.
Spark Streaming provides exactly-once semantics for data transformations.
However, output operations provide at-least-once semantics!
26© Cloudera, Inc. All rights reserved.
Exactly Once Semantics with Spark Streaming & Kafka
• Associate a “key” with each value written to the external store that can be used for de-duping
• This key needs to be unique for a given micro-batch
• The Kafka Direct Connector provides the following for each record, which will be the same for a given micro-batch:
Kafka partition + start offset + end offset
• Check out org.apache.spark.streaming.kafka.OffsetRanges and
org.apache.spark.streaming.kafka.HasOffsetRanges
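The de-duping idea can be sketched as an idempotent write keyed by (topic-partition, start offset, end offset); the `store` dict below stands in for an external database, and the key values are made up:

```python
store = {}

def write_once(store, key, rows):
    """Idempotent output: re-running a micro-batch with the same offset-range
    key overwrites the previous attempt instead of duplicating it."""
    store[key] = rows

# Key derived from what HasOffsetRanges exposes per partition of a batch.
key = ("sensor-events", 0, 1200, 1250)  # topic, partition, start, end offsets
write_once(store, key, ["alert-1"])
write_once(store, key, ["alert-1"])     # replay after a failure: no duplicates
total = sum(len(v) for v in store.values())
```

This is what turns Spark Streaming's at-least-once output into effectively exactly-once: replays land on the same key, so the external store sees each micro-batch's output at most once.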
27© Cloudera, Inc. All rights reserved.
Thank You
IoT Austin CUG talk
Editor's Notes
  • #2: Good afternoon, everyone. Today we are going to talk about one of the most popular extensions of Spark: Spark Streaming. And we will talk about using Spark Streaming to implement a use case in a fast-growing and, simply put, really cool and popular domain: the Internet of Things. We will walk you through a concrete Internet of Things use case, and when we do, we will focus on end-to-end architectures. After covering the use case, we will do a deeper dive into some interesting Spark Streaming features such as sliding windows, streaming state, and ML algorithms, and share some pro tips and best practices with you.
  • #3: So first, a very quick primer on Spark Streaming: in Spark Streaming, each incoming data stream is represented by an abstraction called a DStream, which stands for Discretized Stream. A DStream is a continuous stream of data, broken up into chunks called micro-batches. Data in each micro-batch becomes an RDD and is processed by RDD operations. A batch Spark job is defined as a sequence of transformations and actions on RDDs; similarly, a streaming job is authored as a sequence of transformations and actions on DStreams. DStream micro-batch sizes are often 1 second or even 0.5 seconds.
  • #4: Spark Streaming has seen tremendous adoption over the past year and we are seeing customers deploy it for a wide variety of use cases….and here I have a random collection of examples of use cases in diverse industries.
  • #6: But today we will talk about an Internet of Things use case: proactive maintenance and accident prevention in railways. The Internet of Things is all about sensors continuously sending data back to your data center. In our case, we are talking about sensors fitted to railway locomotives and railway carriages. The goal is to process this sensor data to identify two critical issues: damage to the wheels or axles of trains, and damage to railway tracks. At one end, this will help us prevent derailments. Trains are among the safest modes of transportation, much safer than cars; however, many of these accidents are preventable. Also, the proportion of freight trains is a lot higher than trains transporting humans. When a freight train derailment happens, there may not always be a loss of life, and hence it is not covered in the news, but there is a heavy financial loss, all of which can be avoided. That is one end of the spectrum. The other piece is simply identifying defects early, so that they can be fixed proactively, thus extending the lifespan of locomotives, rail carriages, and tracks; fixing issues early, nipping them in the bud so to speak, will invariably save costs. This example is based on a real-world use case, but it has been heavily modified and simplified to fit a 15-minute slide deck.
  • #7: Let's do a deeper dive into the sensors we are talking about. In this diagram of a railway carriage, the red spots on the wheels of the train denote where the sensors are located. These sensors send back information on a regular basis, let's say a couple of measurements per second; the frequency of readings is adjustable. Each reading will have: a unique ID that identifies the sensor; an ID that identifies the locomotive; a speed measurement (while diagnosing an issue it is important to know how fast the train was going); a temperature measurement (if something goes wrong, invariably something is bound to get too hot); pressure (if the wheels cannot spin comfortably because something is hindering them, the pressure readings will go up); acoustic signals, basically noise (noise is a good indicator of problems; for example, the sound of clanging metal is a lot different from the smooth turning of wheels or the humming of engines); GPS co-ordinates (we need to know where the train is for many reasons, which we will talk about shortly); and a timestamp (you need to know when the reading was taken).
  • #8: OK, so given these sensor readings, how do we identify damage to the wheels? It will manifest as a sustained increase in sensor readings like temperature, pressure, or acoustic noise: a pronounced, lasting increase, possibly getting progressively worse.
  • #9: How do we identify damage to the rail track? Damage to the rail track is going to be at a specific location, often on just one side of the track. When a wheel goes over the problem area, there is bound to be a sudden, sharp, pronounced spike in sensor readings, most likely acoustic noise and pressure. The key thing is that it will be a pronounced spike at a specific location, after which the sensor readings will come back down to normal.
  • #10: Cool. Now let's talk about the implementation. How data gets from the locomotive sensors to our data center is not in the scope of this talk; if you are really curious, you probably need to attend a conference by Cisco or Intel. Once it gets into your data center, write it to a streaming data channel; we recommend Kafka. From Kafka, you can read the events in your Spark Streaming job and process them; we recommend using the receiverless direct connector. In your Spark Streaming job, you will first need to enrich the data, that is, attach to each event the relevant metadata required to identify damage. For example, use the locomotive ID to get information about the locomotive, such as its type (freight or passenger), weight, and type of cargo; if it is carrying dangerous chemicals you probably want to stop the train even for slight damage, versus an empty cargo train. Similarly, join with information about the sensor, such as its location on the train: is it on the right wheel, the left wheel, and so on. From GPS information, figure out where the train is; if it is going up a steep incline, temperature readings may go up, and that is OK. We recommend storing this type of metadata in HBase, which is ideal for random key reads, and HBase comes with the hbase-spark module that makes it easy to call HBase from Spark jobs. Once you have enriched and transformed the data, you can determine whether the sensor readings signify damage; that can be rule-based, or it can be a trained machine learning model (again, outside the domain of this talk, since we are not a bunch of mechanical engineers).
  • #11: When potential damage is identified, write an event out to Kafka, say to a topic called “alerts”, and have an application listening to this topic that will send out a pager alert, email alert, or other form of alert to technicians. Write the raw data to HDFS; it will come in handy: a team of data scientists can do offline analysis, and more importantly, the raw data is useful when there is a bug in application logic or a faulty sensor and your end results don't match expectations. So, for auditing purposes: auditing your application and incoming data.
  • #12: So we have identified a potential problem. The next step is for a technician to step in and diagnose. Diagnosing the issue will require visualizing the sensor readings as time-series data: look at how they are trending, look at readings for different windows of time, compare with readings from a different time window. All of this entails visualizing time-series data. For visualization of time series, Grafana is a popular and useful application, but there are many other options available, and it is also fairly easy to build one with JavaScript. The technician can manually inspect the data and decide what to do: send the railway carriage for maintenance or, if things seem bad, stop it and get it physically checked out.
  • #13: We are talking about a lot of sensors producing one or more readings per second; that is a lot of data, and it needs to be stored in a way that lends itself to time-series visualization. Time-series data entails sequential reads, since you look at a continuous window of time; similarly, writing time-series data is also sequential, since you keep appending newer readings. These sequential scans are interspersed with random reads, when, for example, you change your window start and stop time or move back and forth between different points in time. The ideal storage for this is Kudu: Kudu delivers the best performance for mixed scan and random seek workloads. Until Kudu is GA, use HBase. HBase performance may not match Kudu performance, but it will certainly work for this use case.
  • #15: Could we not use sensors on the tracks instead? Sure. But sensors on locomotives are easier to install and fewer in number. Rail tracks travel through remote areas; those are the hardest places to put sensors, and those are probably the ones that most need monitoring.
  • #16: Could we not use sensors on the tracks instead? Sure. But sensors on locomotives are easier to install and fewer in number. Rail tracks travel through remote areas; those are the hardest places to put sensors, and those are probably the ones that most need monitoring. Call out Hue!!