Towards Flink 2.0: Rethinking the Stack and APIs to Unify Batch & Streaming
Aljoscha Krettek – Software Engineer, Flink PMC
Stephan Ewen – CTO, Flink PMC/Chair
© 2019 Ververica
This is joint work with many members of the Apache Flink community:
Timo, Dawid, Shaoxuan, Kurt, Guowei, Becket, Jincheng, Fabian, Till, Andrey, Gary, Piotr, Stefan, and many others …
Some of this presents work that is in progress in the Flink community. Other things are planned and/or have design documents. Some were discussed at one point or another on the mailing lists or in person.
This represents our understanding of the current state; it is not a fixed roadmap: Flink is an open-source Apache project.
Batch and Streaming
Everything Is a Stream
What changes faster? Data or Query?
• “batch”: data changes slowly compared to fast-changing queries → ad-hoc queries, data exploration, ML training and (hyper)parameter tuning
• “streaming”: data changes fast, application logic is long-lived → continuous applications, data pipelines, standing queries, anomaly detection, ML evaluation, …
Latency vs. Completeness (for geeks)
[Diagram: Star Wars films plotted by processing time (release years 1977, 1980, 1983, 1999, 2002, 2005, 2015, 2016, 2017) against event time (episode order IV, V, VI, I, II, III, VII, Rogue One as “III.5”, VIII): the films arrive thoroughly out of event-time order.]
Latency vs. Completeness (more formally)
*from the excellent Streaming 101 by Tyler Akidau: https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-
Latency vs. Completeness
• Bounded/Batch: data is as complete as it gets within that batch job; no fine latency control
• Unbounded/Streaming: trade-off between latency and completeness; processing-time timers for latency control
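The trade-off can be sketched with a toy windowed count (plain Python, not Flink code; the function and its parameters are invented for illustration): a watermark that lags the highest seen timestamp by `watermark_delay` decides when a window fires. A delay of zero gives low latency but drops late events; a larger delay waits longer and catches more of them.

```python
def windowed_counts(events, window_size, watermark_delay):
    """events: (event_time, value) pairs in arrival order.
    Counts events per tumbling window; a window fires once the
    watermark (= max timestamp seen - watermark_delay) passes its end."""
    open_windows, emitted = {}, {}
    max_ts = float("-inf")
    for ts, _value in events:
        max_ts = max(max_ts, ts)
        watermark = max_ts - watermark_delay
        start = ts - ts % window_size
        if start in emitted:
            continue  # too late: this event's window already fired
        open_windows[start] = open_windows.get(start, 0) + 1
        for s in [s for s in open_windows if s + window_size <= watermark]:
            emitted[s] = open_windows.pop(s)
    # bounded input: at the end the watermark jumps to +infinity,
    # so every remaining window fires with complete results
    emitted.update(open_windows)
    return emitted

events = [(1, "a"), (2, "b"), (12, "c"), (3, "late"), (25, "d")]
eager = windowed_counts(events, 10, watermark_delay=0)    # low latency
patient = windowed_counts(events, 10, watermark_delay=5)  # more complete
```

With delay 0 the [0, 10) window fires as soon as timestamp 12 arrives, so the late event at time 3 is dropped (count 2); with delay 5 the window waits and counts all three events.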
What does this mean for processing?
[Diagram: a stream of events, from files, a message bus, or similar, ordered from older to more recent.]
Stream-style processing
[Diagram: the watermark tracks closely behind the most recent events; only the small region ahead of it is unprocessed.]
Batch-style processing
[Diagram: the watermark still sits at the older end of the data; the bulk of the input, up to the most recent events, remains unprocessed until the job finishes.]
Batch and Streaming Processing Styles
[Diagram: two dataflows of sources (S), mappers (M), reducers (R), and sinks (S). More batch-y: only one stage is running at a time while the others are not running, so things can be done one-by-one. More stream-y: every stage is running, everything is always-on.]
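The resource implication of the two styles fits in one line (a toy calculation, not Flink's scheduler; the function and its names are invented for illustration):

```python
def peak_slots(stage_parallelisms, always_on):
    """Toy resource model for the two processing styles (not Flink's
    scheduler): stage-wise batch execution runs one stage at a time and
    can reuse slots, while always-on streaming needs every stage
    running at once."""
    if always_on:
        return sum(stage_parallelisms)  # everything is always-on
    return max(stage_parallelisms)      # stages run one-by-one

# three stages (e.g. sources, mappers, reducers), parallelism 3 each:
batch_peak = peak_slots([3, 3, 3], always_on=False)
streaming_peak = peak_slots([3, 3, 3], always_on=True)
```

This difference is one reason batch and streaming ended up with different scheduling logic.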
So is this the reason why we have different APIs and different batch and stream processing systems? 💡
• Different requirements
• Optimization potential for batch and streaming
• Also: historical developments and slow-changing organizations
Current Stack
• Runtime: Tasks / JobGraph / Network Interconnect / Fault Tolerance
• DataSet: “Operators“ and Drivers / Operator Graph / Visitors
• DataStream: StreamOperators / StreamTransformation Graph* / Monolithic Translators
• Table API / SQL: Logical Nodes* / Different translation paths
What could be improved?
From a system design / code quality / architecture / development perspective:
• Each API has its own internal graph representation → code duplication
• Multiple translation components between the different graphs → code duplication
– DataStream API has an intermediate graph structure: StreamTransformation → StreamGraph → JobGraph
• Separate (incompatible) operator implementations
– DataStream API has StreamOperator, DataSet API has Drivers → two map operators, two flatMap operators
– These are run by different lower-level Tasks
– DataSet operators are optimized for different requirements than DataStream operators
• Table API is translated to two distinct lower-level APIs → two different translation stacks
– ”project operator” for DataStream and for DataSet
• Connectors for each API are separate → a whole bunch of connectors all over the place
What does this mean for users?
• You have to decide between DataSet and DataStream when writing a job
– Two (slightly) different APIs, with different capabilities
– Different sets of supported connectors: no Kafka DataSet connector, no HBase DataStream connector
– Different performance characteristics
– Different fault-tolerance behavior
– Different scheduling logic
• With the Table API, you only have to learn one API
– Still, the set of supported connectors depends on the underlying execution API
– The feature set depends on whether there is an implementation for your underlying API
• You cannot combine more batch-y with more stream-y sources/sinks
• A “soft problem”: with two stacks of everything, less developer power goes into each individual stack → fewer features, worse performance, and bugs that get fixed more slowly
Future Stack
Batch is a subset of streaming! Can’t we just cross out ❌ the DataSet stack (“Operators“ and Drivers / Operator Graph / Visitors) and keep ✅ the DataStream stack (StreamOperator / StreamTransformation Graph* / Monolithic Translators)? Done! 🥳
Unifying the Batch and Streaming APIs
• DataStream API functionality is already a superset of DataSet API functionality
• We need to introduce BoundedStream to harness the optimization potential; the semantics are clear from earlier:
– No processing-time timers
– Watermark “jumps” from –Infinity to +Infinity at end of processing
• DataStream translation and runtime (operators) need to be enhanced to use the added optimization potential
• Streaming execution is the generic case that always works; “batch” enables additional “optimization rules”: bounded operators, different scheduling → we get feature parity automatically ✅
• Sources need to be unified as well → see later
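One way to see why streaming is the generic case: the same stateful operator works under both watermark strategies, and only the watermark generation differs (a plain-Python sketch, not the Flink API; all names here are invented for illustration):

```python
import math

def counting_operator(stream):
    """Toy stateful operator: counts records per key and emits a
    snapshot of its state whenever a watermark arrives."""
    state, outputs = {}, []
    for kind, payload in stream:
        if kind == "record":
            state[payload] = state.get(payload, 0) + 1
        else:  # ("watermark", ts)
            outputs.append(dict(state))
    return outputs

def streaming_input(records):
    # unbounded style: watermarks advance incrementally with event time
    out = []
    for ts, key in records:
        out.append(("record", key))
        out.append(("watermark", ts))
    return out

def bounded_input(records):
    # bounded style: the watermark jumps from -inf to +inf at the end
    out = [("record", key) for _ts, key in records]
    out.append(("watermark", math.inf))
    return out

records = [(1, "a"), (2, "b"), (3, "a")]
incremental = counting_operator(streaming_input(records))
batch = counting_operator(bounded_input(records))
```

The streaming run emits incremental snapshots ending in {'a': 2, 'b': 1}; the bounded run emits exactly that final result once, which is what leaves room for batch-only optimizations.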
A typical “unified” use case: Bootstrapping state
[Diagram: a “batch” source and a “stream” source both feed a stateful operation; the graph has a batch-y part and a stream-y part.]
• We have a streaming use case
• We want to bootstrap the state of some operations from a historical source
• First execute the bounded parts of the graph, then start the rest
* See for example Bootstrapping State in Flink by Gregory Fee https://sf-2018.flink-
Under-the-hood Changes
Graph Representation / DAG:
• StreamTransformation/StreamGraph need to be beefed up to carry the additional information about boundedness
• Translation, scheduling, deployment, memory management and the network stack need to take this into account
Operator / Task:
• StreamOperator needs to support batch-style execution → see next slide
• The network stack must eventually support blocking inputs
Selective Push Model Operator (FLINK-11875)
Batch: pull-based operator (or Driver); streaming: push-based StreamOperator.
• StreamOperator needs an additional API to tell the runtime which input to consume
• The network stack/graph needs to be enhanced to deal with blocking inputs 😱
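The idea behind the selective push model can be sketched like this (a plain-Python model in the spirit of FLINK-11875, not the actual interface; all names are invented): a two-input operator announces which input it wants next, which is exactly what a hash join needs to block its probe side until the build side is fully consumed.

```python
class HashJoinOperator:
    """Toy two-input operator in the spirit of the selective push model
    (FLINK-11875); not the real Flink API. It tells the runtime which
    input to feed it next: the build side first, blocking the probe."""
    FIRST, SECOND = 1, 2

    def __init__(self):
        self.table = {}        # build-side hash table: key -> values
        self.results = []
        self.build_done = False

    def next_selection(self):
        return self.SECOND if self.build_done else self.FIRST

    def process_first(self, key, value):    # build side
        self.table.setdefault(key, []).append(value)

    def end_first(self):
        self.build_done = True              # unblock the probe side

    def process_second(self, key, value):   # probe side
        for build_value in self.table.get(key, []):
            self.results.append((key, build_value, value))

def run(op, build_input, probe_input):
    """Toy runtime loop that honors the operator's input selection."""
    build, probe = iter(build_input), iter(probe_input)
    build_done = False
    while True:
        if op.next_selection() == op.FIRST and not build_done:
            rec = next(build, None)
            if rec is None:
                build_done = True
                op.end_first()
            else:
                op.process_first(*rec)
        else:
            rec = next(probe, None)
            if rec is None:
                return op.results
            op.process_second(*rec)

joined = run(HashJoinOperator(),
             build_input=[("a", 1), ("b", 2)],
             probe_input=[("a", 10), ("c", 30), ("b", 20)])
```

This is why the pull-style flexibility of batch operators can be recovered on top of a push-based runtime once the operator controls input selection.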
Current Source Interfaces
InputFormat (batch):
  createInputSplits(): splits
  openSplit(split)
  assignInputSplit()
  nextRecord(): T
  closeCurrentSplit()
SourceFunction (streaming):
  run(OutputContext)
  close()
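The difference between the two styles can be sketched in a few lines (toy Python models, not the real Flink interfaces; the class and method names are simplified inventions):

```python
class ListInputFormat:
    """Pull style (batch): the framework creates splits and drives the
    read loop itself, so it can schedule splits in any order."""
    def __init__(self, data, num_splits):
        self.data, self.num_splits = data, num_splits

    def create_input_splits(self):
        return [self.data[i::self.num_splits]
                for i in range(self.num_splits)]

    def read_split(self, split):
        yield from split  # framework decides when to pull each record

class ListSourceFunction:
    """Push style (streaming): the source owns its run loop and pushes
    records out; the framework can't see inside the loop."""
    def __init__(self, data):
        self.data = data

    def run(self, collect):
        for record in self.data:
            collect(record)

data = [1, 2, 3, 4, 5, 6]
fmt = ListInputFormat(data, num_splits=2)
pulled = sorted(r for split in fmt.create_input_splits()
                for r in fmt.read_split(split))
pushed = []
ListSourceFunction(data).run(pushed.append)
```

Both produce the same records, but only the pull style gives the framework a handle (the splits) to be clever with.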
Batch InputFormat Processing
[Diagram: a JobManager and several TaskManagers; a TaskManager (1) requests a split, the JobManager (2) sends a split, the TaskManager (3) processes the split.]
• Splits are assigned to TaskManagers by the JobManager, which runs a copy of the InputFormat → Flink knows about splits and can be clever about scheduling, be reactive
• Splits can be processed in arbitrary order
• Split processing pulls records from the InputFormat
• The InputFormat knows nothing about watermarks, timestamps, checkpointing → bad for streaming
Stream SourceFunction Processing
[Diagram: TaskManagers are simply told (1) “do your thing”.]
• Sources have a run loop that they manage completely on their own
• Sources have flexibility and can work efficiently with the source system: batch accesses, dealing with multiple topics from one consumer, threading model, etc.
• Flink does not know what’s going on inside and can’t be clever about it
• Sources have to implement their own per-partition watermarking, idleness tracking, and so on
A New (unified) Source Interface (FLIP-27)
• This must support both batch and streaming use cases, allow Flink to be clever, be able to deal with event time, watermarks, source idiosyncrasies, and enable snapshotting
• This should enable new features: generic idleness detection, event-time alignment
Source:
  createSplitEnumerator()
  createSplitReader()
SplitEnumerator:
  discoverNewSplits()
  nextSplit()
  snapshotState()
  isDone()
SplitReader:
  addSplit()
  hasAvailable(): Future
  snapshotState()
  emitNext(Context): Status
* FLINK-10886: Event-time alignment for sources; Jamie Grier (Lyft) contributed the first parts of this
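The split-based design can be sketched as follows (a loose Python model of the FLIP-27 idea, not the proposed Java interfaces; the round-robin assignment loop and all names are simplified inventions):

```python
class SplitEnumerator:
    """JobManager side: discovers and hands out splits."""
    def __init__(self, splits):
        self._pending = list(splits)

    def is_done(self):
        return not self._pending

    def next_split(self):
        return self._pending.pop(0)

class SplitReader:
    """TaskManager side: reads records from its assigned splits."""
    def __init__(self):
        self._splits, self.emitted = [], []

    def add_split(self, split):
        self._splits.append(list(split))

    def emit_next(self):
        """Emit one record; returns False when all splits are drained."""
        while self._splits:
            if self._splits[0]:
                self.emitted.append(self._splits[0].pop(0))
                return True
            self._splits.pop(0)  # drop exhausted split
        return False

# assign splits round-robin to two readers, then drain the readers
enumerator = SplitEnumerator([[1, 2], [3], [4, 5]])
readers = [SplitReader(), SplitReader()]
i = 0
while not enumerator.is_done():
    readers[i % len(readers)].add_split(enumerator.next_split())
    i += 1
records = []
for reader in readers:
    while reader.emit_next():
        pass
    records.extend(reader.emitted)
```

Because the enumerator, not the reader, owns split discovery and assignment, the same machinery can serve batch scheduling tricks and streaming features such as per-split watermarking.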
A New (unified) Source Interface: Execution Style I
[Diagram: as with InputFormats, a TaskManager (1) requests a split, the JobManager (2) sends a split, the TaskManager (3) processes the split.]
• Splits are assigned to TaskManagers by the JobManager, which runs a copy of the SplitEnumerator → Flink knows about splits and can be clever about scheduling, be reactive
• Splits can be processed in arbitrary order
• Split processing is driven by the TaskManager working with the SplitReader
• The SplitReader emits watermarks, but Flink deals with idleness and per-split watermarking
A New (unified) Source Interface: Execution Style II
Table API / SQL
• API: easy, it’s already unified ✅
• Translation and runtime (operators) need to be enhanced to use the added optimization potential, but use the StreamOperator for both batch- and streaming-style execution
• Streaming execution is the generic case that always works; “batch” enables additional “optimization rules”: bounded operators, different scheduling → we get feature parity automatically ✅
• Sources will be unified via the unified source interface ✅
• This is already available in the Blink fork (by Alibaba); FLINK-11439 is the effort of getting that into Flink
The Future Stack
• Table API / SQL: Declarative API
• DataStream: “Physical“ Application API
• StreamTransformation DAG / StreamOperator
• Runtime
Closing
Summary
• Semantics of unified processing are quite clear already
• For some things, work is ongoing and there are design documents (FLIPs)
• Some things are further in the future
• This project requires changes in all components/layers of Flink:
–API, deployment, network stack, scheduling, fault tolerance
• You can follow all of this on the public mailing lists and FLIPs!
• The Flink tech stack is going to be quite nice! 😎
www.ververica.com · @VervericaData
aljoscha@ververica.com · stephan@ververica.com
Thank you!
Questions?
Editor's Notes
• #7: Streaming: keep up with real time, with some extra capacity for catch-up; receive data roughly in order as produced; latency is important. Batch: fast-forward through months/years of history; massively parallel unordered reads; throughput is most important.
• #12: Time in the data stream must be quasi-monotonous to produce time progress (watermarks). Always have close-to-latest incremental results. Resource requirements change over time. Recovery must catch up very fast.
• #13: The order of time in the data does not matter (parallel unordered reads). Bulk operations (2-phase hash/sort). Longer time for recovery (no low-latency SLA). Resource requirements change fast throughout the execution of a single job.
• #14: Understanding this difference will help later, when we discuss scheduling changes.
• #18: Possibly put these on separate slides, with fewer words. Or even some graphics.
• #19: Possibly put these on separate slides, with fewer words. Or even some graphics.
• #22: There are some quirks when you use DataStream for batch: a groupReduce would be a window with a GlobalWindow; mapPartition would have to finalize things in close(); joins would have to specify a global window. Of course, state requirements are bad for the naïve approach (large state, inefficient access patterns). Joins and grouping can be a lot faster with specific algorithms: hash join, merge join, etc.
• #23: Recall the earlier processing-styles slide: batch wants step by step, streaming is all at once. This has been mentioned a lot; Lyft gave a talk about this at the last Flink Forward.
• #24: For example, different window operators, different join implementations. The scheduling and networking changes would be a whole talk on their own. Memory management is another issue.
• #25: The pull-based operator is how most databases were/are implemented. Note how the pull model enables hash join, merge join, etc. Side inputs benefit from a pull-based model. Bring the dog-drinking-from-a-hose example, also for the join operator. This will allow porting batch operators/algorithms to StreamOperator.
• #27: Note that this nicely jibes with the pull-based model. Enables the things we need for batch.
• #28: Mention the dog with the hose. Sources just keep spitting out records as fast as they can.