OpenLineage For Stream Processing
Paweł Leszczyński (github pawel-big-lebowski)
Maciej Obuchowski (github mobuchowski)
Kafka Summit 2024
Agenda
● OpenLineage intro & demo
○ Why do we need lineage?
○ Why an open lineage standard?
○ Marquez and Flink demo
● Flink integration deep dive
○ Lineage for batch & streaming
○ Review of OpenLineage-Flink integration, FLIP-314
○ What does the future hold?
1. OpenLineage
Autumn Rhythm - Jackson Pollock
https://www.flickr.com/photos/thoth188/276162883
https://flic.kr/p/hjxW62
OpenLineage for Stream Processing | Kafka Summit London
OpenLineage Mission
To define an open standard for the collection of lineage metadata from pipelines as they are running.
Data model
● Run: a particular instance of a streaming job, identified by a run UUID; a Run State Update carries the transition and transition time
● Job: the data pipeline that processes data, identified by a name-based job ID
● Dataset: Kafka topics, Iceberg tables, object storage destinations, and so on, identified by a name-based dataset ID; datasets appear as the inputs/outputs of a run
● Facets (Run Facet, Job Facet, Dataset Facet) attach extra metadata to the run, the job, and the datasets
● Producers emit events; consumers read them
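The entities above map directly onto OpenLineage events. Below is a minimal sketch of a START event as a plain dict; the field names follow the OpenLineage spec, but the job name, topic, namespaces, and producer URI are made-up illustration values:

```python
import uuid
from datetime import datetime, timezone

def make_start_event(job_name: str, input_topic: str, output_table: str) -> dict:
    """Sketch of an OpenLineage START event for a streaming job."""
    return {
        "eventType": "START",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        # Run: one particular execution of the job, identified by a UUID
        "run": {"runId": str(uuid.uuid4())},
        # Job: the pipeline itself, identified by namespace + name
        "job": {"namespace": "flink", "name": job_name},
        # Datasets: inputs and outputs, also name-based
        "inputs": [{"namespace": "kafka://prod-cluster", "name": input_topic}],
        "outputs": [{"namespace": "postgres://db", "name": output_table}],
        "producer": "https://example.com/my-producer/v1",
    }

event = make_start_event("kafka-to-postgres", "orders", "public.orders")
```

Facets would be merged into the `run`, `job`, or dataset entries as additional keyed objects.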
2. Marquez and Flink Integration Demo
Demo
● Available under
https://github.com/OpenLineage/workshops/tree/main/flink-streaming
● Contains
○ Two Flink jobs
■ Kafka to Postgres
■ Postgres to Kafka
○ Airflow to run some Postgres queries
○ Marquez to present lineage graph
Flink applications - read & write to Kafka
Airflow DAG
3. OpenLineage for Streaming
What is different for streaming jobs?
Batch and streaming differ in many aspects, but for lineage there are a few questions that matter:
● When does an unbounded job end?
● When and how do datasets get updated?
● Does the transformation change during execution?
When does a job end?
● It might seem that streaming jobs never end naturally
● Schema changes, new job versions, new engine versions: these are points when it's worth starting another run
When does a dataset get updated?
● Dataset versioning is pretty important for bug analysis and data freshness
● Implicit: a "last update" timestamp, or Airflow's data interval (the OpenLineage default)
● Explicit: an Iceberg or Delta Lake dataset version
When does a dataset get updated?
● In streaming, this is not as obvious as in batch
● An update on each row write would produce more metadata than actual data…
● An update only at a potential job end would give us no meaningful information in the meantime
When does a dataset get updated?
● Flink: maybe on checkpoint?
● Checkpointing is finicky: intervals range from 100 ms to 10 minutes
● So configure a minimum event emission interval separately
● OpenLineage's additive model fits this really well
● Spark: microbatch?
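Decoupling emission frequency from the checkpoint interval amounts to a simple throttle: report on every checkpoint, but only actually emit if enough time has passed. A minimal sketch (the class and parameter names here are ours, not the integration's):

```python
import time

class EmissionThrottle:
    """Emit lineage events on checkpoint, but no more often than
    min_interval seconds, regardless of how often checkpoints fire."""

    def __init__(self, min_interval: float, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock          # injectable for testing
        self.last_emit = None

    def should_emit(self) -> bool:
        now = self.clock()
        if self.last_emit is None or now - self.last_emit >= self.min_interval:
            self.last_emit = now
            return True
        return False
```

With a 100 ms checkpoint interval and `min_interval=60`, roughly one event per minute reaches the lineage backend; because the OpenLineage model is additive, the skipped intermediate updates are not a correctness problem.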
Dynamic transformation modification
● KafkaSource can discover new topics during execution when passed a wildcard pattern
● We can catch this and emit an event containing this information when it happens
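The detection can be sketched as diffing the set of pattern-matched topics between discovery polls. The tracker below is purely illustrative; KafkaSource's real topic discovery lives inside the connector:

```python
import re

class TopicDiscoveryTracker:
    """Track topics matching a wildcard pattern and report newly
    discovered ones, so a lineage update can be emitted for them."""

    def __init__(self, pattern: str):
        self.pattern = re.compile(pattern)
        self.known: set[str] = set()

    def poll(self, cluster_topics: set[str]) -> list[str]:
        """cluster_topics: all topic names currently on the cluster
        (in a real integration this would come from the Kafka metadata)."""
        matched = {t for t in cluster_topics if self.pattern.fullmatch(t)}
        new = matched - self.known
        self.known |= matched
        return sorted(new)  # newly matched topics -> emit an event for these

tracker = TopicDiscoveryTracker(r"orders\..*")
first = tracker.poll({"orders.eu", "payments"})                 # ['orders.eu']
later = tracker.poll({"orders.eu", "orders.us", "payments"})    # ['orders.us']
```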
4. OpenLineage Flink Integration update
OpenLineage has a Flink integration!
● OpenLineage provides a Flink JobListener that notifies you on job start and end
● Support for Kafka, Iceberg, Cassandra, JDBC…
● Notifies you when a job starts, when it ends, and on checkpoints at a configurable interval
● Additional metadata: schemas, how much data was processed…
The idea is simple; the execution is more complex
The integration has its limits
● Very limited: it requires a few undesirable things, like setting execution.attached
● No SQL or Table API support!
● You need to manually attach the JobListener to every job
● OpenLineage's preferred solution would be to run the listener on the JobManager in a separate thread
And the internals are even more complex
● Basically, a lot of reflection
● The API wasn't made for this use case: a lot of things are private, a lot of things are in class internals
● OpenLineage's preferred solution would be an API for connectors to implement, where they would be responsible for providing correct data
And it even has evil hacks
● The list of transformations inside StreamExecutionEnvironment gets cleared a moment before the JobListeners are called
● Before that happens, we replace the clearable list with one that keeps a copy of the data on `clear`
So, why bother?
● We opportunistically created the integration despite its limitations, to gather interest and provide even that limited value
● The long-term solution would be a new API for Flink without any of those limitations
○ A single API for both the DataStream and SQL APIs
○ Not depending on any particular execution mode
○ Connectors responsible for their own lineage: testable and dependable!
○ No reflection :)
○ Possible to add column-level lineage support in the future
● And we waited in that state for a bit
And then something happened
● FLIP-314: Support Customized Flink Job Listener, by Fang Yong and Zhanghao Chen
● A new JobStatusChangedListener
○ JobCreatedEvent
○ JobExecutionStatusEvent
● JobCreatedEvent contains a LineageGraph
● Both DataStream and SQL/Table API support
● No attachment problem
● Sounds perfect?
LineageGraph
Problem with LineageVertex
● How do you know all possible connector implementations?
● How do you support custom connectors, for which the code is not known?
○ …reflection?
● How do you deal with breaking changes in connectors?
○ …even more reflection?
Find a solution with the community
● Voice your concerns and propose how to resolve the issue
● Open discussion on Jira, the Flink Slack, and the mailing list
● We managed to gain consensus and develop a solution that fits everyone involved
● Build a community around lineage
The resulting API is really nice
Facets allow extending the data
● Directly inspired by OpenLineage facets
● They allow you to attach any atomic piece of metadata to your dataset or vertex metadata
● Both built into Flink, like DatasetSchemaFacet, and external or connector-specific
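In spirit, a facet is just a named, atomic piece of metadata merged into an entity. A Python sketch of the idea (the facet names and shapes below are invented for illustration; Flink's actual facets are Java interfaces):

```python
def add_facets(vertex: dict, **facets) -> dict:
    """Return a copy of a lineage vertex with the given facets merged in.
    Each facet is an independent, atomic piece of metadata."""
    merged = {**vertex.get("facets", {}), **facets}
    return {**vertex, "facets": merged}

vertex = {"namespace": "kafka://prod", "name": "orders"}
vertex = add_facets(
    vertex,
    schema={"fields": [{"name": "id", "type": "BIGINT"}]},  # like DatasetSchemaFacet
    freshness={"lagSeconds": 12},                           # hypothetical custom facet
)
```

Because facets are independent of each other, a connector can attach only what it knows about, and consumers can ignore facets they do not understand.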
FLIP-314 will power OpenLineage
● Lineage driven by connectors is resilient
● Works for both the DataStream and SQL/Table APIs
● Not dependent on any execution mode
5. What does the future hold?
Support for other streaming systems
● Spark Streaming
● Kafka Connect
● …
Column-level lineage support for Flink
● It’s a hard problem!
● But maybe not for SQL?
● UDFs definitely break simple solutions
Native support for Spark connectors
● In contrast to Flink, Spark already has an extension mechanism that lets you view the internals of a job as it's running: SparkListener
● We use the LogicalPlan abstraction to extract metadata
● We have very similar issues as with Flink :)
● Internal vs. external logical plan interfaces
● DataSourceV2 implementations
Support for the "raw" Kafka client
● It's very popular to use the raw client to build your own system, not only external systems
● bootstrap.servers is non-unique and ambiguous; use the Kafka cluster ID instead
● Execution is spread over multiple clients, but maybe not every one of them needs to always report
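The naming idea can be sketched as keying the dataset namespace on the cluster ID rather than on the bootstrap string. The lookup table below stands in for what a real client would fetch from the broker via the admin API's DescribeCluster call; the cluster ID value is made up:

```python
# Hypothetical mapping from bootstrap.servers strings to cluster IDs;
# in practice the client would ask the cluster itself (DescribeCluster).
CLUSTER_ID_BY_BOOTSTRAP = {
    "broker1:9092,broker2:9092": "lkc-abc123",
    "broker2:9092": "lkc-abc123",  # partial broker list, same physical cluster
}

def kafka_dataset_id(bootstrap_servers: str, topic: str) -> dict:
    """Identify a Kafka topic by cluster ID, so that clients configured with
    different (but equivalent) bootstrap strings agree on the dataset."""
    cluster_id = CLUSTER_ID_BY_BOOTSTRAP[bootstrap_servers]  # stubbed lookup
    return {"namespace": f"kafka://{cluster_id}", "name": topic}

a = kafka_dataset_id("broker1:9092,broker2:9092", "orders")
b = kafka_dataset_id("broker2:9092", "orders")  # same cluster, different string
```

With bootstrap strings as the key, `a` and `b` would look like two different datasets; keyed by cluster ID, they resolve to the same one.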
OpenLineage is open source
● OpenLineage integrations are open source, with open governance within LF AI & Data
● The best way to fix a problem is to fix it yourself :)
● The second best way is to be active and raise awareness
○ Maybe other people are also interested?
Thanks :)
