7 Key Recipes for
Data Engineering
Introduction
We will explore 7 key recipes for Data Engineering.
The 5th is absolutely game-changing!
Thank You
Bachir AIT MBAREK
BI and Big Data Consultant
About Me
Jonathan WINANDY
Lead Data Engineer:
- Data Lake building,
- Audit / Coaching,
- Spark Training.
Founder of Univalence (BI / Big Data)
Co-Founder of CYM (IoT / Predictive Maintenance),
Craft Analytics† (BI / Big Data),
and Valwin (Health Care Data).
2016 has been amazing for Data Engineering!
but ...
1. It’s all about our organisations!
1. It’s all about our Organisations
Data engineering is not about scaling computation.
1. It’s all about our Organisations
Data engineering is not a support function for Data Scientists[1].
[1] whatever they are nowadays
1. It’s all about our Organisations
Instead, Data engineering enables access to Data!
1. It’s all about our Organisations
access to Data … in complex organisations.
(Diagram: Product, Ops, BI, Marketing, and you, exchanging data and new data.)
1. It’s all about our Organisations
access to Data … in complex organisations.
(Diagram: a holding with Entity 1 … Entity N, each with its own Marketing and IT; you exchange data and new data across them.)
1. It’s all about our Organisations
access to Data … in complex organisations.
It’s very frustrating!
We run a support group meetup if you are interested: Paris Data Engineers!
1. It’s all about our Organisations
Small tips:
Only one Hadoop cluster (no TEST/REC/INT/PREPROD).
No Air-Data-Eng, it helps no one.
Radical transparency with other teams.
Hack that sh**.
2. Optimising our work
2. Optimising our work
There are 3 key concerns governing our decisions:
Lead time
Impact
Failure management
2. Optimising our work
Lead time (noun):
The period of time between the initial phase of a process and the emergence of results, as between the planning and completed manufacture of a product.
Short lead times are essential!
The Elastic stack helps a lot in this area.
2. Optimising our work
Impact
To have impact, we have to analyse beyond
immediate needs. That way, we’re able to
provide solutions to entire kinds of
problems.
2. Optimising our work
Failure management
Things fail, be prepared!
On the same morning, the RER A commuter line and our Hadoop job tracker can both fail.
Unprepared-for failures may pile up and lead to huge waste.
2. Optimising our work
How to mitigate failure, in 7 questions:
“What is likely to fail?” $componentName_____
“How? (root cause)”
“Can we know if this will fail?”
“Can we prevent this failure?”
“What are the impacts?”
“How to fix it when it happens?”
“Can we facilitate today?”
2. Optimising our work
Track your work!
3. Staging the Data
3. Staging the data
Data is moving around, freeze it!
Staging changed with Big Data. We moved from transient
staging (FTP, NFS, etc.) to persistent staging in distributed
solutions:
● In Streaming with Kafka, we may retain logs in Kafka
for several months.
● In Batch, staging in HDFS may retain source Data for
years.
3. Staging the data
Modern staging anti-patterns:
Dropping destination places before moving the Data.
Making incomplete data visible.
Short log retention in streams (=> new failure modes).
Modern staging should be seen as a persistent data structure.
3. Staging the data
HDFS staging:
/staging
|-- $tablename
|-- dtint=$dtint
|-- dsparam.name=$dsparam.value
|-- ...
|-- ...
|-- uuid=$uuid
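
A minimal sketch of writing a batch into such a layout with Spark, assuming a SparkSession is in scope; the table name, dtint value, and parquet format are illustrative:

import java.util.UUID
import org.apache.spark.sql.{DataFrame, SaveMode}

// Hypothetical helper: every drop goes to a fresh uuid directory,
// so staging stays append-only and nothing already staged is overwritten.
def stage(df: DataFrame, tablename: String, dtint: Int): String = {
  val path = s"/staging/$tablename/dtint=$dtint/uuid=${UUID.randomUUID()}"
  df.write.mode(SaveMode.ErrorIfExists).parquet(path)
  path
}

Readers then pick uuid directories under a given dtint once they are fully written, which avoids the “incomplete data visible” anti-pattern above.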
4. Using RDDs or Dataframes
4. Using RDDs or Dataframes
Dataframes have great performance,
but are untyped and foreign.
RDDs have a robust Scala API,
but are a pain to map from data sources.
btw, SQL is AWESOME
4. Using RDDs or Dataframes
Dataframes | RDDs
Predicate push down | Types !!
Bare metal / unboxed | Nested structures
Connectors | Better unit tests
Pluggable Optimizer | Fewer stages
SQL + Meta | Scala * Scala
4. Using RDDs or Dataframes
We should use RDDs in large ETL jobs (a minimal sketch follows below):
Loading the data with dataframe APIs,
Basic case class mapping (or better, Datasets),
Typesafe transformations,
Storing with dataframe APIs.
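
A minimal sketch of that pattern, under hypothetical names (the Event case class, paths, and filter are illustrative, not the talk’s actual jobs):

import org.apache.spark.sql.SparkSession

// Hypothetical record type for illustration.
case class Event(userId: String, action: String, durationMs: Long)

val spark = SparkSession.builder().appName("etl-sketch").getOrCreate()
import spark.implicits._

// 1. Load with the dataframe APIs, map to a case class, then drop to an RDD.
val events = spark.read.parquet("/staging/events/dtint=20170101").as[Event].rdd

// 2-3. Typesafe transformations on plain Scala values.
val slowEvents = events
  .filter(_.durationMs > 1000)
  .map(e => e.copy(action = e.action.trim))

// 4. Store with the dataframe APIs.
slowEvents.toDF().write.parquet("/staging/events_slow/dtint=20170101")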
4. Using RDDs or Dataframes
Dataframes are perfect for:
Exploration, drill down,
Light jobs,
Dynamic jobs.
4. Using RDDs or Dataframes
RDD based jobs are like marine
mammals.
5. Cogroup all the things
5. Cogroup all the things
The cogroup is the best operation to link data together.
It fundamentally changes the way we work with data.
5. Cogroup all the things
join (left:RDD[(K,A)],right:RDD[(K,B)]):RDD[(K,( A , B ))]
leftJoin (left:RDD[(K,A)],right:RDD[(K,B)]):RDD[(K,( A ,Option[B]))]
rightJoin (left:RDD[(K,A)],right:RDD[(K,B)]):RDD[(K,(Option[A], B) )]
outerJoin (left:RDD[(K,A)],right:RDD[(K,B)]):RDD[(K,(Option[A],Option[B]))]
cogroup (left:RDD[(K,A)],right:RDD[(K,B)]):RDD[(K,( Seq[A], Seq[B]))]
groupBy (rdd:RDD[(K,A)]):RDD[(K,Seq[A])]
With cogroup and groupBy, for a given key: K, there is exactly one row with that key in the output dataset.
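
A minimal sketch of linking two datasets with cogroup in Spark; the record types and inputs are hypothetical, and note that Spark’s own cogroup exposes each side as an Iterable rather than a Seq:

import org.apache.spark.rdd.RDD

// Hypothetical record types, both keyed by visitorId.
case class Visit(page: String)
case class Order(amount: Double)

val visits: RDD[(String, Visit)] = ???
val orders: RDD[(String, Order)] = ???

// Exactly one output row per key, carrying all visits and all orders for that key.
val linked: RDD[(String, (Iterable[Visit], Iterable[Order]))] =
  visits.cogroup(orders)

The per-key map/filter pattern on the next slide is then applied to rows of this shape.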
5. Cogroup all the things
5. Cogroup all the things
{ case (k, (s1, s2)) =>
    (k, (s1.map(fA).filter(pA),
         s2.map(fB).filter(pB))) }
CHECKPOINT
5. Cogroup all the things
3k LoC, 30 minutes to run (non-blocking)
15 LoC, 11 hours to run (blocking)
5. Cogroup all the things
What about tests? Cogrouping allows us to have “ScalaCheck-like” tests, by minimising examples.
Test workflow (a sketch follows below):
Write a predicate to isolate the bug.
Get the minimal cogrouped row.
Output the row to test resources.
Reproduce the bug.
Write tests and fix the code.
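
A minimal sketch of that workflow, reusing the hypothetical linked RDD from the cogroup sketch above; the predicate and fixture path are illustrative:

import java.nio.file.{Files, Paths}

// 1. A predicate that isolates the bug on a single cogrouped row,
//    e.g. keys that have orders but no visit at all.
def exhibitsBug(row: (String, (Iterable[Visit], Iterable[Order]))): Boolean =
  row._2._1.isEmpty && row._2._2.nonEmpty

// 2-3. Get one minimal offending row and output it to test resources.
val sample = linked.filter(exhibitsBug).take(1)
Files.write(Paths.get("src/test/resources/minimal-bug-row.txt"),
            sample.mkString("\n").getBytes("UTF-8"))

// 4-5. A unit test then reproduces the bug from that fixture,
//      and the fix is written against it.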
6. Inline data quality
6. Inline data quality
Data quality improves resilience to bad data.
But data quality concerns come second.
6. Inline data quality
Example:
case class FixeVisiteur(
  devicetype: String,
  isrobot: Boolean,
  recherche_visitorid: String,
  sessions: List[FixeSession]
) {
  def recherches: List[FixeRecherche] = sessions.flatMap(_.recherches)
}

object FixeVisiteur {
  @autoBuildResult
  def build(
    devicetype: Result[String],
    isrobot: Result[Boolean],
    recherche_visitorid: Result[String],
    sessions: Result[List[FixeSession]]
  ): Result[FixeVisiteur] = MacroMarker.generated_applicative
}
6. Inline data quality
case class Annotation(
anchor: Anchor,
message: String,
badData: Option[String],
expectedData: List[String],
remainingData: List[String],
level: String @@ AnnotationLevel,
annotationId: Option[AnnotationId],
stage: String
)
case class Anchor(path: String @@ AnchorPath,
typeName: String)
6. Inline data quality
Messages:
EMPTY_STRING
MULTIPLE_VALUES
NOT_IN_ENUM
PARSE_ERROR
______________
Levels:
WARNING
ERROR
CRITICAL
6. Inline data quality
Data quality is available within the output rows.
case class HVisiteur(
visitorId: String,
atVisitorId: Option[String],
isRobot: Boolean,
typeClient: String @@ TypeClient,
typeSupport: String @@ TypeSupport,
typeSource: String @@ TypeSource,
hVisiteurPlus: Option[HVisiteurPlus],
sessions: List[HSession],
annotations: Seq[HAnnotation]
)
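
A minimal sketch of turning those annotations into the counts shown on the next slide, assuming HAnnotation carries the same stage / anchor / message / level fields as the Annotation case class above (the RDD name is hypothetical):

import org.apache.spark.rdd.RDD

val visiteurs: RDD[HVisiteur] = ???

// Count annotations by (stage, path, message, level) across all output rows.
val annotationKpis =
  visiteurs
    .flatMap(_.annotations)
    .map(a => ((a.stage, a.anchor.path, a.message, a.level), 1L))
    .reduceByKey(_ + _)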
6. Inline data quality
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> lib_source, message -> NOT_IN_ENUM, type -> String @@ LibSource, level -> WARNING)),657366)
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> analyseInfos.analyse_typequoi, message -> EMPTY_STRING, type -> String @@ TypeRecherche, level -> WARNING)),201930)
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> isrobot, message -> MULTIPLE_VALUE, type -> String, level -> WARNING)),15)
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> rechercheInfos, message -> MULTIPLE_VALUE, type -> String, level -> WARNING)),566973)
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> reponseInfos.reponse_nbblocs, message -> MULTIPLE_VALUE, type -> String, level -> WARNING)),571313)
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> requeteInfos.requete_typerequete, message -> MULTIPLE_VALUE, type -> String, level -> WARNING)),315297)
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> analyseInfos.analyse_typequoi_sec, message -> EMPTY_STRING, type -> String @@ TypeRecherche, level -> WARNING)),201930)
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> typereponse, message -> EMPTY_STRING, type -> String @@ TypeReponse, level -> WARNING)),323614)
(KeyPerformanceIndicator(Annotation,annotation,Map(stage -> MappingFixe, path -> grp_source, message -> MULTIPLE_VALUE, type -> String, level -> WARNING)),94)
6. Inline data quality
https://github.com/ahoy-jon/autoBuild (presented in October 2015)
There are opportunities to make those approaches more “precepte-like”
(DAG of workflow, provenance of every field, structure tags).
7. Create real programs
7. Create real programs
Most pipelines are designed as “stateless” computations.
They either require no state (good),
or they infer the current state from the filesystem’s state (bad).
7. Create real programs
Solution: allow pipelines to access a commit log, to read about past executions and to push data for future executions.
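
A minimal sketch of what such a commit log could look like from a pipeline’s point of view; all names are hypothetical and not the actual Kerguelen API described next:

// Hypothetical record of one pipeline run.
final case class Execution(jobName: String,
                           runId: String,
                           inputs: Seq[String],
                           outputs: Seq[String],
                           startedAt: Long,
                           status: String)

// Hypothetical commit-log interface backing the pipelines.
trait CommitLog {
  // Read past executions to decide what still needs to be (re)computed.
  def history(jobName: String): Seq[Execution]
  // Append a record so future executions (and other jobs) can rely on it.
  def append(execution: Execution): Unit
}

A batch job would check history before recomputing a partition, and append a record once its outputs are staged.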
7. Create real programs
In progress: project codename Kerguelen.
Multi-level abstractions / commit-log backed / API for jobs.
It allows the creation of jobs at different concern levels:
Level 1: name resolving
Level 2: smart intermediaries (schema capture, stats, delta, …)
Level 3: smart high-level scheduler (replay, load management, coherence)
Level 4: “code as data” (=> continuous delivery, auto QA, auto deployment)
Conclusion
Thank you
for listening!
Questions?
jonathan@univalence.io