Mutable Data @ Scale
afinkelstein@salesforce.com
Alexey Finkelstein, Software Engineer
Private & Confidential
Datorama At-A-Glance
● Founded in 2012 by Ran Sarig, Efi Cohen & Katrin Ribant
● 450+ employees & growing quickly
● Acquired in October 2018
● 2000+ brands
● 300+ agencies
● 50+ publishers
● 23 industry verticals
● 19 offices worldwide
Broad blue-chip customer base
● 23+ verticals
● 300 agencies
● 2000+ brands
Every agency holding group that has run an RFP for a global client reporting solution in the last 3 years has selected Datorama as their platform of record.
Datorama
Connect & Unify Marketing Data Sources
Integrate, cleanse, and classify data into a unified view using AI
Visualize AI-Powered Insights
Surface insights to optimize channel and campaign performance in real time
Report Across Channels and Campaigns
Powerful one-click dashboards, custom visualizations, and shareable reports
Collaborate and Act to Drive ROI
Make every insight actionable with cross-platform alerts and activations
Enable cross-platform marketing intelligence
Spend Your Time Wisely
From 80%+ of time spent on preparations to 80%+ spent on insights
Time to Insight
Data To Insights In Minutes
Scale - in numbers
• 3.5M interactive analytical queries served per day
• 700,000 data streams processed daily
• 100,000 reports generated daily
• 25,000 workspaces
• 30,000 users
• 1.5 PB of data available for interactive querying
• 99.9% uptime
• 4 fully redundant geographical deployments
• ~600 servers
• >50 microservices
Salesforce Acquisition
August 2018 - $850M
Data Lake
DatoLakes (Datorama Data Lakes)
Granular data support at reduced cost
● Your granular data together with your aggregated data in one view
● Aimed at raw data, including ETL, storage, SQL access, and reporting
● Aimed at data that is accessed less frequently and at low concurrency, at lower cost
● Raw data can later be aggregated and joined with the rest of the data model
DatoLakes (Datorama Data Lakes) - Challenges
● Managing a data lake is a big hassle (ETL, queries & other controls)
● Merging between granular and aggregate sources is a must
● Datorama's answer: provide "lake as a service"
Data is NOT immutable
● External vendors have reconciliation windows (up to 6 months)
● Our users want to update/delete specific rows/sets
● Our users love to backdate
● Most (if not all) big data solutions are append-only, and updating the data is considered a heavy process
● Transactional updates are required
The Solution
Requirements
● Separation of compute and storage - MUST
● MPP query engine - MUST
● ANSI SQL - MUST
● JDBC (for external clients) - MUST
● Transactional and not append only - MUST
● Cloud Vendor Agnostic - MUST
● Linear Scale - MUST
The solution we decided on was Presto with S3/Azure Blob Storage
High-Level Update Flow
1. Read the input file
2. Determine which data segments it operates on
3. Read the corresponding segments of the table from storage
4. Update the segments with the input data
5. Store the result to a new location with a new version number
6. Add the updated partitions to Hive
7. Outdated partitions are cleaned up in the background
(Diagram: partitions A, B, C; updated copies are written as new versions A*, B*, C* alongside the originals.)
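The copy-on-write flow above can be sketched as follows. This is a minimal in-memory sketch, not Datorama's implementation: `storage` and `metastore` are plain dicts standing in for S3/Azure and the external metastore, and the row/partition field names (`id`, `date`) are hypothetical.

```python
def apply_update(storage, metastore, table, input_rows):
    # 1-2. Group the input rows by the partition (segment) they touch
    by_partition = {}
    for row in input_rows:
        by_partition.setdefault(row["date"], []).append(row)

    new_versions = {}
    for part, updates in by_partition.items():
        # 3. Read the current segment of the table from storage
        version = metastore.get(table, {}).get(part, 0)
        current = storage.get(f"{table}/{part}_{version:03d}", [])
        # 4. Merge: rows with matching ids are replaced, new ids appended
        merged = {r["id"]: r for r in current}
        merged.update({r["id"]: r for r in updates})
        # 5. Write to a NEW location carrying the next version number
        storage[f"{table}/{part}_{version + 1:03d}"] = list(merged.values())
        new_versions[part] = version + 1

    # 6. Publish the new versions (the partition "swap")
    metastore.setdefault(table, {}).update(new_versions)
    # 7. Outdated locations stay behind for background cleanup

storage, metastore = {}, {}
apply_update(storage, metastore, "facts", [{"id": 1, "date": "20190101", "x": 1}])
apply_update(storage, metastore, "facts", [{"id": 1, "date": "20190101", "x": 2}])
```

Note that the old segment is never rewritten in place; readers keep seeing the previously published version until step 6 flips the pointer.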
Mutable Data - Swap Partition Requirements
● The ETL process should trigger a swap of one or more partitions at the end of the process
● The swap must be transactional (to avoid dirty reads)
● It must support transactionally changing multiple partitions in multiple tables at the same time
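The transactional swap requirement can be illustrated with a database transaction. This sketch uses sqlite3 as a stand-in for the real metastore database, with a hypothetical `active_versions` table; the point is that all partition pointers across all tables flip in one commit.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE active_versions (
    tbl TEXT NOT NULL, part TEXT NOT NULL, version INTEGER NOT NULL,
    PRIMARY KEY (tbl, part))""")

def swap_partitions(con, changes):
    """Atomically publish new active versions across partitions and tables.
    `changes` is a list of (table, partition, new_version) tuples."""
    with con:  # a single transaction: all swaps commit together, or none do
        con.executemany(
            "INSERT OR REPLACE INTO active_versions (tbl, part, version) "
            "VALUES (?, ?, ?)",
            changes)

# Swap partitions in two different tables in one transaction
swap_partitions(con, [("facts", "20190101", 9), ("dims", "20190101", 3)])
```

A query that reads `active_versions` mid-swap sees either the old set of versions or the new one, never a mix, which is exactly the dirty-read guarantee the slide asks for.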
Architecture
(Diagram: components include S3/Azure Blob Storage, a metastore, an ETL queue, a resource manager, and the query layer.)
Solution #1 - First Attempt (Past)
1. Partition the table by a “key_version” field
a. key = actual column value
b. version = incremental number
c. e.g. 20190101_009
2. Create an external metastore that holds the active version of each partition (per table)
3. Commit the changes at the end of the ETL (cross-partition / cross-table) to support a transactional process
4. Connect the metastore table to Hive and include a subquery in every generated query
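The “key_version” naming and the subquery from step 4 can be made concrete like this (the table and column names `metastore_active_versions` / `tbl` are hypothetical stand-ins for the deck's metastore):

```python
def partition_name(key: str, version: int) -> str:
    # key = the actual column value, version = an incremental number
    return f"{key}_{version:03d}"

# e.g. the slide's example partition:
name = partition_name("20190101", 9)  # "20190101_009"

# Every generated query restricts key_version via a subquery against the
# metastore table that Hive exposes (hypothetical names):
generated = """
SELECT * FROM facts
WHERE key_version IN (
    SELECT key_version FROM metastore_active_versions WHERE tbl = 'facts')
"""
```

The catch, as the next slide explains, is that a subquery like this is opaque to the engine's partition pruning.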
Solution #2 - Present
Problem with #1: the inlined subquery did not trigger partition pruning in Presto
1. Query the metastore while generating the query, to get the list of partitions relevant to that query
2. Inline the resulting filter into the query
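The two-step generation can be sketched as below: resolve active versions first, then inline them as literals so the engine can prune partitions statically. A minimal sketch with hypothetical names; the real metastore lookup is a database query, not a dict.

```python
def generate_query(metastore, table, keys):
    # Step 1: resolve the active key_version values from the metastore
    active = [f"{key}_{metastore[table][key]:03d}" for key in keys]
    # Step 2: inline them as literals so Presto can prune partitions
    in_list = ", ".join(f"'{v}'" for v in active)
    return f"SELECT * FROM {table} WHERE key_version IN ({in_list})"

metastore = {"facts": {"20190101": 9, "20190102": 2}}
sql = generate_query(metastore, "facts", ["20190101", "20190102"])
# sql == "SELECT * FROM facts WHERE key_version IN ('20190101_009', '20190102_002')"
```

With literal values in the IN-list, the engine knows the exact partitions at planning time and can skip reading all others.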
Solution #3 - Future
Problem with #2: the process requires two steps (query the metastore, then query Presto) and does not support direct SQL access for clients
1. Update the Hive metastore database (MySQL) directly in a transactional manner, just like we updated our own metastore
2. Refresh the Presto/Hive caches to pick up the new metastore state
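The idea is to repoint Hive's partition locations inside one transaction. The sketch below simulates this with sqlite3 and a single simplified `partitions` table; the real Hive metastore schema is more involved (partition storage locations live in the `SDS` table, referenced from `PARTITIONS`), so treat the table and column names here as illustrative only.

```python
import sqlite3

# Stand-in for the Hive metastore's backing MySQL database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE partitions (tbl TEXT, part_name TEXT, location TEXT)")
con.execute("INSERT INTO partitions VALUES "
            "('facts', 'dt=20190101', 's3://lake/facts/20190101_008')")
con.commit()

def repoint_partitions(con, moves):
    # One transaction, just like the updates to our own metastore:
    # readers see either all old locations or all new ones, never a mix.
    with con:
        con.executemany(
            "UPDATE partitions SET location = ? WHERE tbl = ? AND part_name = ?",
            moves)

repoint_partitions(con, [("s3://lake/facts/20190101_009", "facts", "dt=20190101")])
# Presto/Hive caches must still be refreshed before queries see the change.
```

Because the metastore itself now holds the active locations, clients can issue plain SQL directly, with no extra metastore round-trip at query time.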
Retrospective
● We were able to check off all of our requirements
○ Separation of compute and storage, MPP query engine, ANSI SQL, JDBC, transactional, cloud vendor agnostic & linearly scalable
● Data is stored in ORC files (given the nature of our queries, this was a big performance boost)
● Everybody is happy :)
We’re Hiring!
Contact us at
http://datorama.com/join-us
https://engineering.datorama.com/
Editor's Notes

  • #3: Talk Track: (added by Idit) Datorama was started 6 years ago, in 2012, by Ran, Efi and Katrin, focusing on marketers and marketers only. Datorama is a SaaS (software as a service) platform that gives marketers everything they need to connect all of their data sources together into a single source of truth for analysis and insights. It has 17 offices around the globe and over 380 employees, and keeps growing. Let’s talk about the challenge we solve. If you’re a modern marketer, you’re engaging audiences with your brand across different regions, using different campaigns. By definition you’re using a lot of different technologies to do that. Bringing everything together – all the data that is extremely siloed across those different technologies – is a real operational problem.
  • #4: Talk track for this slide: We have a lot of great customers, even before joining Salesforce. We solve a painful problem that exists at scale. Call out IBM, Salesforce, EA, Ticketmaster, etc. Agency groups have been quick to adopt the platform at scale – we are the preferred supplier for 4 of the top 5 groups. This is not a coincidence – we are the best at solving this. 70-30 split, but evolving…
  • #5: This is where the power of Datorama comes in. Datorama enables cross-platform marketing intelligence. What does that mean? It means one single place to: • Connect and unify all of your marketing data and insights in one centralized place across Marketing Cloud technologies and any tools and technologies in the market – all clicks, no code. • Visualize AI-powered insights across all your data so you can take action at scale to achieve your KPIs. • Easily report across all your channels and campaigns so every stakeholder in your organization has the right information at their fingertips. • And collaborate and take action to drive ROI, to bring your organization together towards common goals. This helps marketers hold every investment and activity accountable!
  • #9: Talk Track: (added by Idit) Scalable – horizontal scale in every module/service. The biggest challenge for all growing channels, customers, and processing jobs is to have a scalable solution. Multi-tenancy is a big challenge. S3 TB usage. Total rows – customer data. API streams – connections to external customer accounts with updated data.