Rim Zaydullin (zaydullinr@seagroup.com)

Platform Engineering Group (PEG), Shopee 2018
BUILDING DATA PIPELINES IN SHOPEE
WITH DEC
WHY?
*LONG INTRO
Before diving in, we need to understand the context and reasoning.

Since some of you are from outside the company, I need to give a bit more detail on how things work.

So, bear with me.
Behind the scenes of any internet company
Any project begins with real life.

And real life shows that every company has a mess of varying scale.

Separate parts or subsystems can be very clean and pretty, but we never stop our progress, and even clean systems deteriorate with time, due to project evolution and new features, higher loads that require new architectures, etc.

Engineers are the creators and the cleaners of this mess; today we'll talk about the cleaning up.

One example would be:
Shopee app
We're Shopee, we're doing e-commerce :D

People buy and sell stuff, and when they do, they have these useful info numbers on their "orders" page: to ship, etc.

Now, we found out that having these numbers can cause some nasty pain during sale events.
Why?



CORE SERVER
Mobile app
&
Web clients
DB1 DB2 DB3
CORE SERVER*
*INSANELY SIMPLIFIED VIEW
Some intro about the core server
create transaction
commit transaction
query

query

query

query

query

SLOW (locking) query

query
query
CORE SERVER
Mobile app
&
Web clients
DB1 DB2 DB3
BOTTLENECKS!
When people buy stuff, the "to_ship" number changes for the seller.

All those numbers (to_ship, to_receive, returns, etc.) are a bunch of values in a single row of a table.

When a lot of people buy stuff, this row gets many simultaneous updates, which leads to row locks, which leads to transactions timing out, and we get an avalanche effect:

an avalanche, because when users can't make a purchase, they retry the whole big transaction again and again; we can't serve new users; they accumulate; everyone retries their purchase again, and the whole system is brought to a crawl.

Shopee users are not happy, our DBAs are not happy, we gotta do something.
CORE SERVER
Mobile app
&
Web clients
DB1 DB2 DB3
Let's process slow (locking)
queries in the background,
asynchronously
?
?
??
?
These info numbers are not critically important in the big scheme of things; they can be processed in the background.

They can even be a bit delayed; it's no problem.

So we need some new system outside of the core server that could handle these requests in the background.
DB3
Let's process slow (locking)
queries in the background,
asynchronously
?
?
??
?
These info numbers are not critically important in the big scheme of things; they can be processed in the background.

They can even be a bit delayed; it's no problem.

So we need some new system outside of the core server that could handle these requests in the background.

In fact, we don't need the core server to care about this logic at all. An external system could track buyer actions from DB changes and update the seller records accordingly.
Source
DB
Destination
DB
Magic Data
Pipeline??
Looks like we need something like this: a general solution.
CORE SERVER
Mobile app
&
Web clients
SERVICE
redis queue A
redis queue B
transformation
server
DB1 DB2 DB3
CODE / INFRA BLOAT!
Let's continue cleaning things up! Another example!

Explain what's going on.

It's already outside the core server, but it requires the core server to carry additional code (which needs support and monitoring, and is not a general solution).

The external system can be a complicated mess that's reinvented over and over again by different teams.
CORE SERVER
Mobile app
&
Web clients
DB1 DB2 DB3
CODE / INFRA BLOAT!
CORE SERVER
CORE SERVER
Mobile app
&
Web clients
DB1 DB2 DB3
CODE / INFRA BLOAT
This magic piece of infra is a reinvented wheel every time. It needs servers and maintenance, and it's a custom solution every time.
SERVICE
redis queue A
redis queue B
transformation
server
CORE SERVER
Mobile app
&
Web clients
SERVICE
DB1 DB2 DB3
CODE / INFRA BLOAT
?
?
??
?
Data transformation
Source
DB
Destination
Service
Magic Data
Pipeline??
Again, looks like we need something like this: a general solution.
HOW?
EXISTING DB TOOLS?
TRIGGERS?

FUNCTIONS?
- Triggers allow modifying only the storage itself, using a set of predefined functions; they react to insert/update/delete queries and execute before or after the query
- Work only on the DB host itself
- Limited in data processing capabilities
- Bound to a specific DB (MySQL, Oracle, etc.)
- Cannot send requests to outside systems or queues
- Extending functionality is pretty much impossible
All problems in computer science can be solved by another level of indirection. (David Wheeler)
* the guy who invented subroutines in software
He knows a lot about indirection!
Data
Source
Data
Destination
Magic Data
Pipeline??
Again, looks like we need something like this: a general solution.
- Works as an independent service
- Has flexible data processing capabilities
- Not bound to specific data sources or destinations
- Connects completely unrelated systems in a generic way
- Easily extensible to support new systems
- It's like DB functions/triggers taken to another level
Data
Source
Data
Destination
Magic Data
Pipeline??
But the requirements are tough:
- No additional point of failure
- Source consistency preservation
- Zero loss, low latency
- Highly available, scalable
• REPLICATION
• SIMPLE TRANSFORMS
INITIAL IDEA(S)
SIMPLE WEB INTERFACE
[Web UI screenshot: a pipeline is defined as source → transformation → destination; the form captures the source and destination tables, type, sharding key, operations, a description, a columns mapping (source column → destination column, with an Enabled flag), and an operations mapping.]
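To make the mapping concrete, here is a rough sketch of what one such pipeline definition could capture. It is written as a Lua table purely for readability; the deck only says DEC's configs are simple JSON, and every field name below is an assumption, not the real schema.

```lua
-- Hypothetical pipeline definition matching the fields in the web form
-- above. Sketched as a Lua table; DEC's real configs are simple JSON
-- and the actual field names are not shown in this talk.
local pipeline = {
  enabled           = true,
  source_table      = "order_tab",             -- where events come from
  destination_table = "order_cnt_seller_tab",  -- where results go
  sharding_key      = "shopid",
  operations        = { "insert", "update" },  -- event types to react to
  columns           = {                        -- source -> destination
    shopid = "shopid",
    status = "order_status",
  },
  description       = "buyer order changes -> seller counters",
}
```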
OTHERS
LinkedIn's Change Data Capture Pipeline
SOCC 2012
We looked at other systems there are
not many in open source. It’s all
mostly internal systems never shared
with the outside world

This specific system is closely
connected with Oracle DB that’s used
at linkedin
DEC
DATABASE
QUEUE
DATABASE
QUEUE
SOURCE
MAPPING &
SIMPLE TRANSFORM
K/V
DESTINATION
• HARDCODED FUNCTIONS
• SIMPLE JSON CONFIGS
INITIAL DESIGN EXPLANATION

WE NEED A SIMPLE SYSTEM

OH WAIT…

By nature, the more complex the
system is, the more prone it is to
breaking.

BUT
REAL CASES ARE MORE COMPLEX
WAY MORE COMPLEX
Sometimes we need a transformation function that generates a request to Celery, for example. How are we going to do that?
DEC
DATABASE
QUEUE
DATABASE
QUEUE
SOURCE
K/V
DESTINATION
• HARDCODED FUNCTIONS
• SIMPLE JSON CONFIGS
• SCRIPTABLE ENGINE
MAPPING & TRANSFORM
• REPLICATION + SHARDING
• MAPPING + SIMPLE TRANSFORMS
• SCRIPTABLE ENGINE (LUA!)
• HA, LOW LATENCY, ZERO DATA LOSS
TRACKING DB EVENTS
- GDS connects directly to a MySQL instance as a slave

- Receives the logical replication log (modifications only)

- Converts the received events to JSON

- Pushes those JSON events onto Kafka topic(s)

- Highly configurable
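For a feel of what flows through the pipe, here is roughly what one decoded row-update event might look like, shown as the Lua table a transform script would receive. The real GDS JSON schema isn't shown in this talk, so all field names are illustrative.

```lua
-- Hypothetical decoded GDS row event; field names are illustrative,
-- not the actual GDS schema.
local event = {
  schema = "shopee_orders_db",                           -- source database
  table  = "order_tab_00000042",                         -- sharded source table
  type   = "update",                                     -- insert / update / delete
  old    = { shopid = 27045752, status = "TO_SHIP" },    -- row before the change
  new    = { shopid = 27045752, status = "TO_RECEIVE" }, -- row after the change
}
```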
Event!
TRACKING DB EVENTS
SOURCE MAPPING & TRANSFORM DESTINATION
DEC ARCHITECTURE:
Producer side:
1) Reads events from the data source

2) Applies transformations to events using simple transforms or a Lua script

3) Serializes the resulting queries to an internal format using msgpack

4) Writes the resulting binary queries to the configured Kafka topics

Consumer side:
1) Reads the binary queries

2) Deserializes the queries and sends them to the specified destination

3) Takes care of retry logic and event deduplication
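Stitched together, the producer side could look something like this minimal sketch. The `source`, `transform`, `msgpack`, and `kafka` parameters are hypothetical stand-ins for DEC's real internals, which the talk doesn't show.

```lua
-- Minimal sketch of the producer loop; all injected modules are
-- hypothetical stand-ins, not real DEC bindings.
local function produce_loop(source, transform, msgpack, kafka, topic)
  for event in source:events() do            -- 1) read events from the data source
    local queries = transform(event)         -- 2) simple transforms or a Lua script
    if queries then
      local payload = msgpack.pack(queries)  -- 3) serialize with msgpack
      kafka:produce(topic, payload)          -- 4) write to the configured topic
    end
  end
end
```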
CONFIGURATION
Make sure your data source is configured
(we have a DB replication stream from GDS)
Step 1
Make sure the DEC configuration has the correct data source and data destination
Step 2
CONFIGURATION
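As a rough picture of what steps 1 and 2 amount to, a DEC pipeline pairs a source with a destination. Sketched below as a Lua table for readability; per the deck the real configs are JSON, and every field name here is an assumption.

```lua
-- Hypothetical DEC source/destination config; field names are
-- assumptions, sketched as a Lua table instead of the real JSON.
local dec_config = {
  source = {
    type  = "kafka",                          -- GDS pushes events to Kafka
    topic = "gds.shopee_orders_db.order_tab", -- hypothetical topic name
  },
  destination = {
    type = "mysql",
    host = "db3.internal",                    -- hypothetical destination DB
    db   = "seller_counters_db",
  },
}
```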
Implement and deploy the necessary data transformation scripts.
Step 3
CONFIGURATION
CONSUMER
EVENT TRANSFORMATIONS
1) The DEC consumer takes an event from the GDS queue
2) Filters the event by table / event type (insert/update/delete)
3) Processes it with the corresponding Lua script (see the sketch below)
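For instance, the seller-counter case from the start of the talk could be written roughly like this. The entry-point name and the event/result shapes are assumptions, since DEC's actual Lua API isn't shown in the talk.

```lua
-- Rough sketch of a transformation script for the seller-counter case.
-- The entry point and event/result shapes are assumptions; table and
-- column names are illustrative.
function transform(event)
  -- react only to an order moving from "to ship" to "to receive"
  if event.type ~= "update" then return nil end
  if event.old.status ~= "TO_SHIP" or event.new.status ~= "TO_RECEIVE" then
    return nil
  end

  -- emit an update against the seller's counter row, similar to the
  -- real query shown in the log on the next slide
  return {
    sql = "UPDATE order_cnt_seller_tab SET " ..
          "`seller_toship` = `seller_toship` - 1, " ..
          "`seller_toreceive` = `seller_toreceive` + 1 " ..
          "WHERE `shopid` = " .. tostring(event.new.shopid),
  }
end
```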

CONSUMER EVENT TRANSFORMATIONS
2018/12/17 16:41:20.882435 [INFO] [buyer_seller_count.dec_shopee_order_details.order_2_seller]
[4097018849][619930454]

SQL: UPDATE order_cnt_seller_tab_00000002 SET `mtime` = 1545036080, `seller_toreceice` =
`seller_toreceice` + 1 , `seller_toship` = `seller_toship` - 1 WHERE `shopid` = 27045752;
rows affected: 1
SO, WHERE ARE WE AT?
LIVE IN ALL 7 COUNTRIES
USED BY 3 TEAMS,
MORE COMING ON BOARD
create transaction
commit transaction
query

query

query

query

query

SLOW (locking) query

query
query
CORE SERVER
Mobile app
&
Web clients
DB1 DB2 DB3
BOTTLENECKS!
CORE SERVER
Mobile app
&
Web clients
DB1 DB2 DB3
DEC
NO BOTTLENECKS!
CORE SERVER
Mobile app
&
Web clients
SERVICE
redis queue A
redis queue B
transformation server
DB1 DB2 DB3
CODE / INFRA BLOAT
CORE SERVER
Mobile app
&
Web clients
SERVICE
DB1 DB2 DB3
NO CODE / INFRA BLOAT
DEC
CORE SERVER
Mobile app
&
Web clients
SERVICE
DB1 DB2 DB3
DEC
SERVICE
SERVICE
CONCLUDING
All software projects keep evolving, and it's always a mess,

but we need to create decent tools to keep the entropy at bay,
and DEC is one such attempt in this never-ending battle :)
THANK YOU!
Q&A
Rim Zaydullin (zaydullinr@seagroup.com)

Platform Engineering Group (PEG), Shopee 2018
