Tsahi Glik
Sep 12, 2019
ML Infra @ Dropbox
Overview
ML @ Dropbox
Our signal sources:
Files
Multi-exabyte data
File Metadata
Trillions
User interactions
Billions / day
ML @ Dropbox
ML Impact at Dropbox:
● Smart Sync
● Content Suggestions
● Team Activity Ranking
● Search Ranking
● OCR
And many more …
ML Platform
Challenges:
● Huge data sources that are isolated in various systems across production
● Multiple privacy levels of data
● Custom work to build dedicated services for each new use case
● Manual training workflows that are hard to reproduce
● Wide variety of development processes and ML frameworks
ML Platform
Mission:
Accelerate intelligent product development at Dropbox
By:
● Scalable access to data, both offline and online
● Ensuring sensitive data is protected and accessed only in approved ways
● Easy model deployment & experimentation
● Automate workflows
● Standardize the process, frameworks and tools
Platform Architecture
Online Data Collection
Antenna
What is Antenna?
● User activity service
● Provides various ways to query activity events
● Supports aggregations for simple summaries and histograms of activity data
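The query-and-aggregate pattern described above can be sketched in a few lines. This is a toy in-memory stand-in, not Antenna's real API: the `ActivityEvent` shape and all method names here are hypothetical, chosen only to illustrate per-user event queries and the kinds of simple aggregations (edit counts, day-of-week histograms) the slide mentions.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event record; Antenna's real schema is internal to Dropbox.
@dataclass
class ActivityEvent:
    user_id: str
    file_id: str
    action: str        # e.g. "edit", "open"
    timestamp: datetime

class ActivityIndex:
    """In-memory stand-in for an Antenna-style activity query API."""

    def __init__(self, events):
        self.events = list(events)

    def events_for_user(self, user_id):
        return [e for e in self.events if e.user_id == user_id]

    def edit_count(self, user_id, file_id):
        # Simple aggregation: how many edits one user made on one file.
        return sum(1 for e in self.events
                   if e.user_id == user_id and e.file_id == file_id
                   and e.action == "edit")

    def weekday_histogram(self, user_id):
        # Histogram of activity across days of the week (0 = Monday).
        return Counter(e.timestamp.weekday()
                       for e in self.events_for_user(user_id))
```

A production service would of course answer these queries from precomputed indexes rather than scanning events, which is exactly the online/offline split the architecture slides describe.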
Example usage of Antenna
Antenna Architecture
Content Ingestion pipeline
Read more in our blog post:
OCR
Content Ingestion Architecture
Offline Data Preparation
Data Preparation - ETL pipeline
Data Preparation - Predict logger
- Converts raw logs into labeled datasets
- Logs partial information from different services at different times
- Eliminates discrepancies between online and offline data
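The merge step above can be sketched as follows. This is a minimal illustration assuming a hypothetical event shape (`prediction_id`, `stage`, `payload`); Dropbox's actual Predict Logger API is internal. The point it shows: lifecycle events logged by different services at different times are joined by prediction id, and the label comes from whether the user acted, while the features are exactly the signals recorded at serving time.

```python
from collections import defaultdict

# Hypothetical lifecycle stages, following the talk's requested/predicted/
# viewed/acted flow.
LIFECYCLE = ("requested", "predicted", "viewed", "acted")

def merge_predict_events(events):
    """Join partial events logged by different services at different times
    into one record per prediction, keyed by prediction_id."""
    merged = defaultdict(dict)
    for event in events:
        record = merged[event["prediction_id"]]
        record[event["stage"]] = event.get("payload", {})
    return dict(merged)

def to_labeled_example(record):
    """A prediction becomes a positive example only if the user acted on it.
    Features are the serving-time signals, so offline training data matches
    what the model saw online."""
    return {
        "features": record.get("predicted", {}).get("signals", {}),
        "label": 1 if "acted" in record else 0,
    }
```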
Offline Training &
Evaluation
Prototyping
HDFS
Signal and training
data store
Spark
Zeppelin Notebooks
Multi-user notebook environment
Workbench
40 cores, 400 GB RAM
dbxlearn
Elastic ML training and
hyperparameter tuning
dbxlearn
What is dbxlearn?
● dbxlearn provides an easy way to use computing at scale for training
● Core problems dbxlearn is addressing:
○ Elasticity
○ Standard way to train on different hardware configurations (GPU, TPU) on
different cloud platforms
● Hybrid cloud architecture - interfaces with our private cluster as well as public
clouds
● Currently integrated with AWS, using SageMaker
dbxlearn Architecture
dbxlearn
Datasets
Training script
bazelized binary
Dropbox Data Center
Public Cloud (AWS)
S3
Data and code store
AWS Sagemaker
Trainers cluster
Training Instances
Training Instances
S3
Model store
deploy
train/tune
export
dbxlearn workflow
$ dbxlearn train --py-binary <script>
--train_uri <...> --validation_uri <...> [--local]
$ dbxlearn tune --py-binary <script> --train_uri <...> --validation_uri <...>
$ dbxlearn query --tuning_job_id <id> print_top_summary
$ dbxlearn deploy-model --tuning_job_id <id> <experiment-group>
Model Deployment
Predict service
Live experimentation - Suggest backend
Shadow experimentation - Suggest backend
● Send live traffic to shadow cluster with a different experiment variant
● Results are logged for experiment analysis
● Useful to collect labeled datasets using Predict Logger
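The shadow fan-out described in these bullets is a simple pattern to sketch. The function and parameter names below are hypothetical (the real Suggest backend wiring is internal to Dropbox); it shows the essential invariant: the shadow variant sees a copy of live traffic and its results are logged for analysis, but only the live model's answer ever reaches the user.

```python
def serve_with_shadow(request, live_model, shadow_model, log):
    """Serve the live model's answer, run a shadow variant on a copy of
    the same request, and log both results for offline experiment
    analysis (e.g. via a predict-logger-style pipeline)."""
    live_result = live_model(request)
    log.append({"variant": "live", "request": request, "result": live_result})

    # Shadow output is logged but never returned to the user.
    shadow_result = shadow_model(request)
    log.append({"variant": "shadow", "request": request, "result": shadow_result})

    return live_result
```

In practice the shadow call would be asynchronous and sent to a separate cluster so it cannot add latency or load to the live path.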
Example
Campaign Ranker - Using Multi Arm Bandits
Campaign Ranker - Using Multi Arm Bandits
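To make the bandit framing concrete, here is a toy epsilon-greedy ranker over competing campaign "arms". It is a deliberate simplification: Dropbox's ranker is described as a *contextual* bandit, whose policy details are not public, so this sketch drops the context features and only shows the core explore/exploit loop with per-arm running mean rewards.

```python
import random

class EpsilonGreedyRanker:
    """Toy epsilon-greedy bandit over competing campaigns; a stand-in
    for the contextual bandit described in the talk."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {arm: 0 for arm in arms}
        self.values = {arm: 0.0 for arm in arms}

    def choose(self):
        # With probability epsilon, explore a random campaign;
        # otherwise exploit the best-performing one so far.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, arm, reward):
        # Incremental mean of observed rewards (e.g. clicks) per arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

A contextual version would replace the per-arm means with a model that predicts reward from UI and user features, which matches the periodic retraining loop described in the speaker notes.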
Summary
● End-to-end platform that supports all steps in the ML development
workflow
● Deep integration with Dropbox's large-scale data sources
● Flexible APIs to support a wide variety of use cases
● Hybrid cloud architecture for elasticity and early adoption of new
technologies
Next Challenges
● Better representation of data relations across multiple systems
● Democratize ML at Dropbox, extending our tools from ML
developers to more engineers
Thank You

Editor's Notes

  • #3: Understand the scale of the data sources we use for ML features, not necessarily for training. A huge file repository, probably one of the largest in the world, containing exabytes of data with millions of new files added every day. We also have file system trees, which give our systems valuable signals on content organization and grouping. And billions of events every day of users interacting with these files, which give our systems valuable signals on how users work with files and collaborate with one another in their workplace. These are huge data sources that present lots of potential for ML, but also lots of challenges for ML developers.
  • #4: Go through some of the use cases. Smart Sync tries to predict which files you will need on each device, so we can make them available locally and vacuum unneeded files. Content Suggestions tries to simplify the retrieval process and predict which file you are looking for. Team Activity Ranking tries to increase awareness of what others are doing while filtering out the noise. Search Ranking uses an ML model to rank search results. OCR extracts text from every image uploaded to Dropbox. These are some of the highlights, but there are many more.
  • #5: So what are the challenges we are trying to solve? Our huge data sources are isolated in various systems across production, which makes them challenging to access for training. There are multiple privacy levels of data, some of which is not reviewable by engineers. Teams do custom work, building new services to solve their problems. There are complex manual training workflows that are hard to reproduce. And a wide variety of ML frameworks and tools are in use across teams.
  • #6: So our mission: accelerate intelligent product development at Dropbox. We are trying to achieve this by: providing scalable access to context for models, both offline and online; ensuring sensitive data is protected and accessed only in approved ways; making it easy to deploy models without building new services, and easy to experiment with new models; automating development workflows; and standardizing the process, frameworks, and tools for intelligent product development and release.
  • #7: All ML development shares a common basic workflow: data collection, data preparation, training & evaluation, and model deployment to production. We have developed components to support each step in this workflow. Online components integrate with Dropbox production systems to provide data in real time, and make it easy to deploy and experiment with new models. Our offline components capture historical data and make it easy to access for training and prototyping. And management components make it easy to automate these workflows.
  • #9: Serves user activity to online production services. Given a user, what are all the files that the user has interacted with? Given a file, who are all the users that have interacted with it? We also support simple aggregations, like how many edits a user made on a specific file, or histograms of the number of events across days of the week.
  • #10: In this example we are creating suggestions for a user, trying to predict which file the user will open next. The service first queries Antenna, our user activity service, for the user's activity from the last few months. It then uses that to generate a candidate list of files for suggestions, plus aggregations of the user's activity on each candidate file. An ML model then ranks the candidates and returns the top N. It is important that the data is fresh: the user's recent activity, even from the last few minutes, is relevant to what the user wants to do next.
  • #11: Online ingestion hosts process events in real time, updating indexes and aggregations to provide fresh data. Offline components persist the raw events in a durable store and rebuild the indexes and aggregations periodically. ML developers can define new aggregators and indexes that run both in online ingestion and in offline workers, and get backfilled automatically.
  • #12: Antenna is our ingestion infrastructure for user activity; let's now talk about our content ingestion infrastructure. OCR is a good example that demonstrates how an ML model runs in our content ingestion infrastructure. It is actually a multistep process involving not one but several models. Every image that passes through our ingestion pipeline is classified as to whether it contains OCR-able content; the image is rectified to align the text; a deep net model extracts word boxes; an LSTM model converts each word box to a sequence of characters; and finally a lexicon-based algorithm converts these character sequences into actual words.
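The multi-stage OCR flow in this note can be sketched as simple function composition. The stage names and stub bodies below are illustrative placeholders, not Dropbox's actual models; what the sketch shows is the pipeline shape, including the early bail-out when the classifier finds no OCR-able content.

```python
def run_ocr_pipeline(image, stages):
    """Run an image through classify -> rectify -> detect word boxes ->
    decode -> lexicon stages, bailing out early if the classifier says
    there is nothing to OCR. Each stage stands in for a real model."""
    if not stages["classify"](image):
        return []                                  # nothing to OCR
    rectified = stages["rectify"](image)
    boxes = stages["detect_word_boxes"](rectified)
    char_seqs = [stages["decode"](box) for box in boxes]
    return [stages["lexicon"](seq) for seq in char_seqs]
```

Passing the stages in as a dict of callables mirrors the plugin-style deployment the next note describes: each model variant can be swapped independently of the pipeline driver.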
  • #13: Every file update is ingested by our indexers, which call a plugin framework to run a transform on the user content (in this case an OCR model) and then store the results in Doc Store, which contains all derived data for each file. When running a transform on raw user content there are security concerns about exploits and vulnerabilities, so the plugin framework runs each plugin in a sandboxed environment we call jail. This lets us be sure that any exploits in the frameworks we are using, like ImageMagick and TF, cannot be used to gain access to our systems. One of the challenges here is how to simplify model deployment to this jailed environment and enable easy experimentation with different model variants.
  • #15: Our ETL pipeline makes it easy to generate training data and signals from our data sources. We maintain interfaces for Spark jobs to import data from our data lake and from Antenna. Periodic Spark jobs, orchestrated by Airflow, generate the signals and training data. ML developers can then access the output signals and training data for training.
  • #16: We provide a more specialized pipeline that helps automate the generation of labeled datasets from live traffic, which we call the predict logger. The predict logger defines an API with a set of predict events that capture the life cycle of online predictions, like: requested, predicted, viewed, acted on. These events are logged from different services at different times, and the predict logger merges them in a consistent way that helps developers avoid incidental complexity. The resulting labeled dataset is ready for use in training, with all the signals and context as seen at serving time, which helps us avoid discrepancies between online and offline data.
  • #18: Before full-scale training, developers first need to prototype. For prototyping they need access to training and signal data from the ETL pipeline, which they use for exploration and offline evaluation. We use workbenches in production that are integrated with a Spark cluster to give them access to all the offline data. After prototyping, they use dbxlearn for large-scale training and hyperparameter tuning.
  • #19: Training jobs require lots of computing power. dbxlearn gives our developers an easy way to use computing at scale for training by letting them submit training jobs to remote clusters. It provides elasticity, making sure each job gets all the resources it needs when it needs them, and a standard way to train on different hardware configurations, so we can use specialized hardware like GPUs in training. We have built a hybrid cloud architecture that interfaces with our private cluster as well as public clouds. We are currently integrated with AWS and use SageMaker for training.
  • #21: Typical workflow with dbxlearn: use dbxlearn train --local to test the training script locally, then remove --local to test it in the cloud. If it runs well in the cloud, use dbxlearn tune to find the optimal hyperparameters automatically. Use dbxlearn query to check the status and results of all training jobs. If the results are good, use dbxlearn deploy-model to deploy the best model to the model store.
  • #23: We have a central service to host models in production, called the predict service. It loads models from the model store and provides a standard API for real-time inference. It supports multiple model inference partitions for resource isolation. The inference API can also act as a proxy for running inference on public cloud services.
  • #24: Helps reduce the boilerplate of running live experiments. A simple config defines which signals to collect and which model to run. The client sends requests with a target experiment variant; the Suggest backend runs the signal collector defined in the config and then runs the configured model. Standard logging monitors experiment results, and a signal collector abstraction lets developers customize collection.
  • #27: The example is a campaign ranker implemented as a contextual multi-armed bandit problem. At Dropbox we have a campaign framework that makes it easy to define campaigns that can be displayed on various UI surfaces to various populations. In this case one of our web pages is displaying a campaign for Dropbox Business. For each impression there are many competing campaigns, which can be modeled as a multi-armed bandit problem where we need to choose which arm to play, and we use UI features and user features as context for this decision.
  • #28: Impressions are logged by UI surfaces and by the predict service from the backend, using the predict logger. Our ETL pipeline periodically generates a batch of new training data from these logs. A training job then updates the policies in the contextual bandit model and stores the model in the model store. The predict service loads the new model and uses it for future decisions, and this cycle repeats indefinitely. This demonstrates using our ETL pipeline to automate the full workflow of labeled data generation, training, and model deployment.
  • #29: We have built an end-to-end platform that supports all steps in the ML development workflow. We provide deep integration with Dropbox's large-scale data sources to make this data accessible for offline training and online inference. We built our components with flexible APIs to support a wide variety of use cases. And we chose a hybrid cloud architecture for easier elasticity and early adoption of new technologies.
  • #30: Currently the exabytes of data live in multiple systems, but there are relationships across these systems that would be useful to know. Our challenge is how to represent the interactions among data across systems so ML teams can use them as features. Today our tools are used mainly by ML developers with high ML expertise; we would like to make them accessible to more engineers through better and simpler interfaces.