A Multi-Armed Bandit Framework
for Recommendations
at Netflix
Jaya Kawale & Fernando Amat
PRS Workshop, June 2018
Quickly help members discover content they’ll love
Global Members, Personalized Tastes
125 Million Members
~200 Countries
98% Match
Spot the Algorithms!
Case Study I: Artwork Optimization
Goal: Recommend personalized artwork or imagery for a title to help members decide whether they will enjoy it.
Case Study II: Billboard Recommendation
Goal: Successfully introduce content
to the right members.
Traditional Approaches for
Recommendation
Collaborative Filtering
● The idea is to use the “wisdom of the
crowd” to recommend items
● Well understood and various
algorithms exist (e.g. Matrix
Factorization)
Collaborative Filtering
Binary user-item interaction matrix (rows = users, columns = items):
0 1 0 1 0
0 0 1 1 0
1 0 0 1 1
0 1 0 0 0
0 0 0 0 1
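As a sketch of how matrix factorization fills in such a matrix, here is a minimal NumPy example that factors the 5×5 interaction matrix above into rank-2 user and item factors via full-batch gradient descent (the latent rank, learning rate, and regularization are illustrative choices, not Netflix's):

```python
import numpy as np

# The binary user-item interaction matrix from the slide (1 = play).
R = np.array([
    [0, 1, 0, 1, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
], dtype=float)

rng = np.random.default_rng(0)
k, lr, reg = 2, 0.02, 0.01               # latent rank, step size, L2 penalty
U = rng.normal(scale=0.1, size=(5, k))   # user factors
V = rng.normal(scale=0.1, size=(5, k))   # item factors

# Full-batch gradient descent on squared error; production systems
# would instead weight unobserved entries or sample negatives.
for _ in range(3000):
    E = R - U @ V.T                      # residual matrix
    U += lr * (E @ V - reg * U)
    V += lr * (E.T @ U - reg * V)

print(np.round(U @ V.T, 2))              # rank-2 reconstruction of R
```

Unobserved entries get scores from the learned factors, which is what turns shared play patterns (the “wisdom of the crowd”) into recommendations.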
Challenges for Traditional Approaches
● Scarce feedback
● Dynamic catalog
● Country availability
● Non-stationary member base
● Time sensitivity
○ Content popularity changes
○ Member interests evolve
○ Respond quickly to member feedback
⇒ Continuous and fast learning is needed.
Multi-Armed Bandits
Increasingly successful in various practical settings where these challenges occur
Clinical trials, network routing, online advertising, AI for games, hyperparameter optimization
Multi-Armed Bandits
● A gambler playing multiple slot machines with
unknown reward distributions
● Which machine to play to maximize reward?
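A minimal simulation of the slot-machine problem, using Thompson Sampling (one of the strategies named later in this deck) with made-up Bernoulli payout rates:

```python
import random

random.seed(0)
true_rates = [0.05, 0.5, 0.1]     # hypothetical payout rates, unknown to the learner

# Thompson Sampling: keep a Beta(successes + 1, failures + 1) posterior per
# machine, sample once from each posterior, and play the largest sample.
alpha = [1.0] * 3
beta = [1.0] * 3
pulls = [0] * 3

for _ in range(5000):
    samples = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
    arm = samples.index(max(samples))
    reward = 1 if random.random() < true_rates[arm] else 0
    alpha[arm] += reward
    beta[arm] += 1 - reward
    pulls[arm] += 1

print(pulls)   # the best machine (index 1) receives the vast majority of pulls
```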
Multi-Armed Bandit For Recommendation
Exploration-exploitation tradeoff:
Recommend the optimal title given the evidence so far (i.e. exploit),
OR
Recommend other titles to gather feedback (i.e. explore).
Numerous Variants
● Different Strategies: ε-Greedy, Thompson Sampling (TS), Upper Confidence
Bound (UCB), etc.
● Different Environments:
○ Stochastic and stationary: Reward is generated i.i.d. from a distribution
specific to the action. No payoff drift.
○ Adversarial: No assumptions on how rewards are generated.
● Different objectives: Cumulative regret, tracking the best expert
● Continuous or discrete set of actions, finite vs infinite
● Extensions: Varying set of arms, Contextual Bandits, etc.
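For contrast with Thompson Sampling, here is a sketch of the UCB strategy from the list above (UCB1 on two hypothetical Bernoulli arms): play the arm maximizing its empirical mean plus an exploration bonus that shrinks as the arm is pulled more often.

```python
import math
import random

random.seed(1)
true_rates = [0.2, 0.6]          # hypothetical Bernoulli arms

counts = [0, 0]                  # pulls per arm
means = [0.0, 0.0]               # running mean reward per arm

def ucb1_pick(t):
    # Play each arm once, then pick argmax of mean + sqrt(2 ln t / n).
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    return max(range(len(counts)),
               key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 3001):
    arm = ucb1_pick(t)
    reward = 1 if random.random() < true_rates[arm] else 0
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]

print(counts)   # the better arm (index 1) dominates
```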
Case Study I: Artwork
Personalization
Bandit Algorithms Setting
For each (user, show) request:
● Actions: set of candidate images available
● Reward: minutes the user played from that impression
● Environment: Netflix homepage in user’s device
● Learner: aims to maximize the cumulative reward after N requests
[Diagram: the learner sends an action to the environment; the environment returns a reward and context.]
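The learner-environment loop above can be sketched as an ε-greedy learner choosing among candidate images, with reward measured in minutes played (the image names and reward distributions are invented for illustration):

```python
import random

random.seed(2)

# Hypothetical mean play-minutes per candidate image for one (user, show) pair.
mean_minutes = {"image_a": 2.0, "image_b": 5.0, "image_c": 3.0}

def environment(action):
    # Reward: noisy minutes played from that impression.
    return max(0.0, random.gauss(mean_minutes[action], 1.0))

epsilon = 0.1
totals = {a: 0.0 for a in mean_minutes}
counts = {a: 0 for a in mean_minutes}
cumulative = 0.0

for _ in range(2000):
    if random.random() < epsilon or not any(counts.values()):
        action = random.choice(list(mean_minutes))        # explore
    else:                                                 # exploit best mean so far
        action = max(counts, key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
    reward = environment(action)
    totals[action] += reward
    counts[action] += 1
    cumulative += reward

print(max(counts, key=counts.get))   # the learner converges on image_b
```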
Specific challenges
● Play attribution and reward assignment
○ Incremental effect of the image on top of the recommender system
● Only one image per title can be presented
○ Although inherently it is a ranking problem
Would you play because the movie is recommended or because of the artwork? Or both?
Specific challenges
● Change effect
○ Can changing images too often confuse users?
[Diagram: two image sequences, A and B, across Sessions 1 through N.]
Actions
● Personal (i.e. contextual)
● We have control over the set of actions
○ How many images per show
○ Image design
● What makes a good asset?
○ Representative (no clickbait)
○ Differential
○ Informative
○ Engaging
Intuition for Personalized Assets
● Emphasize themes through different artwork according to some
context (user, viewing history, country, etc.)
Preferences in genre
Intuition for Personalized Assets
● Emphasize themes through different artwork according to some
context (user, viewing history, country, etc.)
Preferences in cast members
Epsilon Greedy for MABs
● Explore (with probability ε)
○ Unbiased training data
○ Like an A/B test across actions
● Exploit (with probability 1 − ε)
○ Greedy: select the optimal action
Greedy Exploit Policy
● Learn a binary classifier per image to predict probability of play
● Pick the winner (arg max)
[Diagram: member (context) features are scored by Models 1–4, one per image in the pool; the arg max over predicted play probabilities selects the winner.]
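A toy version of this exploit step, with made-up logistic-regression weights standing in for the trained per-image play classifiers and a 3-dimensional member context vector:

```python
import numpy as np

# Hypothetical member context features (e.g. genre affinities).
x = np.array([0.9, 0.1, 0.4])

# One binary play/no-play "model" per candidate image; each is just a
# logistic-regression weight vector that we pretend was already trained.
models = {
    "image_1": np.array([ 2.0, -1.0, 0.5]),
    "image_2": np.array([-0.5,  1.5, 0.2]),
    "image_3": np.array([ 0.3,  0.3, 0.3]),
}

def prob_play(w, x):
    # Sigmoid of the linear score = predicted probability of play.
    return 1.0 / (1.0 + np.exp(-w @ x))

# Greedy exploit: score every image for this member, pick the arg max.
scores = {img: prob_play(w, x) for img, w in models.items()}
winner = max(scores, key=scores.get)
print(winner, round(scores[winner], 3))   # image_1 wins for this context
```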
Take Fraction Example: Luke Cage
[Diagram: three members (Users A, B, C) see the title's image; one plays ⇒ Take Fraction = 1 / 3.]
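The metric itself is just plays over impressions; a one-liner matching the example (one play out of three impressions):

```python
# Take Fraction = plays / impressions for a given image.
impressions = [("User A", True), ("User B", False), ("User C", False)]
plays = sum(1 for _, played in impressions if played)
take_fraction = plays / len(impressions)
print(take_fraction)   # 1/3, as in the Luke Cage example
```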
Offline metric: Replay [Li et al., 2010]
● Unbiased offline evaluation from explore data
[Diagram: for Users 1–6, images are randomly assigned; counting plays only where the model's assignment matches the random assignment gives an Offline Take Fraction of 2 / 3.]
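A sketch of the Replay estimator on invented explore logs: keep only impressions where the policy being evaluated would have chosen the image that was actually (randomly) shown, then compute the take fraction on that subset. The `model_choice` rule is a hypothetical stand-in for the learned assignment.

```python
# Hypothetical explore logs: the image shown was chosen uniformly at random.
logs = [
    {"user": 1, "shown": "img_a", "played": True},
    {"user": 2, "shown": "img_b", "played": False},
    {"user": 3, "shown": "img_a", "played": True},
    {"user": 4, "shown": "img_c", "played": False},
    {"user": 5, "shown": "img_b", "played": True},
    {"user": 6, "shown": "img_a", "played": False},
]

def model_choice(user):
    # Stand-in for the learned per-user image assignment.
    return "img_a" if user % 2 == 1 else "img_b"

# Replay [Li et al., 2010]: score only where the model agrees with the log.
matched = [e for e in logs if model_choice(e["user"]) == e["shown"]]
offline_take_fraction = sum(e["played"] for e in matched) / len(matched)
print(len(matched), offline_take_fraction)   # 3 matched impressions, 2/3
```

Because the logged assignments were uniformly random, this matched subset is an unbiased sample of what the policy would have served online.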
Offline Replay
● Context matters
● Artwork diversity matters
● Personalization wiggles
around most popular images
[Figure: lift in Replay for the various algorithms compared to the Random baseline.]
Online results
● Rollout to our >125M member base
● Most beneficial for less known titles
● Compression from title-level offline metrics due to cannibalization between titles
Case Study II:
Billboard
Recommendation
Considerations for the greedy policy
● Explore
○ Bandwidth allocation and cost of exploration
○ New vs existing titles
● Exploit
○ Model synchronisation
○ Title availability
○ Frequency of model update
○ Incremental updates vs batch training
■ Stationarity of title popularities
Greedy Exploit Policy
[Diagram: member features are scored by Models 1–4, one per title in the candidate pool; the title with the highest predicted probability of play wins.]
Would the member have played the title
anyway?
Netflix Promotions
Netflix homepage is an expensive real-estate (opportunity cost):
- so many titles to promote
- so few opportunities to win a “moment of truth”
[Chart: probability of play over days D1–D5, with a "Promote?" decision each day.]
Traditional (correlational) ML systems:
- take action if the probability of positive reward is high, irrespective of the reward base rate
- don't model the incremental effect of taking the action
Incrementality from Advertising
● Goal: Measure ad effectiveness.
● Incrementality: The difference
in the outcome because the ad
was shown; the causal effect of
the ad.
[Chart: under random assignment, the treatment group (shown the ad) generates $1.1M in revenue vs. $1.0M for the control group (shown other advertisers' ads), a $100k incremental effect.]*
*Johnson, Garrett A., Lewis, Randall A., and Nubbemeyer, Elmar I., "Ghost Ads: Improving the Economics of Measuring Online Ad Effectiveness" (January 12, 2017). Simon Business School Working Paper No. FR 15-21. Available at SSRN: https://ssrn.com/abstract=2620078
Incrementality Based Policy
● Goal: Select the title for promotion that benefits most from being
shown on the billboard
○ Member can play title from other sections on the homepage or search
○ Popular titles likely to appear on homepage anyway: Trending Now
○ Better utilize most expensive real-estate on the homepage!
● Define policy to be incremental with respect to probability of play
Incrementality Based Policy on Billboard
● Goal: Recommend title which has the largest additional benefit from
being presented on the Billboard
○ Recommend titles with the argmax of the incremental probability of play, i.e. p(play | billboard) − p(play | no billboard)
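A toy comparison of the greedy and incrementality-based picks, with invented per-title play probabilities with and without billboard promotion:

```python
# p(play | promoted on billboard) vs p(play | not promoted), per title.
# All numbers are hypothetical.
titles = {
    "title_a": (0.30, 0.10),   # little-known: the billboard helps a lot
    "title_b": (0.55, 0.50),   # popular: members would find it anyway
    "title_c": (0.20, 0.18),
}

# Greedy policy: argmax of raw probability of play.
greedy_pick = max(titles, key=lambda t: titles[t][0])

# Incrementality-based policy: argmax of the lift caused by promotion.
incremental_pick = max(titles, key=lambda t: titles[t][0] - titles[t][1])

print(greedy_pick, incremental_pick)   # title_b vs title_a
```

The greedy policy keeps picking the already-popular title; the incrementality-based policy spends the billboard on the title whose play probability it actually moves.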
Which titles benefit from Billboard?
Title A benefits much more
than Title C by being shown
on the Billboard
[Figure: scatter plot of incremental vs. baseline probability of play for various members.]
Offline & Online Results
● Incrementality based policy
sacrifices replay by selecting a
lesser known title that would
benefit from being shown on the
Billboard.
● Our implementation of
incrementality is able to shift
engagement within the candidate
pool.
[Figure: lift in Replay for the various algorithms compared to a random baseline.]
Research
Directions
Action selection orchestration
● Neighboring image selection influences result
● Title-level optimization is not enough
[Example: stand-up comedy. Row A uses diverse images; Row B becomes "the microphone row", with the same microphone motif on every title.]
Automatic image selection
● Generating new artwork is costly and time consuming
● Develop algorithm to predict asset quality from raw image
[Diagram: raw image → box-art.]
Long-term Reward: Road to RL
● Maximize long-term reward: reinforcement learning
○ Optimize for users' long-term joy rather than play clicks or duration
Thank you.
Jaya Kawale (jkawale@netflix.com)
Fernando Amat (famat@netflix.com)