RESPONSE PREDICTION FOR
DISPLAY ADVERTISING
WSDM ’14

Olivier Chapelle
OUTLINE
1. Criteo & Display advertising
2. Modeling
3. Large scale learning
4. Explore / exploit
DISPLAY ADVERTISING

[Figure: example of a display ad on a publisher page.]
• Rapidly growing multi-billion dollar business (30% of internet
advertising revenue in 2013).
• Marketplace between:
– Publishers: sell display opportunities
– Advertisers: pay for showing their ad
• Real Time Bidding: an auction amongst advertisers is held at the moment
a user generates a display opportunity by visiting a publisher's web page.
BRANDING VS PERFORMANCE

PRICING TYPE
• CPM (Cost Per Mille): advertiser pays per thousand impressions
• CPC (Cost Per Click): advertiser pays only when the user clicks
• CPA (Cost Per Action): advertiser pays only when the user performs a
predefined action such as a purchase.

CAMPAIGN TYPE
• Branding → CPM
• Performance based advertising (retargeting) → CPC, CPA

CONVERSIONS
To compare bids across pricing types, they are converted to an expected
cost per mille (eCPM):
eCPM = CPC * predicted clickthrough rate
eCPM = CPA * predicted conversion rate
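The eCPM conversion above can be sketched as follows; the function names and the explicit per-mille scaling are illustrative (the slide writes the formulas without the factor of 1000), and the CPA conversion rate is taken per impression:

```python
def ecpm_cpc(cpc_bid, predicted_ctr):
    # CPC campaign: expected revenue per thousand impressions
    return 1000.0 * cpc_bid * predicted_ctr

def ecpm_cpa(cpa_bid, predicted_cvr):
    # CPA campaign: predicted_cvr is the conversion rate per impression
    return 1000.0 * cpa_bid * predicted_cvr

# A $0.50 CPC bid with a 0.2% predicted CTR is worth $1 per mille:
print(ecpm_cpc(0.50, 0.002))  # 1.0
```

Ranking all campaigns by eCPM is what lets a CPC bid compete against a CPA bid in the same auction.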
CRITEO
• Criteo bills advertisers on CPC and pays publishers on CPM.
• Criteo's success depends highly on how precisely we can predict
Click Through Rates (CTR) and Conversion Rates (CR).
RECOMMENDATION
• Collaborative filtering: propose related items to a user based on
historical interactions from other users.
• Over 50% of Criteo-driven sales come from recommended products the user
had never viewed on advertiser websites.
CRITEO NUMBERS
• 750 employees in 42 countries
• $600 million revenue in 2013
• 2.1 billion banners per day
• 18 billion RT bids per day
• 250K ad calls per second
• 10 PB of data on a 7500-core cluster
FEATURES
• Three sources of features: user, ad, page.
• In this talk: categorical features on the ad and page.
• Publisher hierarchy: publisher network → publisher → site → url.
• Advertiser hierarchy: advertiser network → advertiser → campaign → ad.
HASHING TRICK
• Standard representation of categorical features: "one-hot" encoding.
For instance, the site feature cnn.com maps to a vector with a single 1
(e.g. 0 0 1 0 0 0 0), while news.yahoo.com sets a different component.
• Dimensionality is equal to the number of different values
– can be very large.
• Hashing to reduce dimensionality (made popular by John Langford in VW):
each value is hashed into a fixed-size weight vector.
• Dimensionality is now independent of the number of values.
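A minimal sketch of the hashing trick, with illustrative choices: `crc32` stands in for the murmurhash that VW actually uses, and 2^20 is an arbitrary target dimensionality:

```python
import zlib

DIM = 2 ** 20  # fixed dimensionality, independent of the number of values

def hashed_index(feature, value, dim=DIM):
    # hash the string "feature=value" into one of `dim` buckets
    return zlib.crc32(f"{feature}={value}".encode()) % dim

# sparse one-hot vector: one non-zero component per categorical feature
x = {hashed_index("site", "cnn.com"): 1.0}
```

No dictionary from value to index is ever stored; the hash function *is* the dictionary.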
HASHING VS FEATURE SELECTION
• “Small” problem with 35M different values.
• Methods that require a dictionary have a larger model.

QUADRATIC FEATURES
• Outer product between two features.
• Example: between site and advertiser, the feature
site=finance.yahoo.com & advertiser=bank of america
is 1 only when both values co-occur.
• Similar to a polynomial kernel of degree 2.
• Large number of values → hashing trick.
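Crossing two categorical values and then hashing the pair can be sketched as below; the separator and the 2^22 dimensionality are illustrative assumptions (quadratic features need a larger space than single features):

```python
import zlib

DIM = 2 ** 22  # larger space: quadratic features have many more values

def hashed_index(feature, value, dim=DIM):
    return zlib.crc32(f"{feature}={value}".encode()) % dim

def quadratic_index(f1, v1, f2, v2, dim=DIM):
    # outer-product feature: one bucket per pair of values
    return hashed_index(f1 + "^" + f2, v1 + "^" + v2, dim)

x = {
    hashed_index("site", "finance.yahoo.com"): 1.0,
    hashed_index("advertiser", "bank of america"): 1.0,
    # the cross feature fires only when both values co-occur:
    quadratic_index("site", "finance.yahoo.com",
                    "advertiser", "bank of america"): 1.0,
}
```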
ADVANTAGES OF HASHING
• Practical
– Straightforward to implement; no need to maintain dictionaries
• Statistical
– Regularization (infrequent values are washed away by frequent ones)
• Most powerful when combined with quadratic features

Quote of John Langford about hashing:
"At first it's scary, then you love it"
LEARNING
• Regularized logistic regression
– Vowpal Wabbit open source package
• Regularization with hierarchical features: the weight of a rare value,
which would otherwise be poorly estimated, is shrunk toward the
well-estimated weight of its parent in the hierarchy (backoff smoothing).
• Negative data subsampled for computational reasons.
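The slides do not spell out the regularizer, so the following is only a plausible sketch of backoff smoothing, with hypothetical names: penalize the gap between a value's weight and its parent's weight rather than the weight itself:

```python
def hierarchical_penalty(w, parent, lam=1.0):
    # w: weight per feature value; parent: maps a value to its parent in
    # the hierarchy (e.g. url -> site -> publisher).
    # A rare value, whose own weight is poorly estimated, is shrunk toward
    # the well-estimated weight of its parent (backoff smoothing).
    return lam * sum((w.get(f, 0.0) - w.get(p, 0.0)) ** 2
                     for f, p in parent.items())

w = {"site=finance.yahoo.com": 0.4, "publisher=yahoo": 0.5}
parent = {"site=finance.yahoo.com": "publisher=yahoo"}
pen = hierarchical_penalty(w, parent)  # ~0.01
```

With this penalty, a value seen only a handful of times stays close to its parent's estimate instead of drifting toward zero.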
EVALUATION
• Comparison with (Agarwal et al. '10)
– Probabilistic model for the same display advertising prediction problem
– Leverages the hierarchical structures on the ad and publisher sides
– Sparse prior for smoothing
• Model trained on three weeks of data, tested on the 3 following days.
• Improvement over (Agarwal et al. '10):
auROC: +3.1%   auPRC: +10.0%   Log likelihood: +7.1%

D. Agarwal et al., Estimating Rates of Rare Events with Multiple Hierarchies through Scalable Log-linear Models, KDD 2010
BAYESIAN LOGISTIC REGRESSION
• Regularized logistic regression = MAP solution
(Gaussian prior, logistic likelihood)
• Posterior is not Gaussian.
• Diagonal Laplace approximation: P(w | D) ≈ N(m, diag(q)⁻¹),
with m the mode of the posterior (the MAP solution),
and q_i the curvature of the negative log posterior at m:
q_i = q0_i + Σ_j x_ij² p_j (1 − p_j),  where p_j = σ(mᵀx_j).
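The diagonal Laplace update can be sketched as follows, assuming sparse examples given as (index, value) pairs; note the logistic curvature does not depend on the labels:

```python
import math

def diagonal_laplace(m, q, X):
    # m: posterior mode (MAP weights); q: per-coordinate precisions.
    # Add the curvature of the negative log likelihood at the mode:
    # q_i += sum_j x_ij^2 * p_j * (1 - p_j)
    for x in X:
        s = sum(m[i] * v for i, v in x)   # sparse dot product
        p = 1.0 / (1.0 + math.exp(-s))    # predicted probability
        for i, v in x:
            q[i] += v * v * p * (1.0 - p)
    return m, q

m = [0.0, 0.0]
q = [1.0, 1.0]                   # N(0, 1) prior precision on each weight
X = [[(0, 1.0)], [(1, 1.0)]]     # two sparse one-hot examples
m, q = diagonal_laplace(m, q, X)
# at m = 0 every p is 0.5, so each touched precision grows by 0.25
```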
MODEL UPDATE
• Needed because ads / campaigns keep changing.
• The posterior distribution of a previously trained model can be used as
the prior for training a new model with a new batch of data:
M0 is trained on days 1–3, then updated with day 4 to give M1, with
day 5 to give M2, and so on.
• Influence of the update frequency (auPRC):
1 day: +3.7%   6 hours: +5.1%   2 hours: +5.8%
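Reusing the posterior as the prior amounts to optimizing, on each new batch, the log loss plus a Gaussian penalty centered at the previous mode m with per-coordinate precisions q; a sketch with hypothetical names:

```python
import math

def objective(w, X, y, m, q):
    # log loss on the new batch, plus the previous posterior
    # N(m, diag(1/q)) reused as a Gaussian prior
    loss = 0.0
    for x, yi in zip(X, y):
        s = sum(w[i] * v for i, v in x)
        loss += math.log1p(math.exp(-yi * s))
    prior = 0.5 * sum(qi * (wi - mi) ** 2 for wi, mi, qi in zip(w, m, q))
    return loss + prior

# at w = m the prior term vanishes and only the new-batch loss remains
val = objective([0.0, 0.0], [[(0, 1.0)]], [1], [0.0, 0.0], [1.0, 1.0])
```

Coordinates with high precision (seen often before) are held near their old values; rarely seen coordinates stay free to move.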
PARALLEL LEARNING
• Large training set
– 2B training samples; 16M parameters
– 400GB (compressed)
• Proposed method: less than one hour with 500 machines.
• Optimize the L2-regularized logistic loss:
min_w Σ_i log(1 + exp(−y_i wᵀx_i)) + (λ/2) ‖w‖²
• SGD is fast on a single machine, but difficult to parallelize.
• Batch (quasi-Newton) methods are straightforward to parallelize
– L-BFGS with distributed gradient computation.
ALLREDUCE
• Aggregate and broadcast across nodes.
[Tree diagram: each node holds a partial value; the values are summed up
a spanning tree (37 in total in the example) and the sum is broadcast
back down, so every node ends up holding 37.]
• Very few modifications to existing code: just insert a few AllReduce
operations.
• Compatible with Hadoop / MapReduce
– Build a spanning tree on the gateway
– Single MapReduce job
– Leverage speculative execution to alleviate the slow node issue
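The semantics of AllReduce are simple to state in code; the sketch below abstracts away the spanning-tree communication and keeps only the contract: every node contributes a partial value (e.g. a partial gradient) and receives the global sum:

```python
def allreduce_sum(node_values):
    # every node contributes its partial value and receives the global sum
    # (simulated here; in practice the sum travels up a spanning tree
    # and the result is broadcast back down)
    total = sum(node_values)
    return [total] * len(node_values)

# 7 nodes each hold a partial gradient component; after AllReduce all of
# them hold the same global value, as in the slide's tree example:
print(allreduce_sum([9, 1, 7, 8, 5, 3, 4]))  # [37, 37, 37, 37, 37, 37, 37]
```

With this primitive, distributed L-BFGS is just the single-machine code with an AllReduce inserted after the local gradient computation.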
ONLINE INITIALIZATION
• Hybrid approach:
– One pass of online learning on each node
– Average the weights from each node to get a warm start for the batch
optimization
• Best of both (online / batch) worlds.
[Plots: convergence on splice site prediction (Sonnenburg et al. '10)
and on display advertising.]

S. Sonnenburg and V. Franc, COFFIN: A Computational Framework for Linear SVMs, ICML 2010
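The warm start is just a per-coordinate average of the per-node weight vectors; a minimal sketch:

```python
def warm_start(node_weights):
    # average the weight vectors obtained from one online pass per node;
    # the result initializes the distributed L-BFGS optimization
    n = len(node_weights)
    return [sum(ws) / n for ws in zip(*node_weights)]

w0 = warm_start([[0.5, 1.0], [0.25, 0.0]])
print(w0)  # [0.375, 0.5]
```

One cheap online pass gets most of the accuracy; the batch phase then converges in few L-BFGS iterations from this starting point.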
THOMPSON SAMPLING
• Heuristic to address the Explore / Exploit problem, dating back to
Thompson (1933).
• Simple to implement.
• Good performance in practice (Graepel et al. '10, Chapelle and Li '11).
• Rarely used, maybe because of the lack of theoretical guarantees.
• The loop: draw a model parameter θt according to P(θ | D); select the
best action according to θt; observe the reward and update the model.

T. Graepel et al., Web-scale Bayesian Click-Through Rate Prediction for Sponsored Search Advertising in Microsoft's Bing Search Engine, ICML 2010
O. Chapelle and L. Li, An Empirical Evaluation of Thompson Sampling, NIPS 2011
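One round of the loop can be sketched as below, assuming a diagonal Gaussian posterior N(m, diag(1/q)) over the logistic-regression weights and hypothetical ad names:

```python
import math
import random

def thompson_select(actions, feats, m, q, rng):
    # 1) draw one posterior sample of the weights: w_i ~ N(m_i, 1/q_i)
    w = [rng.gauss(mi, 1.0 / math.sqrt(qi)) for mi, qi in zip(m, q)]
    # 2) act greedily with respect to the sampled weights
    def ctr(a):
        s = sum(w[i] * v for i, v in feats[a])
        return 1.0 / (1.0 + math.exp(-s))
    return max(actions, key=ctr)

rng = random.Random(0)
feats = {"ad1": [(0, 1.0)], "ad2": [(1, 1.0)]}  # hypothetical sparse ads
choice = thompson_select(["ad1", "ad2"], feats, [0.0, 0.0], [1.0, 1.0], rng)
```

Exploration falls out for free: uncertain weights (low precision q_i) produce noisy samples, so uncertain ads occasionally win the argmax.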
E/E SIMULATIONS
• MAB with K arms. The best arm has mean reward 0.5; the others have 0.5 − ε.
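Such a simulation fits in a few lines with a Beta-Bernoulli model; the values of K, ε, and the horizon below are illustrative choices, not the ones used in the talk:

```python
import random

def thompson_bandit(K=5, eps=0.2, T=10_000, seed=0):
    # best arm has mean reward 0.5, the K-1 others 0.5 - eps
    rng = random.Random(seed)
    means = [0.5] + [0.5 - eps] * (K - 1)
    wins = [1] * K                  # Beta(1, 1) prior on each arm
    losses = [1] * K
    best_pulls = 0
    for _ in range(T):
        # sample a mean estimate per arm from its Beta posterior, play argmax
        arm = max(range(K), key=lambda k: rng.betavariate(wins[k], losses[k]))
        reward = rng.random() < means[arm]
        wins[arm] += reward
        losses[arm] += 1 - reward
        best_pulls += (arm == 0)
    return best_pulls / T

share = thompson_bandit()  # fraction of pulls spent on the best arm
```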
EVALUATION
• Semi-simulated environment: real input features, but labels generated
from a random "ground truth" weight vector; at each round the model
selects among the candidate ads and is updated on the generated labels.
• Set of eligible ads varies from 1 to 5,910. Total ads = 66,373.
• Comparison of E/E algorithms:
– 4 days of data
– Cold start
• Algorithms:
– UCB: mean + std. dev.
– ε-greedy
– Thompson sampling
RESULTS
• CTR regret (in percentage):
Thompson: 3.72   UCB: 4.14   ε-greedy: 4.98   Exploit-only: 5.00   Random: 31.95
• Regret over time: [plot]
OPEN QUESTIONS
• Hashing
– Theoretical performance guarantees
• Low rank matrix factorization
– Better prediction on unseen (publisher, advertiser) pairs
• Sample selection bias
– The system is trained only on selected ads, but all ads are scored.
– Possible solution: inverse propensity scoring
– But we still need to bias the training data toward good ads.
• Explore / exploit
– Evaluation framework
– Regret analysis of Thompson sampling
– E/E with a budget; with multiple slots; with delayed feedback
CONCLUSION
• Simple yet efficient techniques for click prediction.
• Main difficulty in applied machine learning: avoiding the bias (induced
by academic papers) toward complex systems.
– It's easy to get lured into building a complex system
– It's difficult to keep it simple

See the paper for more details:
Simple and Scalable Response Prediction for Display Advertising
O. Chapelle, E. Manavoglu, R. Rosales, 2014

