Photo by mikebaird, www.flickr.com/photos/mikebaird
Lessons from ML Competitions
Ben Hamner
ben.hamner@kaggle.com
November 13, 2015
Kaggle runs machine learning competitions
We release challenging machine learning problems to our community of 410,000 data scientists
[Chart: submissions per month, Sep 2010 through Sep 2015; y-axis from 0 to 160,000]
Our community makes 100k submissions per month on these competitions
Examples of Machine Learning Competitions
Automatically grading student-written essays
197 entrants · 155 teams · 2,499 submissions over 80 days
$100,000 in prizes · 21,000+ essays · Human-level performance
www.kaggle.com/c/asap-aes
Predicting a compound's toxicity from its molecular structure
796 entrants · 703 teams · 8,841 submissions over 91 days
$20,000 in prizes · 25.6% improvement over the previous accuracy benchmark
www.kaggle.com/c/BioResponse
Personalizing web search results
261 entrants · 194 teams · 3,570 submissions over 91 days
$9,000 in prizes · 167,000,000+ logs
www.kaggle.com/c/yandex-personalized-web-search-challenge
Detecting diabetic retinopathy
854 entrants · 661 teams · 6,999 submissions over 160 days
$100,000 in prizes · 88,000+ retina images
85% agreement with a human rater (quadratic weighted kappa)
www.kaggle.com/c/diabetic-retinopathy-detection
How do machine learning competitions work?
We take a dataset with a target variable – something we’re trying to predict
Predicting the sale price of a home:

SalePrice  SquareFeet  Type  LotAcres  Beds  Baths
$88k       719         HOME  1.64      1     1
$164k      2017        APT   -         3     2
$72k       697         APT   -         1     1
$85k       948         HOME  1.02      2     3
$271k      3375        APT   -         3     4
$482k      3968        APT   -         4     4
$88k       790         APT   -         1     2
$128k      1341        HOME  0.66      3     3
$235k      2379        APT   -         3     3
$309k      2495        HOME  0.21      3     4
$163k      1356        APT   -         1     1
$375k      3361        HOME  1.64      3     4
$98k       1060        HOME  0.05      1     1
$50k       582         HOME  0.61      1     1
$145k      1640        APT   -         2     3
$394k      3546        HOME  0.4       4     4
$82k       903         APT   -         2     2
$105k      1096        HOME  0.04      3     4
$129k      1280        HOME  0.15      2     2
$106k      1139        APT   -         1     1
Split the data into two sets, a training set and a test set. In this example the first 13 rows of the table above form the training set and the last 7 rows form the test set; the test-set sale prices are set aside as the solution, the "ground truth".
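A minimal sketch of this split in Python (scikit-learn is shown for illustration; the toy values echo a few rows of the table above, and all variable names are made up):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy subset of the home-sale table above (three columns for brevity).
homes = pd.DataFrame({
    "SalePrice":  [88, 164, 72, 85, 271, 482, 88, 128, 235, 309],  # in $k
    "SquareFeet": [719, 2017, 697, 948, 3375, 3968, 790, 1341, 2379, 2495],
    "Type":       ["HOME", "APT", "APT", "HOME", "APT", "APT",
                   "APT", "HOME", "APT", "HOME"],
})

# Hold out 30% of the rows as the test set.
train, test = train_test_split(homes, test_size=0.3, random_state=0)

solution = test["SalePrice"]           # withheld "ground truth"
test = test.drop(columns="SalePrice")  # what participants receive
```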
Our community gets everything but the solution on the test set: the full training set, plus the test rows with SalePrice masked:

SalePrice  SquareFeet  Type  LotAcres  Beds  Baths
???        582         HOME  0.61      1     1
???        1640        APT   -         2     3
???        3546        HOME  0.4       4     4
???        903         APT   -         2     2
???        1096        HOME  0.04      3     4
???        1280        HOME  0.15      2     2
???        1139        APT   -         1     1
Competition participants use the training set to learn the relation between the data and the target
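As a hedged illustration of this step (not any participant's actual code), one could fit a random forest to the toy training set from the sketch above:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Learn SalePrice from the other columns; one-hot encode the
# categorical Type column so the model can consume it.
X_train = pd.get_dummies(train.drop(columns="SalePrice"))
y_train = train["SalePrice"]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
```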
Competition participants apply their models to make predictions on the test set and submit them:

Submission (Predicted SalePrice): $41k, $165k, $380k, $76k, $128k, $115k, $94k
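Continuing the same sketch, predictions on the masked test rows become the submission file (the Id column and CSV format here are illustrative):

```python
# Align the test features with the training columns, predict, and write
# the submission that gets uploaded to Kaggle.
X_test = pd.get_dummies(test).reindex(columns=X_train.columns, fill_value=0)
submission = pd.DataFrame({"Id": test.index,
                           "SalePrice": model.predict(X_test)})
submission.to_csv("submission.csv", index=False)
```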
Kaggle compares the submission to the ground truth:

Actual  Predicted  Delta
$50k    $41k       -$9k
$145k   $165k      +$20k
$394k   $380k      -$14k
$82k    $76k       -$6k
$105k   $128k      +$23k
$129k   $115k      -$14k
$106k   $94k       -$12k
Kaggle calculates two scores, one for the public leaderboard and one for the private leaderboard, by evaluating the deltas on two disjoint portions of the test set:

Mean error
Public Leaderboard: $14k
Private Leaderboard: $15k
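A sketch of the scoring step, assuming the metric is mean absolute error; the public/private assignment of test rows below is made up, so the resulting scores will not exactly reproduce the $14k/$15k above:

```python
import numpy as np

actual    = np.array([50, 145, 394, 82, 105, 129, 106])  # $k, withheld truth
predicted = np.array([41, 165, 380, 76, 128, 115, 94])   # $k, submission

# Hypothetical split of the test set between the two leaderboards.
is_public = np.array([True, False, True, False, True, False, True])

public_score  = np.abs(predicted[is_public]  - actual[is_public]).mean()
private_score = np.abs(predicted[~is_public] - actual[~is_public]).mean()
print(public_score, private_score)
```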
The participant immediately sees their public score on the public leaderboard
Participants explore the problem and iterate on their models to improve them
At the end, the participant with the best score on the private leaderboard wins
Competition leaderboards
The leaderboard is a powerful mechanism to drive competition
The leaderboard is objective and meritocratic
The leaderboard encourages leapfrogging
The leaderboard encourages iterative improvements over many submissions
This causes the competition to approach the frontier of what's possible given the data
Many competitions quickly approach a frontier; the most challenging ones take longer
Some applied ML research looks like competitions running over years instead of months
www.kaggle.com/c/BioResponse/leaderboard · yann.lecun.com/exdb/mnist/
One long-running research competition is ImageNet (not hosted on Kaggle)
www.image-net.org
We see a similar progression in ImageNet performance over time as we do in Kaggle competitions
Can we do better than competition results?
Looking holistically across all the competitions
At Kaggle, we've run hundreds of public machine learning competitions
And over 600 in-class competitions for university students
These competitions have generated over 2,000,000 submissions from around the world
Most of the competitions we've run have involved supervised classification or regression
Doing well in competitions
Set up your environment to enable rapid iteration and experimentation across the whole loop: Data → Preprocessing → Identify & Handle Data Oddities → Extract and Select Features → Train Models → Evaluate and Visualize Results
As an example, here’s a dashboard one user created to evaluate Diabetic Retinopathy models
jeffreydf.github.io/diabetic-retinopathy-detection/
Successful users invest time, thought, and creativity in problem structure and feature extraction
Random Forests and GBMs work very well for many common classification and regression tasks (Verikas et al. 2011)
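As a hedged baseline recipe (synthetic data, near-default parameters), both model families are a few lines in scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a tabular competition dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            GradientBoostingClassifier(random_state=0)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```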
Deep learning has been very effective in computer vision competitions we’ve hosted
caffe, theano, torch7, and keras are four popular open source libraries that facilitate this
XGBoost and Keras: two ML libraries with great power-to-effort ratios

Competition            Type            Winning ML Algorithm
Liberty Mutual         Regression      XGBoost
Caterpillar Tubes      Regression      Keras + XGBoost + Reg. Forest
Diabetic Retinopathy   Image           SparseConvNet + RF
Avito                  CTR             XGBoost
Taxi Trajectory 2      Geostats        Classic neural net
Grasp and Lift         EEG             Keras + XGBoost + other CNN
Otto Group             Classification  Stacked ensemble of 35 models
Facebook IV            Classification  sklearn GBM
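A minimal XGBoost sketch via its scikit-learn wrapper; the hyperparameters are illustrative, not competition-tuned:

```python
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Synthetic regression task standing in for competition data.
X, y = make_regression(n_samples=1000, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
model.fit(X_tr, y_tr, eval_set=[(X_va, y_va)], verbose=False)
print(model.score(X_va, y_va))  # R^2 on the held-out split
```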
The Boruta feature selection algorithm is robust and reliable
• Wrapper method around Random Forest and its calculated variable importance
• Iteratively trains RFs and runs statistical tests to identify features as important or not important
• Widely used in competition-winning models to select a small subset of features for use in training more complex models
• library(Boruta) in R; a minimal sketch of the underlying shadow-feature idea follows below
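For intuition, a hand-rolled sketch of that shadow-feature idea in Python; the real Boruta package layers iteration and statistical tests on top of this single pass:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Shadow features: column-wise shuffled copies of the real features.
# A real feature is only interesting if it out-scores the best shadow.
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
rng = np.random.default_rng(0)
shadows = rng.permuted(X, axis=0)  # shuffle each column independently

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(np.hstack([X, shadows]), y)

real_imp   = rf.feature_importances_[:X.shape[1]]
shadow_max = rf.feature_importances_[X.shape[1]:].max()
print("kept features:", np.where(real_imp > shadow_max)[0])
```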
Model ensembling usually results in marginal but significant performance gains
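A small sketch of the simplest form, averaging out-of-fold predictions from a few diverse models (synthetic data; the blend's gain is typically small but consistent):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=1000, n_features=20, noise=10, random_state=0)

models = [Ridge(),
          RandomForestRegressor(random_state=0),
          GradientBoostingRegressor(random_state=0)]
preds = [cross_val_predict(m, X, y, cv=5) for m in models]
blend = np.mean(preds, axis=0)  # the ensemble: a plain average

for name, p in zip(["ridge", "rf", "gbm", "blend"], preds + [blend]):
    print(name, np.sqrt(np.mean((p - y) ** 2)))  # RMSE, lower is better
```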
Data leakage is our (and our users') #1 challenge
www.navy.mil/view_image.asp?id=12495
We’ve also seen some things that competitions aren’t effective at
Competitions don't typically yield simple and theoretically elegant solutions
*exception – Factorization Machines in KDD Cup 2012
Competitions don't typically yield production code
ora-00001.blogspot.ru/2011/07/mythbusters-stored-procedures-edition.html
Competitions don't always yield computationally efficient solutions
• They reward performance without computational and complexity constraints
www.linustechtips.com/main/topic/193045-need-help-underclocking-d/
Competitions tend to be highly effective at
Optimizing a quantifiable evaluation metric by exploring an enormously broad range of approaches
Fairly and consistently evaluating a variety of approaches on the same problem
• Implementation details matter, which can make it tough to reproduce results in other settings where the data and/or code are not open source
• "A quick, simple way to apply machine learning successfully? In your domain, find the stupid baseline that new methods consistently claim to beat. Implement that stupid baseline." (A minimal example of such a baseline follows below.)
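For instance, a regression baseline that predicts the training median for every row (the numbers echo the toy home-price example and are purely illustrative):

```python
import numpy as np

# "Stupid baseline": one constant prediction, scored before anything fancy.
y_train = np.array([88, 164, 72, 85, 271, 482, 88, 128, 235, 309])  # $k
baseline = np.median(y_train)

y_test = np.array([50, 145, 394, 82, 105, 129, 106])                # $k
print("baseline MAE:", np.abs(y_test - baseline).mean())
```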
Identifying data quality and leakage issues
• Check that the ID column isn't informative (a sketch of this probe follows below)
• Time series are tricky
• Essay example: "This essay got good marks, but as far as I can tell, it's gibberish." Human Scores: 5/5, 4/5
"Deemed 'one of the top ten data mining mistakes', leakage is essentially the introduction of information about the data mining target, which should not be legitimately available to mine from."
- "Leakage in Data Mining: Formulation, Detection, and Avoidance", S. Kaufman et al.
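A sketch of the ID-column probe from the first bullet: simulate a leaky ID assignment, then check whether the target is predictable from the ID alone (all data here is synthetic):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

# Simulate leakage: rows were sorted by class before IDs were assigned.
order = np.argsort(y)
ids = np.empty_like(order)
ids[order] = np.arange(len(y))

score = cross_val_score(RandomForestClassifier(random_state=0),
                        ids.reshape(-1, 1), y, cv=5).mean()
print("accuracy from ID alone:", score)  # far above 50% signals leakage
```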
Exposing a specific domain problem to many new communities around the world
Where Kaggle’s going
Kaggle's mission is to help the world learn from data
data-arts.appspot.com/globe/
We're building a public platform for collaborating on data and analytics results, connecting people, code, and data
An early alpha version of this is released as Kaggle Scripts
It enables users to immediately access R/Python/Julia environments with data preloaded
Everything created on Kaggle Scripts is published as soon as it's run
www.kaggle.com/scripts
Reproducing and building on another's work is simply a click away
We're starting to enable users to do this on non-competition datasets
Soon, any user will be able to publish data through Kaggle for analysis
Thank you!
Head to www.kaggle.com/scripts to check out code, visualizations, and results from our community