Managing Machine Learning
David Murgatroyd - VP, Engineering
@dmurga
Problem: I can’t fully specify the behavior I want.
Solution: Machine Learning
Where does machine learning fit in the technology universe?
Valuable
"... a star of the Data Science orchestra." - John Mount, Win-Vector

Central
"... the new algorithms ... at the heart of most of what computer science does." - Hal Daumé III, U. Maryland Professor

Last Resort
"... for cases when the desired behavior cannot be effectively expressed in software logic without dependency on external data." - D. Sculley et al., Google
Where does machine learning fit in developing technology?
[Board: Stuff to do | Stuff to do now | Demonstrable Value]
Demonstrable Value

How does machine learning affect value demonstration?
● Distill business goal into a repeatable, balanced metric.
● Measure on the most representative data you can get.
● Distinguish intrinsic errors from implementation bugs.
● Let your customer override the model when they absolutely must get some answer.

Distill business goal into a repeatable, balanced metric.
Business goals in our example:
● fewer incorrect candidates sent to analysts for review
● no increased volume of work for analysts
● confidence to help analysts prioritize

Example metric: area under an error trade-off curve based on confidence, constrained to a maximum volume. Sometimes called an 'overall evaluation criterion' (OEC).

Note that the more skewed the data behind the OEC (e.g., if the number of positives varies by day and season), the more samples are required to establish statistical significance.
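As an illustration, here is a minimal sketch of such a metric in Python. The function name `oec`, the `max_volume` cap, and the data are all invented for the example; the deck does not prescribe an implementation.

```python
# Hypothetical OEC sketch: area under a precision-vs-volume trade-off
# curve, restricted to operating points that keep the fraction of
# candidates sent to analysts under a cap.
import numpy as np

def oec(scores, labels, max_volume=0.2):
    """Area under the precision/volume curve where review volume <= max_volume."""
    order = np.argsort(-scores)              # most confident candidates first
    labels = np.asarray(labels)[order]
    n = len(labels)
    volume = np.arange(1, n + 1) / n         # fraction sent for analyst review
    precision = np.cumsum(labels) / np.arange(1, n + 1)
    mask = volume <= max_volume              # enforce the volume constraint
    return np.trapz(precision[mask], volume[mask])

# Toy example: weakly informative confidences on a skewed label distribution.
rng = np.random.default_rng(0)
labels = rng.random(1000) < 0.1              # ~10% true positives
scores = labels * 0.5 + rng.random(1000)
print(f"OEC = {oec(scores, labels):.3f}")
```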
Measure on the most representative data you can get.

Considerations when selecting data:
● online vs. offline: A/B test in production with feature flags (one or two variables at a time, agile-style) vs. a stable data set
● implicit vs. explicit: implicit feedback can correlate more with value but omits unseen states
● broad vs. targeted: if explicitly annotating, consider targeting based on diagnostic value or where systems disagree

Resist the temptation to 'clean' data -- you may strip out the very variation your model needs to handle. Instead, include normalization in your model.
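For the online side, here is a sketch of deterministic feature-flag bucketing. Hash-based assignment is a common approach; the flag name and the 50/50 split are made up for illustration.

```python
# Minimal feature-flag bucketing for an online A/B test, varying one
# model variant at a time. Deterministic hashing keeps each user in
# the same arm across requests, so metrics stay comparable.
import hashlib

def variant(user_id: str, flag: str = "ranker_v2", treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

print(variant("user-123"))  # stable across calls
```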
Distinguish intrinsic errors from implementation bugs.
The distinction:
● Error: incorrect output from a model despite the model being correctly implemented.
● Bug: incorrect implementation; the system does something other than what was intended.

Useful for managing expectations about quality and the effort required to improve or fix.

Providing an explanation for output can help make this distinction.

[Diagram contrasting bugs and errors]
Let your customer override the model when they absolutely must get some answer.
Varieties of overrides:
● Always give this answer.
● Never give this answer.

Overrides can apply to sub-models or to the system overall. Beware the potential for 'whack-a-mole'. Feel sad every time they use it.
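A sketch of what such an override layer might look like. The class, fields, and toy model below are illustrative, not from the talk.

```python
# Hypothetical override layer in front of a model: 'always' pins an
# answer for an input; 'never' suppresses one, falling back to the
# model's next-best candidate.
from typing import Callable

class OverridableModel:
    def __init__(self, model: Callable[[str], list[str]]):
        self.model = model                    # returns candidates, best first
        self.always: dict[str, str] = {}      # input -> forced answer
        self.never: dict[str, set[str]] = {}  # input -> banned answers

    def predict(self, x: str) -> str | None:
        if x in self.always:
            return self.always[x]
        banned = self.never.get(x, set())
        for candidate in self.model(x):
            if candidate not in banned:
                return candidate
        return None

model = OverridableModel(lambda x: ["acme corp", "acme inc"])
model.never["ACME"] = {"acme corp"}   # analyst override; log it, and feel sad
print(model.predict("ACME"))          # -> "acme inc"
```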
How does machine learning affect team organization?

Spectrum of options for where machine learning experts sit, between:
● integrate machine learning expertise in every team that needs it
● separate it into an independent, specialist team
Option 1: integrated teams with cross-team interest groups
● Encourages alignment with business goals.
● Makes machine learning collaboration, depth, and reuse harder.
● Best for small, diverse products.
Option 2: independent machine learning team delivering models
● Encourages machine learning collaboration, depth, and reuse.
● Makes alignment with business goals harder.
● Best for products with large, complex model(s).
How does machine learning affect iteration structure?

Pros for shorter iterations:
● more, simpler experiments beat fewer, complex ones
● the value of machine learning means a high cost of delay

Pros for longer iterations:
● innovation takes deep thinking
● more time to control the creation of technical debt
Stuff to do now

How does machine learning affect chunks of work?
● Focus on experiments following the scientific method: hypothesis, measurement, and error analysis.
● Continuously test for regression versus expected measurements.
● Decouple functional tests from model variations.
Focus on experiments with hypothesis, measurement and analysis.
Continuously test for regression versus expected measurements.
With machine learning's dependence on data, changing anything changes everything. This makes machine learning the "high-interest credit card of technical debt".

Determine what counts as a significant change, including looking at the aggregate effect across different data sets.
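One way this might look in a build, as a hedged sketch: the datasets, expected scores, and tolerance below are invented for illustration.

```python
# Regression check: compare a model's metric on pinned evaluation sets
# against last-accepted values, with a tolerance so insignificant
# wobble doesn't fail the build.
EXPECTED = {"news_set": 0.91, "social_set": 0.84}   # last accepted scores
TOLERANCE = 0.01                                     # what we call 'significant'

def check_regressions(measured: dict[str, float]) -> list[str]:
    failures = []
    for dataset, expected in EXPECTED.items():
        if measured[dataset] - expected < -TOLERANCE:
            failures.append(f"{dataset}: {measured[dataset]:.3f} vs {expected:.3f}")
    # Also guard the aggregate: many small dips can matter in sum.
    total_delta = sum(measured[d] - e for d, e in EXPECTED.items())
    if total_delta < -TOLERANCE:
        failures.append(f"aggregate drop of {total_delta:.3f}")
    return failures

assert not check_regressions({"news_set": 0.912, "social_set": 0.845})
```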
Decouple functional tests from model variations.
Options:
● Black-box style: enforce "can't be wrong" ("earmark") input/output pairs. Might lead to spurious test failures.
● Clear-box style: use a mock implementation of the model that produces expected answers.
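A sketch of the clear-box option in test code; the pipeline, mock, and canned answers are all hypothetical.

```python
# Clear-box style: functional tests run against a mock model with
# canned answers, so they don't break when the real model is retrained.
def pipeline(text: str, model) -> str:
    """Toy stand-in for the real system under test."""
    return model.predict(text).upper()

class MockModel:
    CANNED = {"paris": "city", "rosette": "product"}
    def predict(self, text: str) -> str:
        return self.CANNED[text]

def test_pipeline_formats_model_output():
    assert pipeline("paris", MockModel()) == "CITY"

# Black-box style instead pins a few 'can't be wrong' earmark pairs
# against the real model; keep those few, or retraining may fail them
# spuriously.
```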
Stuff to do

How does machine learning affect prioritization?
● Do we need more training data?
● Do we need a richer representation of our data?
● Do we need a combination of models?
● How much could improving a sub-component of the model help?
● What development milestones should we target?
Do we need more training data?
A learning curve where test error is still falling as the training set grows implies that adding training data should bring the test error closer to the desired level.
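A sketch of how you might read this off a learning curve with scikit-learn; the classifier and synthetic data are placeholders, and this assumes that library is available.

```python
# Read a learning curve to decide whether more data will help.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, random_state=0)
sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5))

for n, tr, te in zip(sizes, train_scores.mean(axis=1), test_scores.mean(axis=1)):
    # A test score still climbing at the largest size suggests more data helps;
    # a large, persistent train/test gap suggests variance, not data starvation.
    print(f"n={n:4d}  train={tr:.3f}  test={te:.3f}")
```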
Do we need a richer representation of our data?
A flattened learning curve implies adding data won't help, but a richer data representation may.

That could mean more features, identified by someone with domain expertise analyzing errors. Remember, though, that more features often means less speed.

It could require a new model if the domain information identified is not representable in the existing one.
Do we need a combination of models?
A learning curve with low training error but much higher test error implies the model is overfitting the training set.

Consider training multiple models on random subsets of the data and combining them at runtime to decrease variance while retaining low bias, presuming you can spend the compute.
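For instance, bagging trains each model on a random bootstrap subset and averages them at runtime. This sketch assumes scikit-learn and toy data; `BaggingClassifier` is one off-the-shelf way to do it.

```python
# Bagging to cut variance when the learning curve shows overfitting.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
single = DecisionTreeClassifier(random_state=0)            # low bias, high variance
bagged = BaggingClassifier(single, n_estimators=50, random_state=0)

print("single:", cross_val_score(single, X, y, cv=5).mean())
print("bagged:", cross_val_score(bagged, X, y, cv=5).mean())  # usually higher
```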
How much could improving a sub-component of the model help?
Build an 'oracle' for the sub-component: a stand-in that emits perfect output, read from data.

Annotate some test data to supply that perfect output to the oracle.

Measure the overall system with the oracle turned on; the improvement over the baseline is an upper bound on how much fixing the sub-component could help.
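A toy sketch of the oracle experiment. Every name and the downstream step are invented; the point is the baseline-vs-ceiling comparison.

```python
# Oracle experiment: swap a sub-component for gold annotations and see
# how much the end-to-end metric could improve.
def end_to_end_accuracy(segmenter, examples) -> float:
    correct = 0
    for text, gold_segments, gold_label in examples:
        segments = segmenter(text, gold_segments)
        label = "positive" if len(segments) > 2 else "negative"  # toy downstream step
        correct += label == gold_label
    return correct / len(examples)

def real_segmenter(text, _gold):
    return text.split()        # the current, imperfect component

def oracle_segmenter(_text, gold):
    return gold                # perfect output, read from annotation

examples = [("a b c d", ["a b", "c", "d"], "positive"),
            ("a b c", ["a b c"], "negative")]
baseline = end_to_end_accuracy(real_segmenter, examples)
ceiling = end_to_end_accuracy(oracle_segmenter, examples)
print(f"improving the segmenter is worth at most {ceiling - baseline:.2f}")
```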
What development milestones should we target?
Make it…
● Glued-together with some rules (Prototype)
● Function (Alpha)
● Measurable & inspectable (early Beta)
● Accurate, not slow, nice demo, documented & configurable (late Beta)
● Simple & fast (GA)
● Handle new kinds of input (post-GA)
Questions?

Suggested questions:
● Say more about integrating domain expertise?
● Say more about online vs. offline testing?
● How to manage acquiring data?
● How to recruit machine learning folks?
● What bad habits can ML enable?

Where can I try your stuff? api.rosette.com
You hiring? Yes - basistech.com/careers/

@dmurga
Appendix
Manage Technical Debt

Data is even less visible than code -- it blurs boundaries.
Recruiting machine learning experts

Who:
◦ expertise in sequence models > expertise in the domain
◦ depth in a specific model > breadth over many

Where to find them:
◦ local network: meet-ups, LinkedIn
◦ academic conferences
◦ communities (e.g., Kaggle, users of ML tools)

How to attract them:
◦ explain the purpose & uniqueness of the problem
Online vs. offline evaluation

Online (e.g., A/B)
● Individual decisions must not be mission-critical.
● Needs enough use to gather sufficient statistics in a short time.
● Helps motivate aligning production and development environments.
● If the model is updated online, validate it against offline data periodically to watch for drift.
● Usually focused on extrinsic or distant measures.

Offline
● Always have some of this for long-term protection against regression.
● May be required for intrinsic measurement.
| Epistemology | Exact sciences | Experimental sciences | Engineering | Art |
|---|---|---|---|---|
| Example ... | Theoretical C.S. | Physics | Software | Management |
| Deals with ... | Theorems | Theories | Artifacts | People |
| Truth is ... | Forever | Temporary | "It works" | In the eye of the beholder |
| Parts of machine learning fit all four ... | Learning theory | Model & measure | Systems | Users |

This is great, as long as we don't confuse one kind of work for another.
(This table is an expansion of one in Bottou's ICML 2015 talk.)
Editor's Notes

● Slides 10-14 (the Demonstrable Value slides): Balance: consistency vs. correctness, extrinsic vs. intrinsic, interpretability vs. correctness, precision vs. recall (volume), exploitation vs. exploration. Data: historic, diagnostic, online vs. offline.
● Slide 20: For online A/B tests, choose control. Oracle experiments both for data collection and speed (especially of adding caches).
● Slide 32: "Changing Anything Changes Everything."