Crash Course in A/B Testing
A statistical perspective
Wayne Tai Lee
Roadmap
• What is A/B testing?
• Good experiments and the role of statistics
• Similar to proof by contradiction
• "Tests"
• Big data meets classic asymptotics
• Complaints with classical hypothesis testing
• Alternatives?
What is A/B Testing?
• An industry term for a controlled, randomized experiment comparing
treatment and control groups.
• An age-old problem… especially with humans.
What most people know:
Gather samples → Assign treatments → Apply treatments → Measure outcome → Compare

[Diagram: group A vs. group B, outcome difference marked "?"]
What most people know:
The only difference between A and B is the treatment!
Reality:
Variability comes from the samples/inputs, from the treatment/function,
and from the measurement.

How do we account for all that?
Confounding:
• If there are sources of variability in addition to the treatment effect,
how can we identify and isolate the effect of the treatment?
3 Types of Variability:
• Controlled variability
  • Systematic and desired
  • i.e., our treatment
• Bias
  • Systematic but not desired
  • Anything that can confound our study
• Noise
  • Random error, not desired
  • Won't confound the study, but makes it hard to make a decision
How do we categorize each?
[Same diagram: variability from samples/inputs, from treatment/function,
and from measurement.]
Reality:
For variability from measurement: good instrumentation!
Reality:
For variability from the samples: randomize assignment!
Randomization converts bias into noise.

Your population can still be skewed or biased… but that only restricts
the generalizability of the results.
Reality:
Think about what you want to measure and how!
Minimize the noise level/variability in the metric.
A good experiment in general:
- Use good design and implementation to avoid bias.
- For unavoidable biases, use randomization to turn them into noise.
- Plan well to minimize noise in the data.
How do we deal with noise?
- This is the bread and butter of statisticians!
- Quantify the magnitude of the treatment effect
- Quantify the magnitude of the noise
- Just compare… most of the time
Formalizing the Comparison
Similar to proof by contradiction:
- Assume the difference is due to chance (noise)
- See how strongly the data contradicts that assumption
- If the surprise surpasses a threshold, reject the assumption
- … nothing is "100%"
Difference due to chance?
Red -> treatment; Black -> control

ID         PV
Person 1    39   (treatment)
Person 2   209   (treatment)
Person 3    31   (treatment)
Person 4    98   (control)
Person 5     9   (treatment)
Person 6   151   (control)

Let's measure the difference in means!
Treatment mean = 72; control mean = 124.5
Diff = 72 − 124.5 = −52.5 … so what?
Difference due to chance?
If there were no difference from the treatment, shuffling the treatment
labels over the same people would emulate the randomization of the samples.
One reshuffle of the treatment labels:
Diff = 122.25 − 24 = 98.25
Another reshuffle:
Diff = 107.5 − 53.5 = 54
Difference due to chance?
50,000 repeats later…

[Histogram of the 50,000 shuffled differences, with our original −52.5 marked.]

46.5% of the permutations yielded a difference at least as large in
magnitude as our original sample's.
Are you surprised by the initial results?
"Tests"
Congratulations!
- You just learned the permutation test!
- The 46.5% is the p-value under the permutation test.

Problems:
- Permuting the labels can be computationally costly.
- Not possible before computers!
- Statistical theory says there are many other tests out there.
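As a rough sketch (not from the slides), the whole procedure fits in a few lines of Python; the treatment/control split below is reconstructed from the slide's reported group means (treatment mean 72, control mean 124.5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Page views from the slide; the red/black split is reconstructed from the
# reported group means (treatment mean 72, control mean 124.5).
treatment = np.array([39, 209, 31, 9])   # Persons 1, 2, 3, 5
control = np.array([98, 151])            # Persons 4, 6

observed = treatment.mean() - control.mean()   # -52.5

pooled = np.concatenate([treatment, control])
diffs = np.empty(50_000)
for i in range(diffs.size):
    rng.shuffle(pooled)                        # reshuffle treatment labels
    diffs[i] = pooled[:4].mean() - pooled[4:].mean()

# Two-sided p-value: fraction of shuffles at least as extreme in magnitude.
print(np.mean(np.abs(diffs) >= np.abs(observed)))   # roughly the slide's 46.5%
```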
"Tests"
Standard t-test:
1) Calculate the delta: Δ = mean_treatment − mean_control
2) Assume Δ follows a Normal distribution under the null hypothesis, then
calculate the p-value.

[Figure: Normal curve centered at 0 with both tails beyond ±Δ shaded red.]
p-value = sum of the red areas

3) If the p-value < 0.05, we reject the assumption that there is no
difference between treatment and control.
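For comparison, a hedged sketch using SciPy's two-sample t-test on the same data (Welch's variant; the slides don't say which flavor was used, and n = 6 is far too small for the Normal approximation to be trustworthy):

```python
import numpy as np
from scipy import stats

treatment = np.array([39, 209, 31, 9])
control = np.array([98, 151])

# Welch's t-test does not assume equal variances; with only six samples the
# Normality assumption is shaky, so treat this p-value with suspicion.
result = stats.ttest_ind(treatment, control, equal_var=False)
print(result.statistic, result.pvalue)
if result.pvalue < 0.05:
    print("Reject: the difference is unlikely to be pure noise")
```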
Big Data meets Classic Stats
Wait, our metrics may not be Normal!
- We care about the "mean of the metric," not the actual metric
distribution.
- Central Limit Theorem: the "mean of the metric" will be approximately
Normal if the sample size is LARGE!
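A small simulation (illustrative numbers, not the slides' data) shows the CLT at work: raw page views might be heavily skewed, yet the distribution of sample means tightens toward a Normal shape as n grows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A skewed, decidedly non-Normal "page view" population.
population = rng.exponential(scale=100.0, size=1_000_000)

for n in (2, 30, 1000):
    # 10,000 simulated experiments, each averaging n samples.
    means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    # Skewness drifts toward 0 (the Normal's skewness) as n grows.
    print(n, round(float(stats.skew(means)), 2))
```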
Assumptions with the t-test:
- Normality of %delta
  - Guaranteed with large sample sizes (CLT)
- Independent samples
- Not too many 0's

That's IT!!!
- Easy to automate.
- Simple and general.
What are "Tests"?
• Statistical tests are just procedures that depend on data to make a
decision.
• Engineerify: statistical tests are functions that take in data and
treatments, and return a boolean.

Guarantees:
• By comparing the p-value to a 5% threshold, we control
P( test says difference exists | in reality NO difference ) <= 5%
• By setting the power of the test to 80%, we control
P( test says difference exists | in reality difference exists ) >= 80%
  • Increasing power often requires more data.
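In that engineer's view, a test is literally a function. A minimal sketch (the helper name and the choice of Welch's t-test are mine, not the slides'):

```python
import numpy as np
from scipy import stats

def ab_test(control: np.ndarray, treatment: np.ndarray,
            alpha: float = 0.05) -> bool:
    """Data in, boolean out: True iff the test claims a difference exists.

    Comparing the p-value to alpha = 0.05 is what caps the false positive
    rate P(says difference | no real difference) at 5%.
    """
    _, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    return bool(p_value < alpha)
```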
Meaning:
All treatments, reality vs. test decision, with the guarantees that come
through the conventional thresholds:

Reality \ Test decision      | No difference | Difference exists
Useless treatments           | >= 95%        | <= 5%  (significance level)
  (no real difference)       |               |
Impactful treatments         | < 20%         | >= 80% (power)
  (real difference exists)   |               |
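The power row is the one you buy with sample size. A quick simulation (hypothetical effect size and Normal data, purely illustrative) shows power climbing toward the conventional 80% as n grows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def estimated_power(n, effect=0.2, alpha=0.05, sims=2_000):
    """Fraction of simulated experiments in which the t-test detects a
    true effect of `effect` standard deviations with n users per arm."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        if stats.ttest_ind(a, b, equal_var=False).pvalue < alpha:
            hits += 1
    return hits / sims

for n in (100, 200, 400, 800):
    print(n, estimated_power(n))   # roughly 0.29, 0.51, 0.80, 0.98
```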
Meaning:
- Most appropriate over repeated decision making
  - E.g., deciding spammer or not
- Not seeing a difference could mean:
  - There is no difference
  - Not enough power
- Seeing a difference could mean:
  - There is a difference
  - We got lucky/unlucky
- Your specific treatment is either impactful or not (100% or 0%).
Not what most people want to hear…
Complaints with Hypothesis Testing
• People get really stuck on p-values and tests.
  • Confusing, boring, and formulaic.
• Statistical significance != scientific significance.
  • You could detect a 0.000001 difference… so what?
• Multiple hypothesis testing.
  • A 5% false positive rate is 1 out of 20. Quite high!
  • http://xkcd.com/882/
  • Most published research findings are false (Ioannidis 2005).
• What is it answering?
  • Nothing specific about your test… the probabilities are over
    repeated trials.
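The multiple-testing complaint is easy to demonstrate: run 20 A/A tests (no real difference anywhere; illustrative Normal data), and on average about one comes back "significant" at the 5% threshold:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

false_positives = 0
for _ in range(20):                  # 20 hypotheses, like the xkcd comic
    a = rng.normal(0.0, 1.0, 500)
    b = rng.normal(0.0, 1.0, 500)    # identical distributions: no real effect
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(false_positives)               # about 1 in expectation: 20 * 5%
```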
Abuse: The Prosecutor's Fallacy
Both children of a British mother died within a short period of time.
The mother was convicted of murder because the p-value was low: if she
were innocent, the chance of both children dying would be low.

p-value = P( two deaths | innocent )

In fact, we should be looking at P( innocent | two deaths ).
This is the prosecutor's fallacy.
Example: the base line matters!

[Diagram: among all mothers, the small set of guilty mothers and the much
larger set of innocent mothers each contain cases of two deaths.]

The p-value can be small, but the base line can be huge.
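A back-of-the-envelope Bayes calculation (all numbers invented for illustration, not from the actual case) shows how a tiny p-value can coexist with a high probability of innocence:

```python
# All probabilities below are invented for illustration only.
p_guilty = 1e-6                        # base rate: guilty mothers are rare
p_innocent = 1.0 - p_guilty
p_deaths_given_innocent = 1e-5         # the "small p-value"
p_deaths_given_guilty = 0.5            # assumed

# Bayes' rule: P(innocent | two deaths)
numerator = p_deaths_given_innocent * p_innocent
posterior = numerator / (numerator + p_deaths_given_guilty * p_guilty)
print(posterior)   # ~0.95: probably innocent despite the tiny p-value
```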
Any Alternatives?
P( innocent | two deaths ) is what we want… but does it make sense?

Bayesian methodology:
P( difference exists | data )
This requires knowing P( difference exists ), i.e., the prior.
- Philosophical debate: "What is a probability?"
- Easy to cheat the numbers
- How do we deal with multiple hypothesis testing?
- What are we doing in the company?
- Rumor has it that "multi-armed bandit > A/B testing"?
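For the A/B setting, a standard Bayesian move is the Beta-Binomial model; a minimal sketch with made-up conversion counts and a flat Beta(1, 1) prior (the prior choice is exactly the debatable part):

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up conversion data for two variants.
clicks_a, views_a = 120, 1_000
clicks_b, views_b = 145, 1_000

# Posterior draws under a flat Beta(1, 1) prior on each conversion rate.
post_a = rng.beta(1 + clicks_a, 1 + views_a - clicks_a, size=100_000)
post_b = rng.beta(1 + clicks_b, 1 + views_b - clicks_b, size=100_000)

# P(B beats A | data): the quantity the Bayesian framing reports directly,
# at the price of having committed to a prior.
print(np.mean(post_b > post_a))
```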

Questions?


