ANOVA:
Analysis of Variation
Math 243 Lecture
R. Pruim
The basic ANOVA situation
Two variables: 1 Categorical, 1 Quantitative
Main Question: Does the mean of the quantitative
variable depend on which group (given by the
categorical variable) the individual is in?
If the categorical variable has only 2 values:
• 2-sample t-test
ANOVA allows for 3 or more groups
An example ANOVA situation
Subjects: 25 patients with blisters
Treatments: Treatment A, Treatment B, Placebo
Measurement: # of days until blisters heal
Data [and means]:
• A: 5,6,6,7,7,8,9,10 [7.25]
• B: 7,7,8,9,9,10,10,11 [8.875]
• P: 7,9,9,10,10,10,11,12,13 [10.11]
Are these differences significant?
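For readers who want to follow along in R (the deck itself shows R output later), here is a minimal sketch that enters these data. The data-frame name blisters and its column names are assumptions, not from the slides.

    # hypothetical data frame holding the blister-healing times by treatment
    blisters <- data.frame(
      days = c(5, 6, 6, 7, 7, 8, 9, 10,          # treatment A
               7, 7, 8, 9, 9, 10, 10, 11,        # treatment B
               7, 9, 9, 10, 10, 10, 11, 12, 13), # placebo
      treatment = factor(rep(c("A", "B", "P"), times = c(8, 8, 9)))
    )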
Informal Investigation
Graphical investigation:
• side-by-side box plots
• multiple histograms
Whether the differences between the groups are
significant depends on
• the difference in the means
• the standard deviations of each group
• the sample sizes
ANOVA determines P-value from the F statistic
Side by Side Boxplots
[Figure: side-by-side boxplots of days (vertical axis, roughly 5 to 13) by treatment group A, B, P (horizontal axis).]
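A plot like this can be reproduced with one line of base R (a sketch, reusing the assumed blisters data frame from above):

    # side-by-side boxplots of healing time for each treatment group
    boxplot(days ~ treatment, data = blisters, xlab = "treatment", ylab = "days")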
What does ANOVA do?
At its simplest (there are extensions)
ANOVA tests the following hypotheses:
H0: The means of all the groups are equal.
Ha: Not all the means are equal
• doesn’t say how or which ones differ.
• Can follow up with “multiple comparisons”
Note: we usually refer to the sub-populations as
“groups” when doing ANOVA.
Assumptions of ANOVA
• each group is approximately normal
 check this by looking at histograms and/or
normal quantile plots, or use assumptions
 can handle some nonnormality, but not
severe outliers
• standard deviations of each group are
approximately equal
 rule of thumb: ratio of largest to smallest
sample st. dev. must be less than 2:1
Normality Check
We should check for normality using:
• assumptions about population
• histograms for each group
• normal quantile plot for each group
With such small data sets, there isn't really a
good way to check normality from the data alone,
but we make the common assumption that
physical measurements of people tend to be
normally distributed.
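If you do want to look at the data anyway, normal quantile plots for each group can be drawn roughly like this (a sketch, assuming the blisters data frame introduced earlier):

    # one normal quantile plot per treatment group
    par(mfrow = c(1, 3))
    for (g in levels(blisters$treatment)) {
      x <- blisters$days[blisters$treatment == g]
      qqnorm(x, main = paste("Treatment", g))
      qqline(x)
    }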
Standard Deviation Check
Compare largest and smallest standard deviations:
• largest: 1.764
• smallest: 1.458
• 1.458 x 2 = 2.916 > 1.764, so the 2:1 rule of thumb is satisfied
Note: variance ratio of 4:1 is equivalent.
Variable treatment N Mean Median StDev
days A 8 7.250 7.000 1.669
B 8 8.875 9.000 1.458
P 9 10.111 10.000 1.764
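Group counts, means, and standard deviations like these can be computed in R roughly as follows (again assuming the blisters data frame):

    # n, mean, and standard deviation of days for each treatment
    aggregate(days ~ treatment, data = blisters,
              FUN = function(x) c(n = length(x), mean = mean(x), sd = sd(x)))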
Notation for ANOVA
• n = number of individuals all together
• I = number of groups
• $\bar{x}$ = mean for the entire data set

Group i has
• $n_i$ = # of individuals in group i
• $x_{ij}$ = value for individual j in group i
• $\bar{x}_i$ = mean for group i
• $s_i$ = standard deviation for group i
How ANOVA works (outline)
ANOVA measures two sources of variation in the data and
compares their relative sizes
• variation BETWEEN groups
  • for each data value, look at the difference between
    its group mean and the overall mean: $(\bar{x}_i - \bar{x})^2$
• variation WITHIN groups
  • for each data value, we look at the difference
    between that value and the mean of its group: $(x_{ij} - \bar{x}_i)^2$
The ANOVA F-statistic is the ratio of the
Between Group Variation to the
Within Group Variation:

$$F = \frac{\text{Between}}{\text{Within}} = \frac{MSG}{MSE}$$
A large F is evidence against H0, since it
indicates that there is more difference
between groups than within groups.
Minitab ANOVA Output

Analysis of Variance for days
Source     DF     SS     MS     F      P
treatment   2  34.74  17.37  6.45  0.006
Error      22  59.26   2.69
Total      24  94.00

R ANOVA Output

            Df Sum Sq Mean Sq F value  Pr(>F)
treatment    2   34.7    17.4    6.45  0.0063 **
Residuals   22   59.3     2.7
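The R table above is what summary() prints for a fitted one-way ANOVA; a minimal sketch, where the object name fit and the blisters data frame are assumptions:

    # fit the one-way ANOVA of days on treatment and print the ANOVA table
    fit <- aov(days ~ treatment, data = blisters)
    summary(fit)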
How are these computations
made?
We want to measure how much of the variation in the
data is BETWEEN-group variation and how much is
WITHIN-group variation
For each data value, we calculate its contribution
to:
• BETWEEN group variation: $(\bar{x}_i - \bar{x})^2$
• WITHIN group variation: $(x_{ij} - \bar{x}_i)^2$
An even smaller example
Suppose we have three groups
• Group 1: 5.3, 6.0, 6.7
• Group 2: 5.5, 6.2, 6.4, 5.7
• Group 3: 7.5, 7.2, 7.9
We get the following statistics:
SUMMARY
Groups     Count    Sum    Average    Variance
Column1        3   18.0   6.000000    0.490000
Column2        4   23.8   5.950000    0.176667
Column3        3   22.6   7.533333    0.123333
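A sketch of the same summary in R (the names small, y, and group are assumptions made here, not from the slides):

    # the three small groups from this slide
    small <- data.frame(
      y     = c(5.3, 6.0, 6.7,   5.5, 6.2, 6.4, 5.7,   7.5, 7.2, 7.9),
      group = factor(rep(1:3, times = c(3, 4, 3)))
    )
    # count, sum, mean, and variance for each group
    aggregate(y ~ group, data = small,
              FUN = function(x) c(n = length(x), sum = sum(x), mean = mean(x), var = var(x)))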
Excel ANOVA Output

ANOVA
Source of Variation        SS   df        MS         F   P-value    F crit
Between Groups       5.127333    2  2.563667  10.21575  0.008394  4.737416
Within Groups        1.756667    7  0.250952
Total                6.884000    9

Degrees of freedom:
• Between Groups df: 1 less than the number of groups
• Within Groups df: number of data values minus number of groups
  (equals the df for each group added together)
• Total df: 1 less than the number of individuals (just like other situations)
Computing ANOVA F statistic

                            WITHIN difference:      BETWEEN difference:
                            data - group mean       group mean - overall mean
data   group   group mean   plain    squared        plain    squared
 5.3       1         6.00   -0.70    0.490          -0.44    0.194
 6.0       1         6.00    0.00    0.000          -0.44    0.194
 6.7       1         6.00    0.70    0.490          -0.44    0.194
 5.5       2         5.95   -0.45    0.203          -0.49    0.240
 6.2       2         5.95    0.25    0.063          -0.49    0.240
 6.4       2         5.95    0.45    0.203          -0.49    0.240
 5.7       2         5.95   -0.25    0.063          -0.49    0.240
 7.5       3         7.53   -0.03    0.001           1.09    1.188
 7.2       3         7.53   -0.33    0.109           1.09    1.188
 7.9       3         7.53    0.37    0.137           1.09    1.188
TOTAL                                1.757                    5.106
TOTAL/df                             0.251                    2.553

overall mean: 6.44
F = 2.553 / 0.251 ≈ 10.2 (computed without the rounding in this table, F = 10.21575, matching the Excel output)
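The same decomposition can be checked in R (a sketch, reusing the assumed small data frame from the previous slide):

    # within- and between-group sums of squares for the small example
    grand <- mean(small$y)              # overall mean (about 6.44)
    grpmn <- ave(small$y, small$group)  # each value's group mean
    SSE <- sum((small$y - grpmn)^2)     # within:  about 1.757
    SSG <- sum((grpmn - grand)^2)       # between: about 5.127
    MSE <- SSE / (nrow(small) - 3)      # 10 values - 3 groups = 7 df
    MSG <- SSG / (3 - 1)                # 3 groups - 1 = 2 df
    MSG / MSE                           # F, about 10.2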
Minitab ANOVA Output

Analysis of Variance for days
Source     DF     SS     MS     F      P
treatment   2  34.74  17.37  6.45  0.006
Error      22  59.26   2.69
Total      24  94.00

• treatment DF: 1 less than the # of groups
• Error DF: # of data values - # of groups (equals the df for each group added together)
• Total DF: 1 less than the # of individuals (just like other situations)
Minitab ANOVA Output

Analysis of Variance for days
Source     DF     SS     MS     F      P
treatment   2  34.74  17.37  6.45  0.006
Error      22  59.26   2.69
Total      24  94.00

SS stands for sum of squares. ANOVA splits this into 3 parts:
• treatment (between groups): $\sum_{obs} (\bar{x}_i - \bar{x})^2$
• Error (within groups): $\sum_{obs} (x_{ij} - \bar{x}_i)^2$
• Total: $\sum_{obs} (x_{ij} - \bar{x})^2$
Minitab ANOVA Output

Analysis of Variance for days
Source     DF     SS     MS     F      P
treatment   2  34.74  17.37  6.45  0.006
Error      22  59.26   2.69
Total      24  94.00

MSG = SSG / DFG
MSE = SSE / DFE
F = MSG / MSE
The P-value comes from the F(DFG, DFE) distribution.
(P-values for the F statistic are in Table E)
So How big is F?
Since F = MSG / MSE
        = Mean Square Between / Mean Square Within,
a large value of F indicates relatively more
difference between groups than within groups
(evidence against H0).
To get the P-value, we compare to F(I-1,n-I)-distribution
• I-1 degrees of freedom in numerator (# groups -1)
• n - I degrees of freedom in denominator (rest of df)
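In R, this tail probability is available directly from pf(); for the blister data:

    # P-value: upper tail of the F(2, 22) distribution beyond the observed F = 6.45
    pf(6.45, df1 = 2, df2 = 22, lower.tail = FALSE)   # about 0.006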
Connections between SST, MST, and standard deviation

If we ignore the groups for a moment and just compute the
standard deviation of the entire data set, we see

$$s^2 = \frac{\sum (x_{ij} - \bar{x})^2}{n - 1} = \frac{SST}{DFT} = MST$$

So SST = $(n - 1)s^2$, and MST = $s^2$. That is, SST and MST
measure the TOTAL variation in the data set.
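A quick numerical check of this identity for the blister data (assuming the blisters data frame sketched earlier):

    # SST equals (n - 1) times the overall sample variance of days
    n <- nrow(blisters)
    (n - 1) * var(blisters$days)   # about 94, the Total SS in the ANOVA table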
Connections between SSE, MSE, and standard deviation

Remember: within group i,

$$s_i^2 = \frac{\sum (x_{ij} - \bar{x}_i)^2}{n_i - 1} = \frac{SS[\text{Within Group } i]}{df_i}$$

so SS[Within Group i] = $s_i^2 \,(df_i)$.

This means that we can compute SSE from the standard
deviations and sizes (df) of each group:

$$SSE = SS[\text{Within}] = \sum_i SS[\text{Within Group } i] = \sum_i (n_i - 1)\,s_i^2 = \sum_i s_i^2\,(df_i)$$
Pooled estimate for st. dev

One of the ANOVA assumptions is that all groups have the
same standard deviation. We can estimate this with a
weighted average:

$$s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2 + \dots + (n_I - 1)s_I^2}{n - I}$$

$$s_p^2 = \frac{(df_1)s_1^2 + (df_2)s_2^2 + \dots + (df_I)s_I^2}{df_1 + df_2 + \dots + df_I}$$

$$s_p^2 = \frac{SSE}{DFE} = MSE$$

so MSE is the pooled estimate of variance.
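A sketch of this weighted average for the blister data, reusing the rounded group sizes and standard deviations from the earlier summary slide:

    # pooled variance from the group standard deviations
    ni <- c(8, 8, 9)                 # group sizes for A, B, P
    si <- c(1.669, 1.458, 1.764)     # group standard deviations
    SSE <- sum((ni - 1) * si^2)      # about 59.3
    SSE / (sum(ni) - 3)              # pooled variance, about 2.7 = MSE
    sqrt(SSE / (sum(ni) - 3))        # pooled st. dev., about 1.64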
In Summary

$$SST = \sum_{obs} (x_{ij} - \bar{x})^2 = s^2\,(DFT)$$

$$SSE = \sum_{obs} (x_{ij} - \bar{x}_i)^2 = \sum_{groups} s_i^2\,(df_i)$$

$$SSG = \sum_{obs} (\bar{x}_i - \bar{x})^2 = \sum_{groups} n_i\,(\bar{x}_i - \bar{x})^2$$

$$SSE + SSG = SST; \qquad MS = \frac{SS}{DF}; \qquad F = \frac{MSG}{MSE}$$
R² Statistic

$$R^2 = \frac{SS[\text{Between}]}{SS[\text{Total}]} = \frac{SSG}{SST}$$

R² gives the proportion (percent) of the total variation that is
due to between-group variation.

We will see R² again when we study regression.
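For the blister data, using the SS column of the Minitab table, this is a one-line check:

    # proportion of total variation that is between-group variation
    34.74 / 94.00   # R-squared, about 0.37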
Where’s the Difference?

Analysis of Variance for days
Source     DF     SS     MS     F      P
treatment   2  34.74  17.37  6.45  0.006
Error      22  59.26   2.69
Total      24  94.00

Individual 95% CIs for each mean, based on the pooled StDev (Pooled StDev = 1.641):
Level  N    Mean  StDev
A      8   7.250  1.669
B      8   8.875  1.458
P      9  10.111  1.764
[Interval plot: the three 95% CIs drawn on a common axis running from about 7.5 to 10.5 days.]

Once ANOVA indicates that the groups do not all
appear to have the same means, what do we do?
Clearest difference: P is worse than A (their CIs don’t overlap).
Multiple Comparisons
Once ANOVA indicates that the groups do not all
have the same means, we can compare them two
by two using the 2-sample t test
• We need to adjust our p-value threshold because we
are doing multiple tests with the same data.
• There are several methods for doing this.
• If we really just want to test the difference between one
pair of treatments, we should set the study up that way.
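One standard adjustment is available in base R through pairwise.t.test(); a sketch, assuming the blisters data frame from earlier:

    # all pairwise two-sample t tests with Bonferroni-adjusted p-values
    pairwise.t.test(blisters$days, blisters$treatment,
                    p.adjust.method = "bonferroni")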
Tukey’s Pairwise Comparisons
Tukey's pairwise comparisons
Family error rate = 0.0500
Individual error rate = 0.0199
Critical value = 3.55
Intervals for (column level mean) - (row level mean):

            A         B
B      -3.685
        0.435
P      -4.863    -3.238
       -0.859     0.766

These are 95% family confidence intervals: alpha = 0.0199 is used
for each test, which gives 98.01% CIs for each pairwise difference.
Only P vs A is significant (both endpoints of that interval have the
same sign); the 98% CI for A - P is (-4.86, -0.86).
Tukey’s Method in R
Tukey multiple comparisons of means
95% family-wise confidence level
diff lwr upr
B-A 1.6250 -0.43650 3.6865
P-A 2.8611 0.85769 4.8645
P-B 1.2361 -0.76731 3.2395
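Output in this form comes from applying TukeyHSD() to a fitted aov object; a sketch, reusing the fit object assumed earlier:

    # Tukey honest significant differences for all pairwise comparisons
    TukeyHSD(fit, conf.level = 0.95)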