An Introduction to Statistical
Inference and Its Applications
Michael W. Trosset
Department of Mathematics
College of William & Mary
August 11, 2006
I of dice possess the science
and in numbers thus am skilled.
From The Story of Nala, the third book of the Indian epic Mahábarata.
This book is dedicated to
Richard A. Tapia,
my teacher, mentor, collaborator, and friend.
Contents
1 Experiments
  1.1 Examples
    1.1.1 Spinning a Penny
    1.1.2 The Speed of Light
    1.1.3 Termite Foraging Behavior
  1.2 Randomization
  1.3 The Importance of Probability
  1.4 Games of Chance
  1.5 Exercises

2 Mathematical Preliminaries
  2.1 Sets
  2.2 Counting
  2.3 Functions
  2.4 Limits
  2.5 Exercises

3 Probability
  3.1 Interpretations of Probability
  3.2 Axioms of Probability
  3.3 Finite Sample Spaces
  3.4 Conditional Probability
  3.5 Random Variables
  3.6 Case Study: Padrolling in Milton Murayama's All I asking for is my body
  3.7 Exercises

4 Discrete Random Variables
  4.1 Basic Concepts
  4.2 Examples
  4.3 Expectation
  4.4 Binomial Distributions
  4.5 Exercises

5 Continuous Random Variables
  5.1 A Motivating Example
  5.2 Basic Concepts
  5.3 Elementary Examples
  5.4 Normal Distributions
  5.5 Normal Sampling Distributions
  5.6 Exercises

6 Quantifying Population Attributes
  6.1 Symmetry
  6.2 Quantiles
    6.2.1 The Median of a Population
    6.2.2 The Interquartile Range of a Population
  6.3 The Method of Least Squares
    6.3.1 The Mean of a Population
    6.3.2 The Standard Deviation of a Population
  6.4 Exercises

7 Data
  7.1 The Plug-In Principle
  7.2 Plug-In Estimates of Mean and Variance
  7.3 Plug-In Estimates of Quantiles
    7.3.1 Box Plots
    7.3.2 Normal Probability Plots
  7.4 Kernel Density Estimates
  7.5 Case Study: Are Forearm Lengths Normally Distributed?
  7.6 Exercises

8 Lots of Data
  8.1 Averaging Decreases Variation
  8.2 The Weak Law of Large Numbers
  8.3 The Central Limit Theorem
  8.4 Exercises

9 Inference
  9.1 A Motivating Example
  9.2 Point Estimation
    9.2.1 Estimating a Population Mean
    9.2.2 Estimating a Population Variance
  9.3 Heuristics of Hypothesis Testing
  9.4 Testing Hypotheses About a Population Mean
    9.4.1 One-Sided Hypotheses
    9.4.2 Formulating Suitable Hypotheses
    9.4.3 Statistical Significance and Material Significance
  9.5 Set Estimation
    9.5.1 Sample Size
    9.5.2 One-Sided Confidence Intervals
  9.6 Exercises

10 1-Sample Location Problems
  10.1 The Normal 1-Sample Location Problem
    10.1.1 Point Estimation
    10.1.2 Hypothesis Testing
    10.1.3 Interval Estimation
  10.2 The General 1-Sample Location Problem
    10.2.1 Hypothesis Testing
    10.2.2 Point Estimation
    10.2.3 Interval Estimation
  10.3 The Symmetric 1-Sample Location Problem
  10.4 A Case Study from Neuropsychology
  10.5 Exercises

11 2-Sample Location Problems
  11.1 The Normal 2-Sample Location Problem
    11.1.1 Known Variances
    11.1.2 Unknown Common Variance
    11.1.3 Unknown Variances
  11.2 The Case of a General Shift Family
  11.3 The Symmetric Behrens-Fisher Problem
  11.4 Case Study: Etruscan versus Italian Head Breadth
  11.5 Exercises

12 k-Sample Location Problems
  12.1 The Case of a Normal Shift Family
    12.1.1 The Fundamental Null Hypothesis
    12.1.2 Testing the Fundamental Null Hypothesis
    12.1.3 Planned Comparisons
    12.1.4 Post Hoc Comparisons
  12.2 The Case of a General Shift Family
    12.2.1 The Kruskal-Wallis Test
  12.3 The Behrens-Fisher Problem
  12.4 Exercises

13 Association
  13.1 Categorical Random Variables
  13.2 Normal Random Variables
    13.2.1 Bivariate Normal Distributions
    13.2.2 Bivariate Normal Samples
    13.2.3 Inferences about Correlation
  13.3 Monotonic Association
  13.4 Spurious Association
  13.5 Exercises

14 Simple Linear Regression
  14.1 The Regression Line
  14.2 The Method of Least Squares
  14.3 Computation
  14.4 The Simple Linear Regression Model
  14.5 Regression Diagnostics
  14.6 Exercises

15 Simulation-Based Inference
  15.1 Termite Foraging Revisited

R A Statistical Programming Language
  R.1 Introduction
    R.1.1 What is R?
    R.1.2 Why Use R?
    R.1.3 Installing R
    R.1.4 Learning About R
  R.2 Using R
    R.2.1 Vectors
    R.2.2 R is a Calculator!
    R.2.3 Some Statistics Functions
    R.2.4 Creating New Functions
    R.2.5 Exploring Bivariate Normal Data
    R.2.6 Simulating Termite Foraging
Chapter 1
Experiments
Statistical methods have proven enormously valuable in helping scientists
interpret the results of their experiments—and in helping them design ex-
periments that will produce interpretable results. In a quite general sense,
the purpose of statistical analysis is to organize a data set in ways that reveal
its structure. Sometimes this is so easy that one does not think that one is
doing “statistics;” sometimes it is so difficult that one seeks the assistance
of a professional statistician.
This is a book about how statisticians draw conclusions from experimen-
tal data. Its primary goal is to introduce the reader to an important type of
reasoning that statisticians call “statistical inference.” Rather than provide
a superficial introduction to a wide variety of inferential methods, we will
concentrate on fundamental concepts and study a few methods in depth.
Although statistics can be studied at many levels with varying degrees of
sophistication, there is no escaping the simple fact that statistics is a mathe-
matical discipline. Statistical inference rests on the mathematical foundation
of probability. The better one desires to understand statistical inference, the
more that one needs to know about probability. Accordingly, we will devote
several chapters to probability before we begin our study of statistics. To
motivate the reader to embark on this program of study, the present chapter
describes the important role that probability plays in scientific investigation.
1.1 Examples
This section describes several scientific experiments. Each involves chance
variation in a different way. The common theme is that chance variation
cannot be avoided in scientific experimentation.
1.1.1 Spinning a Penny
In August 1994, while attending the 15th International Symposium on Math-
ematical Programming in Ann Arbor, MI, I read an article in which the au-
thor asserted that spinning (as opposed to tossing/flipping) a typical penny
is not fair, i.e., that Heads and Tails are not equally likely to result. Specif-
ically, the author asserted that the chance of obtaining Heads by spinning a
penny is about 30%.1
I was one of several people in a delegation from Rice University. That
evening, we ended up at a local Subway restaurant for dinner and talk turned
to whether or not spinning pennies is fair. Before long we were each spin-
ning pennies and counting Heads. At first it seemed that about 70% of the
spins were Heads, but this proved to be a temporary anomaly. By the time
that we tired of our informal experiment, our results seemed to confirm the
plausibility of the author’s assertion.
I subsequently used penny-spinning as an example in introductory sta-
tistics courses, each time asserting that the chance of obtaining Heads by
spinning a penny is about 30%. Students found this to be an interesting
bit of trivia, but no one bothered to check it—until 2001. In the spring of
2001, three students at the College of William & Mary spun pennies, counted
Heads, and obtained some intriguing results.
For example, Matt, James, and Sarah selected one penny that had been
minted in the year 2000 and spun it 300 times, observing 145 Heads. This
is very nearly 50% and the discrepancy might easily be explained by chance
variation—perhaps spinning their penny is fair! They tried different pennies
and obtained different percentages. Perhaps all pennies are not alike! (Pennies minted before 1982 are 95% copper and 5% zinc; pennies minted after 1982 are 97.5% zinc and 2.5% copper.) Or perhaps the differences were due to chance variation.

1 Years later, I have been unable to discover what I read or who wrote it. It seems to be widely believed that the chance is less than 50%. The most extreme assertion that I have discovered is by R. L. Graham, D. E. Knuth, and O. Patashnik (Concrete Mathematics, Second Edition, Addison-Wesley, 1994, page 401), who claimed that the chance is approximately 10% "when you spin a newly minted U.S. penny on a smooth table." A fairly comprehensive discussion of "Flipping, spinning, and tilting coins" can be found at http://www.dartmouth.edu/~chance/chance_news/recent_news/chance_news_11.02.html#item2, in which various individuals emphasize that the chance of Heads depends on such factors as the year in which the penny was minted, the surface on which the penny is spun, and the quality of the spin. For pennies minted in the 1960s, one individual reported 1878 Heads in 5520 spins, about 34%.
Were one to undertake a scientific study of penny spinning, there are
many questions that one might ask. Here are several:
• Choose a penny. What is the chance of obtaining Heads by spinning
that penny? (This question is the basis for Exercise 1 at the end of
this chapter.)
• Choose two pennies. Are they equally likely to produce Heads when
spun?
• Choose several pennies minted before 1982 and several pennies minted
after 1982. As groups, are pre-1982 pennies and post-1982 pennies
equally likely to produce Heads when spun?
1.1.2 The Speed of Light
According to Albert Einstein’s special theory of relativity, the speed of light
through a vacuum is a universal constant c. Since 1974, that speed has
been given as c = 299,792.458 kilometers per second.2 Long before Ein-
stein, however, philosophers had debated whether or not light is transmitted
instantaneously and, if not, at what speed it moved. In this section, we
consider Albert Abraham Michelson’s famous 1879 experiment to determine
the speed of light.3
Aristotle believed that light “is not a movement” and therefore has no
speed. Francis Bacon, Johannes Kepler, and René Descartes believed that
light moved with infinite speed, whereas Galileo Galilei thought that its
speed was finite. In 1638 Galileo proposed a terrestrial experiment to resolve
the dispute, but two centuries would pass before this experiment became
technologically practicable. Instead, early determinations of the speed of
light were derived from astronomical data.
2 Actually, a second is defined to be 9,192,631,770 periods of radiation from cesium-133 and a kilometer is defined to be the distance travelled by light through a vacuum in 1/299792458 seconds!

3 A. A. Michelson (1880). Experimental determination of the velocity of light made at the U.S. Naval Academy, Annapolis. Astronomical Papers, 1:109–145. The material in this section is taken from R. J. MacKay and R. W. Oldford (2000), Scientific method, statistical method and the speed of light, Statistical Science, 15:254–278.
The first empirical evidence that light is not transmitted instantaneously
was presented by the Danish astronomer Ole Römer, who studied a series of
eclipses of Io, Jupiter’s largest moon. In September 1676, Römer correctly
predicted a 10-minute discrepancy in the time of an impending eclipse. He
argued that this discrepancy was due to the finite speed of light, which
he estimated to be about 214,000 kilometers per second. In 1729, James
Bradley discovered an annual variation in stellar positions that could be
explained by the earth’s motion if the speed of light was finite. Bradley
estimated that light from the sun took 8 minutes and 12 seconds to reach
the earth and that the speed of light was 301,000 kilometers per second. In
1809, Jean-Baptiste Joseph Delambre used 150 years of data on eclipses of
Jupiter’s moons to estimate that light travels from sun to earth in 8 minutes
and 13.2 seconds, at a speed of 300,267.64 kilometers per second.
In 1849, Hippolyte Fizeau became the first scientist to estimate the speed
of light from a terrestrial experiment, a refinement of the one proposed by
Galileo. An accurately machined toothed wheel was spun in front of a light
source, automatically covering and uncovering it. The light emitted in the
gaps between the teeth travelled 8633 meters to a fixed flat mirror, which
reflected the light back to its source. The returning light struck either a
tooth or a gap, depending on the wheel’s speed of rotation. By varying
the speed of rotation and observing the resulting image from reflected light
beams, Fizeau was able to measure the speed of light.
In 1851, Leon Foucault further refined Galileo’s experiment, replacing
Fizeau’s toothed wheel with a rotating mirror. Michelson further refined
Foucault’s experimental setup. A precise account of the experiment is be-
yond the scope of this book, but Mackay’s and Oldford’s account of how
Michelson produced each of his 100 measurements of the speed of light pro-
vides some sense of what was involved. More importantly, their account
reveals the multiple ways in which Michelson’s measurements were subject
to error.
1. The distance |RM| from the rotating mirror to the fixed mirror
was measured five times, each time allowing for temperature, and
the average used as the “true distance” between the mirrors for
all determinations.
2. The fire for the pump was started about a half hour before mea-
surement began. After this time, there was sufficient pressure to
begin the determinations.
3. The fixed mirror M was adjusted. . . and the heliostat placed and
adjusted so that the Sun’s image was directed at the slit.
4. The revolving mirror was adjusted on two different axes.. . .
5. The distance |SR| from the revolving mirror to the crosshair of
the eyepiece was measured using the steel tape.
6. The vertical crosshair of the eyepiece of the micrometer was cen-
tred on the slit and its position recorded in terms of the position
of the screw.
7. The electric tuning fork was started. The frequency of the fork
was measured two or three times for each set of observations.
8. The temperature was recorded.
9. The revolving mirror was started. The eyepiece was set approx-
imately to capture the displaced image. If the image did not
appear in the eyepiece, the mirror was inclined forward or back
until it came into sight.
10. The speed of rotation of the mirror was adjusted until the image
of the revolving mirror came to rest.
11. The micrometer eyepiece was moved by turning the screw until its
vertical crosshair was centred on the return image of the slit. The
number of turns of the screw was recorded. The displacement is
the difference in the two positions. To express this as the distance
|IS| in millimetres the measured number of turns was multiplied
by the calibrated number of millimetres per turn of the screw.
12. Steps 10 and 11 were repeated until 10 measurements of the dis-
placement |IS| were made.
13. The rotating mirror was stopped, the temperature noted and the
frequency of the electric fork was determined again.
Michelson used the procedure described above to obtain 100 measure-
ments of the speed of light in air. Each measurement was computed using
the average of the 10 measured displacements in Step 12. These measure-
ments, reported in Table 1.1, subsequently were adjusted for temperature
and corrected by a factor based on the refractive index of air. Michelson re-
ported the speed of light in a vacuum as 299,944 ± 51 kilometers per second.

50 −60 100 270 130 50 150 180 180 80
200 180 130 −150 −40 10 200 200 160 160
160 140 160 140 80 0 50 80 100 40
30 −10 10 80 80 30 0 −10 −40 0
80 80 80 60 −80 −80 −180 60 170 150
80 110 50 70 40 40 50 40 40 40
90 10 10 20 0 −30 −40 −60 −50 −40
110 120 90 60 80 −80 40 50 50 −20
90 40 −20 10 −40 10 −10 10 10 50
70 70 10 −60 10 140 150 0 10 70

Table 1.1: Michelson's 100 unadjusted measurements of the speed of light in air. Add 299,800 to obtain measurements in units of kilometers per second.
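Readers who want to explore these measurements for themselves can do so in R, the statistical programming language described in the appendix. The following sketch is ours, not Michelson's or MacKay and Oldford's: it simply retypes Table 1.1 into a vector (the name michelson is arbitrary) and summarizes it.

    # Michelson's 100 unadjusted measurements from Table 1.1,
    # recorded as kilometers per second minus 299,800.
    michelson <- c( 50, -60, 100, 270, 130,  50, 150, 180, 180,  80,
                   200, 180, 130,-150, -40,  10, 200, 200, 160, 160,
                   160, 140, 160, 140,  80,   0,  50,  80, 100,  40,
                    30, -10,  10,  80,  80,  30,   0, -10, -40,   0,
                    80,  80,  80,  60, -80, -80,-180,  60, 170, 150,
                    80, 110,  50,  70,  40,  40,  50,  40,  40,  40,
                    90,  10,  10,  20,   0, -30, -40, -60, -50, -40,
                   110, 120,  90,  60,  80, -80,  40,  50,  50, -20,
                    90,  40, -20,  10, -40,  10, -10,  10,  10,  50,
                    70,  70,  10, -60,  10, 140, 150,   0,  10,  70)
    mean(michelson) + 299800   # average measured speed of light in air (km/s)
    sd(michelson)              # chance variation among the 100 measurements

Averaging all 100 measurements in this way anticipates a theme developed later in the book (Section 8.1, Averaging Decreases Variation).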
1.1.3 Termite Foraging Behavior
In the mid-1980s, Susan Jones was a USDA entomologist and a graduate
student in the Department of Entomology at the University of Arizona. Her
dissertation research concerned the foraging ecology of subterranean termites
in the Sonoran Desert. Her field studies were conducted on the Santa Rita
Experimental Range, about 40 kilometers south of Tucson, AZ:
The foraging activity of H. aureus4
was studied in 30 plots, each
consisting of a grid (6 by 6 m) of 25 toilet-paper rolls which served as
baits. . . Plots were selected on the basis of two criteria: the presence of
H. aureus foragers in dead wood, and separation by at least 12 m from
any other plot. A 6-by-6-m area was then marked off within the vicinity
of infested wood, and toilet-paper rolls were aligned in five rows and
five columns and spaced at 1.5-m intervals. The rolls were positioned
on the soil surface and each was held in place with a heavy wire stake.
All pieces of wood ca. 15 cm long and longer were removed from each
plot and ca. 3 m around the periphery to minimize the availability
of natural wood as an alternative food source. Before infested wood
was removed from the site, termites were allowed to retreat into their
galleries in the soil to avoid depleting the numbers of surface foragers.
All plots were established within a 1-wk period during late June 1984.
Plots were examined once a week during the first 5 wk after es-
tablishment, and then at least once monthly thereafter until August
1985.5
4 Heterotermes aureus (Snyder) is the most common subterranean termite species in the Sonoran Desert. Haverty, Nutting, and LaFage (Density of colonies and spatial distribution of foraging territories of the desert subterranean termite, Heterotermes aureus (Snyder), Environmental Entomology, 4:105–109, 1975) estimated the population density of this species in the Santa Rita Experimental Range at 4.31 × 10⁶ termites per hectare.
5 Jones, Trosset, and Nutting. Biotic and abiotic influences on foraging of Heterotermes aureus (Snyder) (Isoptera: Rhinotermitidae), Environmental Entomology, 16:791–795, 1987.
An important objective of the above study was
. . . to investigate the relationship between food-source distance (on
a scale 6 by 6 m) and foraging behavior. This was accomplished by
analyzing the order in which different toilet-paper rolls in the same
plot were attacked.. . . Specifically, a statistical methodology was devel-
oped to test the null hypothesis that any previously unattacked roll
was equally likely to be the next roll attacked (random foraging). Al-
ternative hypotheses supposed that the likelihood that a previously
unattacked roll would be the next attacked roll decreased with increas-
ing distance from previously attacked rolls (systematic foraging).6
6 Ibid.

[Figure 1.1: Order of H. aureus attack in Plot 20.]
The order in which the toilet-paper rolls in Plot 20 were attacked is
displayed in Figure 1.1. The unattacked rolls are denoted by ◦, the initially
attacked rolls are denoted by •, and the subsequently attacked rolls are
denoted (in order of attack) by 1, 2, 3, 4, and 5. Notice that these numbers
do not specify a unique order of attack:
. . . because the plots were not observed continuously, a number of rolls
seemed to have been attacked simultaneously. Therefore, it was not al-
ways possible to determine the exact order in which they were attacked.
Accordingly, all permutations consistent with the observed ties in order
were considered. . . 7

7 Ibid.
In a subsequent chapter, we will return to the question of whether or
not H. aureus forages randomly and describe the statistical methodology
that was developed to answer it. Along the way, we will develop rigorous
interpretations of the phrases that appear in the above passages, e.g., “per-
mutations”, “equally likely”, “null hypothesis”, “alternative hypotheses”,
etc.
1.2 Randomization
This section illustrates an important principle in the design of experiments.
We begin by describing two famous studies that produced embarrassing re-
sults because they failed to respect this principle.
The Lanarkshire Milk Experiment A 1930 experiment in the schools
of Lanarkshire attempted to ascertain the effect of milk supplements on Scot-
tish children. For four months, 5000 children received a daily supplement of
3/4 pint of raw milk, 5000 children received a daily supplement of 3/4 pint
of pasteurized milk, and 10,000 children received no daily milk supplement.
Each child was weighed (while wearing indoor clothing) and measured for
height before the study commenced (in February) and after it ended (in
June). The final observations of the control group exceeded the final obser-
vations of the treatment groups by average amounts equivalent to 3 months
growth in weight and 4 months growth in height, thereby suggesting that
the milk supplements actually retarded growth! What went wrong?
To explain the results of the Lanarkshire milk experiment, one must
examine how the 20,000 children enrolled in the study were assigned to
the study groups. An initial division into treatment versus control groups
was made arbitrarily, e.g., using the alphabet. However, if the initial di-
vision appeared to produce groups with unbalanced numbers of well-fed or
ill-nourished children, then teachers were allowed to swap children between
the two groups in order to obtain (apparently) better balanced groups. It
is thought that well-meaning teachers, concerned about the plight of ill-
nourished children and “knowing” that milk supplements would be bene-
ficial, consciously or subconsciously availed themselves of the opportunity
to swap ill-nourished children into the treatment group. This resulted in a
treatment group that was lighter and shorter than the control group. Fur-
thermore, it is likely that differences in weight gains were confounded with a
tendency for well-fed children to come from families that could afford warm
(heavier) winter clothing, as opposed to a tendency for ill-nourished children
to come from poor families that provided shabbier (lighter) clothing.8
The Pre-Election Polls of 1948 The 1948 presidential election pitted
Harry Truman, the Democratic incumbent who had succeeded to the pres-
idency when Franklin Roosevelt died in office, against Thomas Dewey, the
Republican governor of New York.9 Each of the three major polling orga-
nizations that covered the campaign predicted that Dewey would win: the
Crossley poll predicted 50% of the popular vote for Dewey and 45% for Tru-
man, the Gallup poll predicted 50% for Dewey and 44% for Truman, and
the Roper poll predicted 53% for Dewey and 38% for Truman. Dewey was
considered “as good as elected” until the votes were actually counted: in
one of the great upsets in American politics, Truman received slightly less
than 50% of the popular vote and Dewey received slightly more than 45%.10
What went wrong?
8 For additional details and commentary, see Student (1931), The Lanarkshire milk experiment, Biometrika, 23:398, and Section 5.4 (Justification of Randomization) of Cox (1958), Planning of Experiments, John Wiley & Sons, New York.

9 As a crusading district attorney in New York City, Dewey was a national hero in the 1930s. In late 1938, two Hollywood films attempted to capitalize on his popularity, RKO's Smashing the Rackets and Warner Brothers' Racket Busters. The special prosecutor in the latter film was played by Walter Abel, who bore a strong physical resemblance to Dewey.

10 A famous photograph shows an exuberant Truman holding a copy of the Chicago Tribune with the headline Dewey Defeats Truman. On election night, Dewey confidently asked his wife, "How will it be to sleep with the president of the United States?" "A high honor, and quite frankly, darling, I'm looking forward to it," she replied. At breakfast next morning, having learned of Truman's upset victory, Frances playfully teased her husband: "Tell me, Tom, am I going to Washington or is Harry coming here?"

Poll predictions are based on data collected from a sample of prospective
voters. For example, Gallup’s prediction was based on 50, 000 interviews. To
assess the quality of Gallup’s prediction, one must examine how his sample
was selected. In 1948, all three polling organizations used a method called
quota sampling that attempts to hand-pick a sample that is representative
of the entire population. First, one attempts to identify several important
characteristics that may be associated with different voting patterns, e.g.,
place of residence, sex, age, race, etc. Second, one attempts to obtain a
sample that resembles the entire population with respect to those charac-
teristics. For example, a Gallup interviewer in St. Louis was instructed to
interview 13 subjects. Exactly 6 were to live in the suburbs, 7 in the city;
exactly 7 were to be men, 6 women. Of the 7 men, exactly 3 were to be less
than 40 years old and exactly 1 was to be black. Monthly rent categories for
the 6 white men were specified. Et cetera, et cetera, et cetera.
Although the quotas used in quota sampling are reasonable, the method
does not work especially well. The reason is that quota sampling does not
specify how to choose the sample within the quotas—these choices are left to
the discretion of the interviewer. Human choice is unpredictable and often
subject to bias. In 1948, Republicans were more accessible than Democrats:
they were more likely to have permanent addresses, own telephones, etc.
Within their carefully prescribed quotas, Gallup interviewers were slightly
more likely to find Republicans than Democrats. This unintentional bias
toward Republicans had distorted previous polls; in 1948, the election was
close enough that the polls picked the wrong candidate.11
In both the Lanarkshire milk experiment and the pre-election polls of
1948, subjective attempts to hand-pick representative samples resulted in
embarrassing failures. Let us now exploit our knowledge of what not to do
and design a simple experiment. An instructor—let’s call him Ishmael—of
one section of Math 106 (Elementary Statistics) has prepared two versions
of a final exam. Ishmael hopes that the two versions are equivalent, but he
recognizes that this will have to be determined experimentally. He therefore
decides to divide his class of 40 students into two groups, each of which will
receive a different version of the final. How should he proceed?
11 For additional details and commentary, see Mosteller et al. (1949), The Pre-Election Polls of 1948, Social Science Research Council, New York, and Section 19.3 (The Year the Polls Elected Dewey) of Freedman, Pisani, and Purves (1998), Statistics, Third Edition, W. W. Norton & Company, New York.

Ishmael recognizes that he requires two comparable groups if he hopes to draw conclusions about his two exams. For example, suppose that he
administers one exam to the students who attained an A average on the
midterms and the other exam to the other students. If the average score on
exam A is 20 points higher than the average score on exam B, then what can
he conclude? It might be that exam A is 20 points easier than exam B. Or it
might be that the two exams are equally difficult, but that the A students are
20 points more capable than the B students. Or it might be that exam A is
actually 10 points more difficult than exam B, but that the A students are 30
points more capable than the B students. There is no way to decide—exam
version and student capability are confounded in this experiment.
The lesson of the Lanarkshire milk experiment and the pre-election polls
of 1948 is that it is difficult to hand-pick representative samples. Accordingly,
Ishmael decides to randomly assign the exams, relying on chance variation to
produce balanced groups. This can be done in various ways, but a common
principle prevails: each student is equally likely to receive exam A or B. Here
are two possibilities:
1. Ishmael creates 40 identical slips of paper. He writes the name of each
student on one slip, mixes the slips in a large jar, then draws 20 slips.
(After each draw, the selected slip is set aside and the next draw uses
only those slips that remain in the jar, i.e., sampling occurs without
replacement.) The 20 students selected receive exam A; the remaining
20 students receive exam B. This is called simple random sampling.
2. Ishmael notices that his class comprises 30 freshmen and 10 non-
freshmen. Believing that it is essential to have 3/4 freshmen in each
group, he assigns freshmen and non-freshmen separately. Again, Ish-
mael creates 40 identical slips of paper and writes the name of each
student on one slip. This time he separates the 30 freshman slips from
the 10 non-freshman slips. To assign the freshmen, he mixes the 30
freshman slips and draws 15 slips. The 15 freshmen selected receive
exam A; the remaining 15 freshmen receive exam B. To assign the non-
freshmen, he mixes the 10 non-freshman slips and draws 5 slips. The
5 non-freshmen selected receive exam A; the remaining 5 non-freshmen
receive exam B. This is called stratified random sampling.
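Both assignment schemes are easy to mimic in R, the language described in the appendix. The sketch below is ours, not part of the original text; the roster is hypothetical, and the function sample simply plays the role of drawing slips from the jar.

    # A hypothetical roster of 30 freshmen and 10 non-freshmen.
    roster <- c(paste("freshman", 1:30), paste("nonfreshman", 1:10))

    # Simple random sampling: 20 of the 40 students receive exam A.
    examA <- sample(roster, 20)
    examB <- setdiff(roster, examA)

    # Stratified random sampling: 15 freshmen and 5 non-freshmen receive exam A.
    freshmen    <- roster[1:30]
    nonfreshmen <- roster[31:40]
    examA.strat <- c(sample(freshmen, 15), sample(nonfreshmen, 5))
    examB.strat <- setdiff(roster, examA.strat)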
1.3 The Importance of Probability
Each of the experiments described in Sections 1.1 and 1.2 reveals something
about the role of chance variation in scientific experimentation.
It is beyond our ability to predict with certainty if a spinning penny will
come to rest with Heads facing up. Even if we believe that the outcome is
completely determined, we cannot measure all the relevant variables with
sufficient precision, nor can we perform the necessary calculations, to know
what it will be. We express our inability to predict Heads versus Tails in the
language of probability, e.g., “there is a 30% chance that Heads will result.”
(Section 3.1 discusses how such statements may be interpreted.) Thus, even
when studying allegedly deterministic phenomena, probability models may
be of enormous value.
When measuring the speed of light, it is not the phenomenon itself
but the experiment that admits chance variation. Despite his excruciat-
ing precautions, Michelson was unable to remove chance variation from his
experiment—his measurements differ. Adjusting the measurements for tem-
perature removes one source of variation, but it is impossible to remove them
all. Later experiments with more sophisticated equipment produced better
measurements, but did not succeed in completely removing all sources of
variation. Experiments are never perfect,12 and probability models may be
of enormous value in modelling errors that the experimenter is unable to
remove or control.
Probability plays another, more subtle role in statistical inference. When
studying termites, it is not clear whether or not one is observing a systematic
foraging strategy. Probability was introduced as a hypothetical benchmark:
what if termites forage randomly? Even if termites actually do forage deter-
ministically, understanding how they would behave if they foraged randomly
provides insights that inform our judgments about their behavior.
Thus, probability helps us answer questions that naturally arise when
analyzing experimental data. Another example arose when we remarked
that Matt, James, and Sarah observed nearly 50% Heads, specifically 145
Heads in 300 spins. What do we mean by “nearly”? Is this an important
discrepancy or can chance variation account for it? To find out, we might
study the behavior of penny spinning under the mathematical assumption
that it is fair. If we learn that 300 spins of a fair penny rarely produce
a discrepancy of 5 (or more) Heads, then we might conclude that penny
spinning is not fair. If we learn that discrepancies of this magnitude are
common, then we would be reluctant to draw this conclusion.

12 Another example is described by Freedman, Pisani, and Purves in Section 6.2 of Statistics (Third Edition, W. W. Norton & Company, 1998). The National Bureau of Standards repeatedly weighs the national prototype kilogram under carefully controlled conditions. The measurements are extremely precise, but nevertheless subject to small variations.
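The question raised in the preceding paragraph can be explored by simulation. The following R sketch, which is ours and not the author's, spins a fair penny 300 times, repeats that experiment 10,000 times, and estimates how often the number of Heads differs from the expected 150 by 5 or more.

    # Each entry of heads is the number of Heads in 300 spins of a fair penny.
    heads <- rbinom(10000, size = 300, prob = 0.5)
    # Estimated probability of a discrepancy of 5 or more Heads.
    mean(abs(heads - 150) >= 5)

In repeated runs this proportion comes out near 0.6; discrepancies of this magnitude are common when spinning a fair penny, so 145 Heads in 300 spins is not, by itself, evidence against fairness.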
The ability to use the tools of probability to understand the behavior
of inferential procedures is so powerful that good experiments are designed
with this in mind. Besides avoiding the pitfalls of subjective methods, ran-
domization allows us to answer questions about how well our methods work.
For example, Ishmael might ask “How likely is simple random sampling to
result in exactly 5 non-freshmen receiving exam A?” Such questions derive
meaning from the use of probability methods.
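Ishmael's question has a precise answer, which R can compute with its hypergeometric probability function. The sketch below is ours; the argument names follow R's dhyper convention (number drawn of the first kind, number of the first kind in the urn, number of the second kind, number of draws).

    # Probability that exactly 5 of the 10 non-freshmen are among the
    # 20 students selected (from 40) to receive exam A.
    dhyper(5, m = 10, n = 30, k = 20)
    # Equivalently, by direct counting:
    choose(10, 5) * choose(30, 15) / choose(40, 20)

Both expressions evaluate to roughly 0.28. Chapter 2 develops the counting tools that justify the second formula.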
When a scientist performs an experiment, s/he observes a sample of
possible experimental values. The set of all values that might have been
observed is a population. Probability helps us describe the population and
understand the data generating process that produced the sample. It also
helps us understand the behavior of the statistical procedures used to analyze
experimental data, e.g., averaging 100 measurements to produce an estimate.
This linkage, of sample to population through probability, is the foundation
on which statistical inference is based. Statistical inference is relatively new,
but the linkage that we have described is wonderfully encapsulated in a
remarkable passage from The Book of Nala, the third book of the ancient
Indian epic Mahábarata.13 Rtuparna examines a single twig of a spreading
tree and accurately estimates the number of fruit on two great branches.
Nala marvels at this ability, and Rtuparna rejoins:
I of dice possess the science
and in numbers thus am skilled.

13 This passage is summarized in Ian Hacking's The Emergence of Probability, Cambridge University Press, 1975, pp. 6–7, which quotes H. H. Milman's 1860 translation.
1.4 Games of Chance
In The Book of Nala, Rtuparna’s skill in estimation is connected with his
prowess at dicing. Throughout history, probabilistic concepts have invari-
ably been illustrated using simple games of chance. There are excellent
reasons for us to embrace this pedagogical cliché. First, many fundamen-
tal probabilistic concepts were invented for the purpose of understanding
certain games of chance; it is pleasant to incorporate a bit of this fascinat-
ing, centuries-old history into a modern program of study. Second, games
of chance serve as idealized experiments that effectively reveal essential is-
sues without the distraction of the many complicated nuances associated
with most scientific experiments. Third, as idealized experiments, games of
chance provide canonical examples of various recurring experimental struc-
tures. For example, tossing a coin is a useful abstraction of such diverse ex-
periments as observing whether a baby is male or female, observing whether
an Alzheimer’s patient does or does not know the day of the week, or ob-
serving whether a pond is or is not inhabited by geese. A scientist who is
familiar with these idealized experiments will find it easier to diagnose the
mathematical structure of an actual scientific experiment.
Many of the examples and exercises in subsequent chapters will refer to
simple games of chance. The present section collects some facts and trivia
about several of the most common.
Coins According to the Encyclopædia Britannica,
“Early cast-bronze animal shapes of known and readily identifi-
able weight, provided for the beam-balance scales of the Middle
Eastern civilizations of the 7th millennium BC, are evidence of
the first attempts to provide a medium of exchange. . . . The first
true coins, that is, cast disks of standard weight and value specifi-
cally designed as a medium of exchange, were probably produced
by the Lydians of Anatolia in about 640 BC from a natural alloy
of gold containing 20 to 35 percent silver.”14
Despite (or perhaps because of) the simplicity of tossing a coin and observing
which side (canonically identified as Heads or Tails) comes to lie facing up,
it appears that coins did not play an important role in the early history
of probability. Nevertheless, the use of coin tosses (or their equivalents) as
randomizing agents is ubiquitous in modern times. In football, an official
tosses a coin and a representative of one team calls Heads or Tails. If his
call matches the outcome of the toss, then his team may choose whether
to kick or receive (or, which goal to defend); otherwise, the opposing team
chooses. A similar practice is popular in tennis, except that one player spins
a racquet instead of tossing a coin. In each of these practices, it is presumed
that the “coin” is balanced or fair, i.e., that each side is equally likely to turn
up; see Section 1.1.1 for a discussion of whether or not spinning a penny is
fair.
14 "Coins and coinage," The New Encyclopædia Britannica in 30 Volumes, Macropædia, Volume 4, 1974, pp. 821–822.
Dice The noun dice is the plural form of the noun die.15 A die is a small
cube, marked on each of its six faces with a number of pips (spots, dots).
To generate a random outcome, the die is cast (tossed, thrown, rolled) on a
smooth surface and the number of pips on the uppermost face is observed.
If each face is equally likely to be uppermost, then the die is balanced or
fair; otherwise, it is unbalanced or loaded.
The casting of dice is an ancient practice. According to F. N. David,
“The earliest dice so far found are described as being of well-fired
buff pottery and date from the beginning of the third millennium.
. . . consecutive order of the pips must have continued for some
time. It is still to be seen in dice of the late XVIIIth Dynasty
(Egypt c. 1370 B.C.), but about that time, or soon after, the
arrangement must have settled into the 2-partitions of 7 familiar
to us at the present time. Out of some fifty dice of the classical
period which I have seen, forty had the ‘modern’ arrangement of
the pips.”16
Today, pure dice games include craps, in which two dice are cast, and
Yahtzee, in which five dice are cast. More commonly, the casting of dice is
used as a randomizing agent in a variety of board games, e.g., backgammon
and Monopoly™. Typically, two dice are cast and the outcome is defined to be the sum of the pips on the two uppermost faces.

15 In The Devil's Dictionary, Ambrose Bierce defined die as the singular of dice, remarking that "we seldom hear the word, because there is a prohibitory proverb, 'Never say die.' "

16 F. N. David, Games, Gods and Gambling: A History of Probability and Statistical Ideas, 1962, p. 10 (Dover Publications).
Astragali Even more ancient than dice are astragali, the singular form
of which is astragalus. The astragalus is a bone in the heel of many verte-
brate animals; it lies directly above the talus, and is roughly symmetrical in
hooved mammals, e.g., deer. Such astragali have been found in abundance
in excavations of prehistoric man, who may have used them for counting.
They were used for board games at least as early as the First Dynasty in
Egypt (c. 3500 B.C.) and were the principal randomizing agent in classical
Greece and Rome. According to F. N. David,
“The astragalus has only four sides on which it will rest, since the
other two are rounded. . . A favourite research of the scholars of
the Italian Renaissance was to try to deduce the scoring used. It
was generally agreed from a close study of the writings of classical
times that the upper side of the bone, broad and slightly convex,
counted 4; the opposite side, broad and slightly concave, 3; the
lateral side, flat and narrow, scored 1, and the opposite narrow
lateral side, which is slightly hollow, 6. The numbers 2 and 5
were omitted."17

17 F. N. David, Games, Gods and Gambling: A History of Probability and Statistical Ideas, 1962, p. 7 (Dover Publications).
Accordingly, we can think of an astragalus as a 4-sided die with possible
outcomes 1, 3, 4, and 6. An astragalus is not balanced. From tossing a
modern sheep’s astragalus, David estimated the chances of throwing a 1 or
a 6 at roughly 10 percent each and the chances of throwing a 3 or a 4 at
roughly 40 percent each.
The Greeks and Romans invariably cast four astragali. The most de-
sirable result, the venus, occurred when the four uppermost sides were all
different; the dog, which occurred when each uppermost side was a 1, was
undesirable. In Asia Minor, five astragali were cast and different results were
identified with the names of different gods, e.g., the throw of Saviour Zeus
(one one, two threes, and two fours), the throw of child-eating Cronos (three
fours and two sixes), etc. In addition to their use in gaming, astragali were
cast for the purpose of divination, i.e., to ascertain if the gods favored a
proposed undertaking.
In 1962, David reported that “it is not uncommon to see children in
France and Italy playing games with them [astragali] today;” for the most
part, however, unbalanced astragali have given way to balanced dice. A
whimsical contemporary example of unbalanced dice that evoke astragali
are the pig dice used in Pass the Pigs™ (formerly Pigmania™).
Cards David estimated that playing cards “were not invented until c. A.D.
1350, but once in use, they slowly began to display dice both as instruments
of play and for fortune-telling.” By a standard deck of playing cards, we
shall mean the familiar deck of 52 cards, organized into four suits (clubs,
diamonds, hearts, spades) of thirteen ranks or denominations (2–10, jack,
queen, king, ace). The diamonds and hearts are red; the clubs and spades
are black. When we say that a deck has been shuffled, we mean that the
order of the cards in the deck has been randomized. When we say that cards
are dealt, we mean that they are removed from a shuffled deck in sequence,
beginning with the top card. The cards received by a player constitute that
player’s hand. The quality of a hand depends on the game being played;
however, unless otherwise specified, the order in which the player received
the cards in her hand is irrelevant.
Poker involves hands of five cards. The following types of hands are
arranged in order of decreasing value. An ace is counted as either the highest
or the lowest rank, whichever results in the more valuable hand. Thus, every
possible hand is of exactly one type.
1. A straight flush contains five cards of the same suit and of consecutive
ranks.
2. A hand with 4 of a kind contains cards of exactly two ranks, four cards
of one rank and one of the other rank.
3. A full house contains cards of exactly two ranks, three cards of one
rank and two cards of the other rank.
4. A flush contains five cards of the same suit, not of consecutive rank.
5. A straight contains five cards of consecutive rank, not all of the same
suit.
6. A hand with 3 of a kind contains cards of exactly three ranks, three
cards of one rank and one card of each of the other two ranks.
7. A hand with two pairs contains cards of exactly three ranks,
two cards of one rank, two cards of a second rank, and one card of a
third rank.
8. A hand with one pair contains cards of exactly four ranks, two cards
of one rank and one card each of a second, third, and fourth rank.
9. Any other hand contains no pair.
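A standard deck and a dealt hand are easy to represent in R. The following sketch is ours, not part of the original text; because each card appears exactly once in the vector deck, drawing 5 elements without replacement mimics dealing from a shuffled deck.

    suits <- c("clubs", "diamonds", "hearts", "spades")
    ranks <- c(2:10, "jack", "queen", "king", "ace")
    deck  <- paste(rep(ranks, times = 4), "of", rep(suits, each = 13))  # 52 cards
    hand  <- sample(deck, 5)   # deal a 5-card hand (drawing without replacement)
    hand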
Urns For the purposes of this book, an urn is a container from which ob-
jects are drawn, e.g., a box of raffle tickets or a jar of marbles. Modern
lotteries often select winning numbers by using air pressure to draw num-
bered ping pong balls from a clear plastic container. When an object is
drawn from an urn, it is presumed that each object in the urn is equally
likely to be selected.
That urn models have enormous explanatory power was first recognized
by J. Bernoulli (1654–1705), who used them in Ars Conjectandi, his brilliant
treatise on probability. It is not difficult to devise urn models that are
equivalent to other randomizing agents considered in this section.
Example 1.1: Urn Model for Tossing a Fair Coin Imagine an
urn that contains one red marble and one black marble. A marble is drawn
from this urn. If it is red, then the outcome is Heads; if it is black, then the
outcome is Tails. This is equivalent to tossing a fair coin once.
Example 1.2: Urn Model for Throwing a Fair Die Imagine an
urn that contains six tickets, labelled 1 through 6. Drawing one ticket from
this urn is equivalent to throwing a fair die once. If we want to throw the
die a second time, then we return the selected ticket to the urn and repeat
the procedure. This is an example of drawing with replacement.
Example 1.3: Urn Model for Throwing an Astragalus Imagine
an urn that contains ten tickets, one labelled 1, four labelled 3, four labelled
4, and one labelled 6. Drawing one ticket from this urn is equivalent to
throwing an astragalus once. If we want to throw four astragali, then we
repeat this procedure four times, each time returning the selected ticket to
the urn. This is another example of drawing with replacement.
Example 1.4: Urn Model for Drawing a Poker Hand Place a
standard deck of playing cards in an urn. Draw one card, then a second,
then a third, then a fourth, then a fifth. Because each card in the deck can
only be dealt once, we do not return a card to the urn after drawing it. This
is an example of drawing without replacement.
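Each of these urn models can be imitated with R's sample function; whether a ticket is returned to the urn is controlled by the replace argument. The sketch below is ours, not part of the original text.

    # Example 1.2: drawing with replacement mimics throwing a fair die twice.
    sample(1:6, size = 2, replace = TRUE)

    # Example 1.3: an unbalanced "die" with faces 1, 3, 4, 6; the prob argument
    # plays the role of the 1-4-4-1 mix of tickets in the urn.
    sample(c(1, 3, 4, 6), size = 4, replace = TRUE, prob = c(0.1, 0.4, 0.4, 0.1))

    # Example 1.4: drawing without replacement mimics dealing 5 of 52 cards.
    sample(1:52, size = 5, replace = FALSE)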
In the preceding examples, the statements about the equivalence of the
urn model and another randomizing agent were intended to appeal to your
intuition. Subsequent chapters will introduce mathematical tools that will
allow us to validate these assertions.
1.5 Exercises
1. Select a penny minted in any year other than 1982. Find a smooth
surface on which to spin it. Practice spinning the penny until you are
able to do so in a reasonably consistent manner. Develop an experi-
mental protocol that specifies precisely how you spin your penny. Spin
your penny 100 times in accordance with this protocol. Record the
outcome of each spin, including aberrant events (e.g., the penny spun
off the table and therefore neither Heads nor Tails was recorded).
Your report of this experiment should include the following:
• The penny itself, taped to your report. Note any features of the
penny that seem relevant, e.g., the year and city in which it was
minted, its condition, etc.
• A description of the surface on which you spun it and of any
possibly relevant environmental considerations.
• A description of your experimental protocol.
• The results of your 100 spins. This means a list, in order, of what
happened on each spin.
• A summary of your results. This means (i) the total number of
spins that resulted in either Heads or Tails (ideally, this number,
n, will equal 100) and (ii) the number of spins that resulted in
Heads (y).
• The observed frequency of heads, y/n.
2. The Department of Mathematics at the College of William & Mary is
housed in Jones Hall. To find the department, one passes through the
building’s main entrance, into its lobby, and immediately turns left.
In Jones 131, the department’s seminar room, is a long rectangular
wood table. Let L denote the length of this table. The purpose of this
experiment is to measure L using a standard (12-inch) ruler.
You will need a 12-inch ruler that is marked in increments of 1/16
inches. Groups of students may use the same ruler, but it is important
that each student obtain his/her own measurement of L. Please do
not attempt to obtain your measurement at a time when Jones 131 is
being used for a seminar or faculty meeting!
Your report of this experiment should include the following informa-
tion:
• A description of the ruler that you used. From what was it made?
In what condition is it? Who owns it? What other students used
the same ruler?
• A description of your measuring protocol. How did you position
the ruler initially? How did you reposition it? How did you ensure
that you were measuring along a straight line?
• An account of the experiment. When did you measure? How
long did it take you? Please note any unusual circumstances that
might bear on your results.
• Your estimate (in inches, to the nearest 1/16 inch) of L.
3. Statisticians say that a procedure that tends to either underestimate or
overestimate the quantity that it is being used to determine is biased.
(a) In the preceding problem, suppose that you tried to measure the
length of the table with a ruler that—unbeknownst to you—was
really 11.9 inches long instead of the nominal 12 inches. Would
you tend to underestimate or overestimate the true length of the
table? Explain.
(b) In the Lanarkshire milk experiment, would a tendency for well-
fed children to wear heavier winter clothing than ill-nourished
children cause weight gains due to milk supplements to be under-
estimated or overestimated? Explain.
Chapter 2
Mathematical Preliminaries
This chapter collects some fundamental mathematical concepts that we will
use in our study of probability and statistics. Most of these concepts should
seem familiar, although our presentation of them may be a bit more formal
than you have previously encountered. This formalism will be quite useful
as we study probability, but it will tend to recede into the background as we
progress to the study of statistics.
2.1 Sets
It is an interesting bit of trivia that “set” has the most different meanings of
any word in the English language. To describe what we mean by a set, we
suppose the existence of a designated universe of possible objects. In this
book, we will often denote the universe by S. By a set, we mean a collection
of objects with the property that each object in the universe either does or
does not belong to the collection. We will tend to denote sets by uppercase
Roman letters toward the beginning of the alphabet, e.g., A, B, C, etc.
The set of objects that do not belong to a designated set A is called the
complement of A. We will denote complements by Aᶜ, Bᶜ, Cᶜ, etc. The complement of the universe is the empty set, denoted Sᶜ = ∅.
An object that belongs to a designated set is called an element or member
of that set. We will tend to denote elements by lower case Roman letters
and write expressions such as x ∈ A, pronounced “x is an element of the
set A.” Sets with a small number of elements are often identified by simple
enumeration, i.e., by writing down a list of elements. When we do so, we will
enclose the list in braces and separate the elements by commas or semicolons.
For example, the set of all feature films directed by Sergio Leone is
{ A Fistful of Dollars;
For a Few Dollars More;
The Good, the Bad, and the Ugly;
Once Upon a Time in the West;
Duck, You Sucker!;
Once Upon a Time in America }
In this book, of course, we usually will be concerned with sets defined by
certain mathematical properties. Some familiar sets to which we will refer
repeatedly include:
• The set of natural numbers, N = {1, 2, 3, . . .}.
• The set of integers, Z = {. . . , −3, −2, −1, 0, 1, 2, 3, . . .}.
• The set of real numbers, ℜ = (−∞, ∞).
If A and B are sets and each element of A is also an element of B, then
we say that A is a subset of B and write A ⊂ B. For example,
N ⊂ Z ⊂ ℜ.
Quite often, a set A is defined to be those elements of another set B that
satisfy a specified mathematical property. In such cases, we often specify A
by writing a generic element of B to the left of a colon, the property to the
right of the colon, and enclosing this syntax in braces. For example,
A = {x ∈ Z : x² < 5} = {−2, −1, 0, 1, 2},
is pronounced "A is the set of integers x such that x² is less than 5."
Given sets A and B, there are several important sets that can be con-
structed from them. The union of A and B is the set
A ∪ B = {x ∈ S : x ∈ A or x ∈ B}
and the intersection of A and B is the set
A ∩ B = {x ∈ S : x ∈ A and x ∈ B}.
For example, if A is as above and
B = {x ∈ Z : |x − 2| ≤ 1} = {1, 2, 3},
then A ∪ B = {−2, −1, 0, 1, 2, 3} and A ∩ B = {1, 2}. Notice that unions
and intersections are symmetric constructions, i.e., A ∪ B = B ∪ A and
A ∩ B = B ∩ A.
If A∩B = ∅, i.e., if A and B have no elements in common, then A and B
are disjoint or mutually exclusive. By convention, the empty set is a subset
of every set, so
∅ ⊂ A ∩ B ⊂ A ⊂ A ∪ B ⊂ S
and
∅ ⊂ A ∩ B ⊂ B ⊂ A ∪ B ⊂ S.
These facts are illustrated by the Venn diagram in Figure 2.1, in which sets
are qualitatively indicated by connected subsets of the plane. We will make
frequent use of Venn diagrams as we develop basic facts about probabilities.
Figure 2.1: A Venn diagram. The shaded region represents the intersection
of the nondisjoint sets A and B.
It is often useful to extend the concepts of union and intersection to more
than two sets. Let {Ak} denote an arbitrary collection of sets, where k is an
index that identifies the set. Then x ∈ S is an element of the union of {Ak},
denoted ∪k Ak, if and only if there exists some k0 such that x ∈ Ak0. Also, x ∈ S is an
element of the intersection of {Ak}, denoted ∩k Ak, if and only if x ∈ Ak for every k.
For example, if Ak = {0, 1, . . . , k} for k = 1, 2, 3, . . ., then
∪k Ak = {0, 1, 2, 3, . . .}
and
∩k Ak = {0, 1}.
Furthermore, it will be important to distinguish collections of sets with
the following property:
Definition 2.1 A collection of sets is pairwise disjoint if and only if each
pair of sets in the collection has an empty intersection.
Unions and intersections are related to each other by two distributive
laws:
B ∩ (∪k Ak) = ∪k (B ∩ Ak)
and
B ∪ (∩k Ak) = ∩k (B ∪ Ak).
Furthermore, unions and intersections are related to complements by De Morgan’s laws:
(∪k Ak)c = ∩k (Ak)c
and
(∩k Ak)c = ∪k (Ak)c.
The first law states that an object is not in any of the sets in the collection if and only
if it is in the complement of each set; the second law states that an object is not in every
set in the collection if and only if it is in the complement of at least one set.
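These identities are easy to check numerically for finite sets. The following Python sketch is only an illustration: the universe S is chosen arbitrarily as the integers from −5 to 5, and the sets A and B are the ones defined earlier in this section.

# Verify the set identities on finite sets, using Python's built-in set type.
S = set(range(-5, 6))                    # an arbitrary finite "universe"
A = {x for x in S if x * x < 5}          # {-2, -1, 0, 1, 2}
B = {x for x in S if abs(x - 2) <= 1}    # {1, 2, 3}

def complement(E):
    """Complement of E relative to the universe S."""
    return S - E

# De Morgan's laws for a pair of sets:
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# One distributive law: B intersect (A1 union A2) = (B intersect A1) union (B intersect A2).
A1, A2 = {0, 1}, {2, 3, 4}
assert B & (A1 | A2) == (B & A1) | (B & A2)

print(sorted(A | B))   # [-2, -1, 0, 1, 2, 3]
print(sorted(A & B))   # [1, 2]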
Finally, we consider another important set that can be constructed from
A and B.
Definition 2.2 The Cartesian product of two sets A and B, denoted A×B,
is the set of ordered pairs whose first component is an element of A and whose
second component is an element of B, i.e.,
A × B = {(a, b) : a ∈ A, b ∈ B}.
For example, if A = {−2, −1, 0, 1, 2} and B = {1, 2, 3}, then the set A × B
contains the following elements:
(−2, 1) (−1, 1) (0, 1) (1, 1) (2, 1)
(−2, 2) (−1, 2) (0, 2) (1, 2) (2, 2)
(−2, 3) (−1, 3) (0, 3) (1, 3) (2, 3)
A familiar example of a Cartesian product is the Cartesian coordinatization
of the plane,
ℜ² = ℜ × ℜ = {(x, y) : x, y ∈ ℜ}.
Of course, this construction can also be extended to more than two sets, e.g.,
ℜ³ = {(x, y, z) : x, y, z ∈ ℜ}.
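For finite sets the Cartesian product can be enumerated directly. The short Python sketch below, offered only as an illustration, rebuilds the 15-element product A × B tabulated above.

from itertools import product

A = [-2, -1, 0, 1, 2]
B = [1, 2, 3]

# itertools.product yields the ordered pairs (a, b) with a drawn from A and b from B.
AxB = list(product(A, B))

print(len(AxB))        # 15, i.e., #(A) times #(B)
print((-2, 3) in AxB)  # True
print((3, -2) in AxB)  # False: the components of an ordered pair are not interchangeable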
2.2 Counting
This section is concerned with determining the number of elements in a
specified set. One of the fundamental concepts that we will exploit in our
brief study of counting is the notion of a one-to-one correspondence between
two sets. We begin by illustrating this notion with an elementary example.
Example 2.1 Define two sets,
A1 = {diamond, emerald, ruby, sapphire}
and
B = {blue, green, red, white} .
The elements of these sets can be paired in such a way that to each element
of A1 there is assigned a unique element of B and to each element of B there
is assigned a unique element of A1. Such a pairing can be accomplished in
various ways; a natural assignment is the following:
diamond ↔ white
emerald ↔ green
ruby ↔ red
sapphire ↔ blue
This assignment exemplifies a one-to-one correspondence.
Now suppose that we augment A1 by forming
A2 = A1 ∪ {aquamarine} .
Although we can still assign a color to each gemstone, we cannot do so in
such a way that each gemstone corresponds to a different color. There does
not exist a one-to-one correspondence between A2 and B.
From Example 2.1, we abstract
Definition 2.3 Two sets can be placed in one-to-one correspondence if their
elements can be paired in such a way that each element of either set is asso-
ciated with a unique element of the other set.
The concept of one-to-one correspondence can then be exploited to obtain a
formal definition of a familiar concept:
Definition 2.4 A set A is finite if there exists a natural number N such
that the elements of A can be placed in one-to-one correspondence with the
elements of {1, 2, . . . , N}.
If A is finite, then the natural number N that appears in Definition 2.4
is unique. It is, in fact, the number of elements in A. We will denote this
quantity, sometimes called the cardinality of A, by #(A). In Example 2.1
above, #(A1) = #(B) = 4 and #(A2) = 5.
The Multiplication Principle Most of our counting arguments will rely
on a fundamental principle, which we illustrate with an example.
Example 2.2 Suppose that each gemstone in Example 2.1 has been
mounted on a ring. You desire to wear one of these rings on your left hand
and another on your right hand. How many ways can this be done?
First, suppose that you wear the diamond ring on your left hand. Then
there are three rings available for your right hand: emerald, ruby, sapphire.
Next, suppose that you wear the emerald ring on your left hand. Again
there are three rings available for your right hand: diamond, ruby, sapphire.
Suppose that you wear the ruby ring on your left hand. Once again there
are three rings available for your right hand: diamond, emerald, sapphire.
Finally, suppose that you wear the sapphire ring on your left hand. Once
more there are three rings available for your right hand: diamond, emerald,
ruby.
We have counted a total of 3 + 3 + 3 + 3 = 12 ways to choose a ring for
each hand. Enumerating each possibility is rather tedious, but it reveals a
useful shortcut. There are 4 ways to choose a ring for the left hand and, for
each such choice, there are three ways to choose a ring for the right hand.
Hence, there are 4 · 3 = 12 ways to choose a ring for each hand. This is an
instance of a general principle:
Suppose that two decisions are to be made and that there are n1
possible outcomes of the first decision. If, for each outcome of
the first decision, there are n2 possible outcomes of the second
decision, then there are n1n2 possible outcomes of the pair of
decisions.
Permutations and Combinations We now consider two more concepts
that are often employed when counting the elements of finite sets. We mo-
tivate these concepts with an example.
Example 2.3 A fast-food restaurant offers a single entree that comes
with a choice of 3 side dishes from a total of 15. To address the perception
that it serves only one dinner, the restaurant conceives an advertisement
that identifies each choice of side dishes as a distinct dinner. Assuming that
each entree must be accompanied by 3 distinct side dishes, e.g., {stuffing,
mashed potatoes, green beans} is permitted but {stuffing, stuffing, mashed
potatoes} is not, how many distinct dinners are available?1
1 This example is based on an actual incident involving the Boston Chicken (now Boston Market) restaurant chain and a high school math class in Denver, CO.
Answer 2.3a The restaurant reasons that a customer, asked to choose
3 side dishes, must first choose 1 side dish from a total of 15. There are
15 ways of making this choice. Having made it, the customer must then
choose a second side dish that is different from the first. For each choice of
the first side dish, there are 14 ways of choosing the second; hence 15 × 14
ways of choosing the pair. Finally, the customer must choose a third side
dish that is different from the first two. For each choice of the first two,
there are 13 ways of choosing the third; hence 15 × 14 × 13 ways of choosing
the triple. Accordingly, the restaurant advertises that it offers a total of
15 × 14 × 13 = 2730 possible dinners.
Answer 2.3b A high school math class considers the restaurant’s claim
and notes that the restaurant has counted side dishes of
{ stuffing, mashed potatoes, green beans },
{ stuffing, green beans, mashed potatoes },
{ mashed potatoes, stuffing, green beans },
{ mashed potatoes, green beans, stuffing },
{ green beans, stuffing, mashed potatoes }, and
{ green beans, mashed potatoes, stuffing }
as distinct dinners. Thus, the restaurant has counted dinners that differ only
with respect to the order in which the side dishes were chosen as distinct.
Reasoning that what matters is what is on one’s plate, not the order in
which the choices were made, the math class concludes that the restaurant
has overcounted. As illustrated above, each triple of side dishes can be
ordered in 6 ways: the first side dish can be any of 3, the second side dish
can be any of the remaining 2, and the third side dish must be the remaining
1 (3 × 2 × 1 = 6). The math class writes a letter to the restaurant, arguing
that the restaurant has overcounted by a factor of 6 and that the correct
count is 2730÷6 = 455. The restaurant cheerfully agrees and donates $1000
to the high school’s math club.
From Example 2.3 we abstract the following definitions:
Definition 2.5 The number of permutations (ordered choices) of r objects
from n objects is
P(n, r) = n × (n − 1) × · · · × (n − r + 1).
Definition 2.6 The number of combinations (unordered choices) of r ob-
jects from n objects is
C(n, r) = P(n, r) ÷ P(r, r).
In Example 2.3, the restaurant claimed that it offered P(15, 3) dinners, while
the math class argued that a more plausible count was C(15, 3). There, as
always, the distinction was made on the basis of whether the order of the
choices is or is not relevant.
Permutations and combinations are often expressed using factorial nota-
tion. Let
0! = 1
and let k be a natural number. Then the expression k!, pronounced “k
factorial,” is defined recursively by the formula
k! = k × (k − 1)!.
For example,
3! = 3 × 2! = 3 × 2 × 1! = 3 × 2 × 1 × 0! = 3 × 2 × 1 × 1 = 3 × 2 × 1 = 6.
Because
n! = n × (n − 1) × · · · × (n − r + 1) × (n − r) × · · · × 1
= P(n, r) × (n − r)!,
we can write
P(n, r) = n! / (n − r)!
and
C(n, r) = P(n, r) ÷ P(r, r) = [n! / (n − r)!] ÷ [r! / (r − r)!] = n! / [r! (n − r)!].
Finally, we note (and will sometimes use) the popular notation
C(n, r) = (n choose r),
pronounced “n choose r”.
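These formulas translate directly into a few lines of Python. The following sketch is illustrative only; it uses the standard library (math.comb and math.perm require Python 3.8 or later) and recomputes the counts from Example 2.3.

from math import factorial, comb, perm

def P(n, r):
    """Number of permutations (ordered choices) of r objects from n."""
    return factorial(n) // factorial(n - r)

def C(n, r):
    """Number of combinations (unordered choices) of r objects from n."""
    return P(n, r) // factorial(r)

# Example 2.3: the restaurant's count versus the math class's count.
print(P(15, 3))                   # 2730 ordered choices of 3 side dishes
print(C(15, 3))                   # 455 unordered choices, i.e., 2730 / 6
print(perm(15, 3), comb(15, 3))   # the built-in functions agree: 2730 455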
Example 2.4 A coin is tossed 10 times. How many sequences of 10
tosses result in a total of exactly 2 Heads?
Answer A sequence of Heads and Tails is completely specified by
knowing which tosses resulted in Heads. To count how many sequences
result in 2 Heads, we simply count how many ways there are to choose the
pair of tosses on which Heads result. This is choosing 2 tosses from 10, or
(10 choose 2) = 10! / [2! (10 − 2)!] = (10 · 9)/(2 · 1) = 45.
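The same count can be obtained by brute force, which also illustrates the correspondence between sequences with exactly 2 Heads and choices of 2 tosses from 10. A minimal sketch:

from itertools import product

# Enumerate all 2**10 = 1024 sequences of 10 tosses and count those with exactly 2 Heads.
count = sum(1 for seq in product("HT", repeat=10) if seq.count("H") == 2)
print(count)   # 45, agreeing with (10 choose 2)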
Example 2.5 Consider the hypothetical example described in Section 1.2.
In a class of 40 students, how many ways can one choose 20 students to
receive exam A? Assuming that the class comprises 30 freshmen and 10 non-
freshmen, how many ways can one choose 15 freshmen and 5 non-freshmen
to receive exam A?
Solution There are
(40 choose 20) = 40! / [20! (40 − 20)!] = (40 · 39 · · · 22 · 21)/(20 · 19 · · · 2 · 1) = 137,846,528,820
ways to choose 20 students from 40. There are
(30 choose 15) = 30! / [15! (30 − 15)!] = (30 · 29 · · · 17 · 16)/(15 · 14 · · · 2 · 1) = 155,117,520
ways to choose 15 freshmen from 30 and
(10 choose 5) = 10! / [5! (10 − 5)!] = (10 · 9 · 8 · 7 · 6)/(5 · 4 · 3 · 2 · 1) = 252
ways to choose 5 non-freshmen from 10; hence,
155,117,520 · 252 = 39,089,615,040
ways to choose 15 freshmen and 5 non-freshmen to receive exam A. Notice
that, of all the ways to choose 20 students to receive exam A, about 28%
result in exactly 15 freshmen and 5 non-freshmen.
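The 28% figure can be confirmed with a one-line computation for each count; the sketch below assumes nothing beyond the counts already derived.

from math import comb

total = comb(40, 20)                     # ways to choose 20 of the 40 students
favorable = comb(30, 15) * comb(10, 5)   # exactly 15 freshmen and 5 non-freshmen

print(favorable)           # 39089615040
print(favorable / total)   # approximately 0.2836, i.e., about 28%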
Countability Thus far, our study of counting has been concerned exclu-
sively with finite sets. However, our subsequent study of probability will
require us to consider sets that are not finite. Toward that end, we intro-
duce the following definitions:
Definition 2.7 A set is infinite if it is not finite.
Definition 2.8 A set is denumerable if its elements can be placed in one-
to-one correspondence with the natural numbers.
Definition 2.9 A set is countable if it is either finite or denumerable.
Definition 2.10 A set is uncountable if it is not countable.
Like Definition 2.4, Definition 2.8 depends on the notion of a one-to-one
correspondence between sets. However, whereas this notion is completely
straightforward when at least one of the sets is finite, it can be rather elu-
sive when both sets are infinite. Accordingly, we provide some examples of
denumerable sets. In each case, we superscript each element of the set in
question with the corresponding natural number.
Example 2.6 Consider the set of even natural numbers, which excludes
one of every two consecutive natural numbers. It might seem that this set
cannot be placed in one-to-one correspondence with the natural numbers in
their entirety; however, infinite sets often possess counterintuitive properties.
Here is a correspondence that demonstrates that this set is denumerable:
2¹, 4², 6³, 8⁴, 10⁵, 12⁶, 14⁷, 16⁸, 18⁹, . . .
Example 2.7 Consider the set of integers. It might seem that this
set, which includes both a positive and a negative copy of each natural
number, cannot be placed in one-to-one correspondence with the natural
numbers; however, here is a correspondence that demonstrates that this set
is denumerable:
. . . , −4⁹, −3⁷, −2⁵, −1³, 0¹, 1², 2⁴, 3⁶, 4⁸, . . .
Example 2.8 Consider the Cartesian product of the set of natural
numbers with itself. This set contains one copy of the entire set of natural
numbers for each natural number—surely it cannot be placed in one-to-one
correspondence with a single copy of the set of natural numbers! In fact, the
following correspondence demonstrates that this set is also denumerable:
(1, 1)¹    (1, 2)²    (1, 3)⁶    (1, 4)⁷    (1, 5)¹⁵    . . .
(2, 1)³    (2, 2)⁵    (2, 3)⁸    (2, 4)¹⁴    (2, 5)¹⁷    . . .
(3, 1)⁴    (3, 2)⁹    (3, 3)¹³    (3, 4)¹⁸    (3, 5)²⁶    . . .
(4, 1)¹⁰    (4, 2)¹²    (4, 3)¹⁹    (4, 4)²⁵    (4, 5)³²    . . .
(5, 1)¹¹    (5, 2)²⁰    (5, 3)²⁴    (5, 4)³³    (5, 5)⁴¹    . . .
⋮           ⋮           ⋮           ⋮           ⋮
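The zigzag traversal of the anti-diagonals shown in the table can be generated mechanically. The following Python sketch, offered only as an illustration, reproduces the indices displayed above and so makes the one-to-one correspondence explicit.

from itertools import count, islice

def zigzag():
    """Yield pairs of natural numbers in the back-and-forth diagonal order of the table."""
    for d in count(2):                                    # d = i + j indexes the anti-diagonals
        rows = range(1, d) if d % 2 == 1 else range(d - 1, 0, -1)
        for i in rows:
            yield (i, d - i)

# Print the first 15 pairs together with their assigned natural numbers.
for n, pair in enumerate(islice(zigzag(), 15), start=1):
    print(n, pair)          # ends with 15 (1, 5), as in the table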
In light of Examples 2.6–2.8, the reader may wonder what is required to
construct a set that is not countable. We conclude this section by remarking
that the following intervals are uncountable sets, where a, b ∈ ℜ and a < b.
(a, b) = {x ∈ ℜ : a < x < b}
[a, b) = {x ∈ ℜ : a ≤ x < b}
(a, b] = {x ∈ ℜ : a < x ≤ b}
[a, b] = {x ∈ ℜ : a ≤ x ≤ b}
We will make frequent use of such sets, often referring to (a, b) as an open
interval and [a, b] as a closed interval.
2.3 Functions
A function is a rule that assigns a unique element of a set B to each element
of another set A. A familiar example is the rule that assigns to each real
number x the real number y = x², e.g., that assigns y = 4 to x = 2. Notice
that each real number has a unique square (y = 4 is the only number that
this rule assigns to x = 2), but that more than one number may have the
same square (y = 4 is also assigned to x = −2).
The set A is the function’s domain. Notice that each element of A must
be assigned some element of B, but that an element of B need not be assigned
to any element of A. Thus, in the preceding example, every x ∈ A = ℜ has a
squared value y ∈ B = ℜ, but not every y ∈ B is the square of some number
x ∈ A. (For example, y = −4 is not the square of any real number.) The
elements of B that are assigned to elements of A constitute the image of the
function. In the preceding example, the image of f(x) = x² is f(ℜ) = [0, ∞).
We will use a variety of letters to denote various types of functions.
Examples include P, X, Y, f, g, F, G, φ. If φ is a function with domain A and
range B, then we write φ : A → B, often pronounced “φ maps A into B”.
If φ assigns b ∈ B to a ∈ A, then we say that b is the value of φ at a and we
write b = φ(a).
If φ : A → B, then for each b ∈ B there is a subset (possibly empty) of
A comprising those elements of A at which φ has value b. We denote this
set by
φ⁻¹(b) = {a ∈ A : φ(a) = b}.
For example, if φ : ℜ → ℜ is the function defined by φ(x) = x², then
φ⁻¹(4) = {−2, 2}.
More generally, if B0 ⊂ B, then
φ⁻¹(B0) = {a ∈ A : φ(a) ∈ B0}.
Using the same example,
φ⁻¹([4, 9]) = {x ∈ ℜ : x² ∈ [4, 9]} = [−3, −2] ∪ [2, 3].
The object φ⁻¹ is called the inverse of φ and φ⁻¹(B0) is called the inverse image of B0.
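For a function with a finite domain, an inverse image can be computed directly from the definition. The sketch below is illustrative only; it uses a small, arbitrarily chosen integer domain to mimic the x² example.

# Inverse images under phi(x) = x*x, for a small illustrative finite domain A.
A = range(-5, 6)
phi = lambda x: x * x

def inverse_image(B0):
    """Return {a in A : phi(a) in B0}."""
    return {a for a in A if phi(a) in B0}

print(inverse_image({4}))                 # {-2, 2}
print(inverse_image(set(range(4, 10))))   # {-3, -2, 2, 3}, the integer points of [-3, -2] and [2, 3]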
2.4 Limits
In Section 2.2 we examined several examples of denumerable sets of real
numbers. In each of these examples, we imposed an order on the set when
we placed it in one-to-one correspondence with the natural numbers. Once
an order has been specified, we can inquire how the set behaves as we progress
through its values in the prescribed sequence. For example, the real numbers
in the ordered denumerable set
{1, 1/2, 1/3, 1/4, 1/5, . . .}    (2.1)
steadily decrease as one progresses through them. Furthermore, as in Zeno’s
famous paradoxes, the numbers seem to approach the value zero without
ever actually attaining it. To describe such sets, it is helpful to introduce
some specialized terminology and notation.
We begin with
Definition 2.11 A sequence of real numbers is an ordered denumerable sub-
set of ℜ.
Sequences are often denoted using a dummy variable that is specified or
understood to index the natural numbers. For example, we might identify
the sequence (2.1) by writing {1/n} for n = 1, 2, 3, . . ..
Next we consider the phenomenon that 1/n approaches 0 as n increases,
although each 1/n > 0. Let ε denote any strictly positive real number. What
we have noticed is the fact that, no matter how small ε may be, eventually
n becomes so large that 1/n < ε. We formalize this observation in
Definition 2.12 Let {yn} denote a sequence of real numbers. We say that
{yn} converges to a constant value c ∈ ℜ if, for every ε > 0, there exists a
natural number N such that yn ∈ (c − ε, c + ε) for each n ≥ N.
If the sequence of real numbers {yn} converges to c, then we say that c is
the limit of {yn} and we write either yn → c as n → ∞ or limn→∞ yn = c.
In particular,
limn→∞ 1/n = 0.
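Definition 2.12 can be explored numerically: given any tolerance ε, one can exhibit an N beyond which every term of {1/n} lies within ε of 0. A minimal sketch, illustrative only:

import math

def N_for(eps):
    """Smallest natural number N with 1/n < eps for every n >= N."""
    return math.floor(1 / eps) + 1

for eps in (0.5, 0.1, 0.001):
    N = N_for(eps)
    # Spot-check the next thousand terms beyond N.
    print(eps, N, all(1 / n < eps for n in range(N, N + 1000)))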
2.5 Exercises
1. A classic riddle inquires:
As I was going to St. Ives,
I met a man with seven wives.
Each wife had seven sacks,
Each sack had seven cats,
Each cat had seven kits.
Kits, cats, sacks, wives—
How many were going to St. Ives?
(a) How many creatures (human and feline) were in the entourage
that the narrator encountered?
(b) What is the answer to the riddle?
2. A well-known carol, “The Twelve Days of Christmas,” describes a
progression of gifts that the singer receives from her true love:
On the first day of Christmas, my true love gave to me:
A partridge in a pear tree.
On the second day of Christmas, my true love gave to me:
Two turtle doves, and a partridge in a pear tree.
Et cetera.2
How many birds did the singer receive from her true love?
3. The throw of an astragalus (see Section 1.4) has four possible outcomes,
{1, 3, 4, 6}. When throwing four astragali,
(a) How many ways are there to obtain a dog, i.e., for each astragalus
to produce a 1?
(b) How many ways are there to obtain a venus, i.e., for each astra-
galus to produce a different outcome?
Hint: Label each astragalus (e.g., antelope, bison, cow, deer) and keep
track of the outcome of each distinct astragalus.
4. When throwing five astragali,
(a) How many ways are there to obtain the throw of child-eating
Cronos, i.e., to obtain three fours and two sixes?
(b) How many ways are there to obtain the throw of Saviour Zeus,
i.e., to obtain one one, two threes, and two fours?
5. The throw of one die has six possible outcomes, {1, 2, 3, 4, 5, 6}. A
medieval poem, “The Chance of the Dyse,” enumerates the fortunes
that could be divined from casting three dice. Order does not matter,
e.g., the fortune associated with 6-5-3 is also associated with 3-5-6.
How many fortunes does the poem enumerate?
6. Suppose that five cards are dealt from a standard deck of playing cards.
(a) How many hands are possible?
(b) How many straight-flush hands are possible?
(c) How many 4-of-a-kind hands are possible?
(d) Why do you suppose that a straight flush beats 4-of-a-kind?
2 You should be able to find the complete lyrics by doing a web search.
7. In the television reality game show Survivor, 16 contestants (the “cast-
aways”) compete for $1 million. The castaways are stranded in a re-
mote location, e.g., an uninhabited island in the China Sea. Initially,
the castaways are divided into two tribes. The tribes compete in a se-
quence of immunity challenges. After each challenge, the losing tribe
must vote out one of its members and that person is eliminated from
the game. Eventually, the tribes merge and the surviving castaways
compete in a sequence of individual immunity challenges. The winner
receives immunity and the merged tribe must then vote out one of its
other members. After the merged tribe has been reduced to two mem-
bers, a jury of the last 7 castaways to have been eliminated votes on
who should be the Sole Survivor and win $1 million. (Technically, the
jury votes for the Sole Survivor, but this is equivalent to eliminating
one of the final two castaways.)
(a) Suppose that we define an outcome of Survivor to be the name
of the Sole Survivor. In any given game of Survivor, how many
outcomes are possible?
(b) Suppose that we define an outcome of Survivor to be a list of
the castaways’ names, arranged in the order in which they were
eliminated. In any given game of Survivor, how many outcomes
are possible?
8. The final eight castaways in Survivor 2: Australian Outback included
four men (Colby, Keith, Nick, and Rodger) and four women (Amber,
Elisabeth, Jerri, and Tina). They participated in a reward challenge
that required them to form four teams of two persons, one male and
one female. (The teams raced over an obstacle course, recording the
time of the slower team member.) The castaways elected to pair off
by drawing lots.
(a) How many ways were there for the castaways to form four teams?
(b) Jerri was opposed to drawing lots—she wanted to team with
Colby. How many ways are there for the castaways to form four
male-female teams if one of the teams is Colby-Jerri?
(c) If all pairings (male-male, male-female, female-female) are al-
lowed, then how many ways are there for the castaways to form
four teams?
9. In Major League Baseball’s World Series, the winners of the National
(N) and American (A) League pennants play a sequence of games. The
first team to win four games wins the Series. Thus, the Series must
last at least four games and can last no more than seven games. Let us
define an outcome of the World Series by identifying which League’s
pennant winner won each game. For example, the outcome of the 1975
World Series, in which the Cincinnati Reds represented the National
League and the Boston Red Sox represented the American League, was
ANNANAN. How many World Series outcomes are possible?
10. The following table defines a function that assigns to each feature film
directed by Sergio Leone the year in which it was released.
A Fistful of Dollars 1964
For a Few Dollars More 1965
The Good, the Bad, and the Ugly 1966
Once Upon a Time in the West 1968
Duck, You Sucker! 1972
Once Upon a Time in America 1984
What is the inverse image of the set known as The Sixties?
11. For n = 0, 1, 2, . . ., let
yn = Σ (k=0 to n) 2⁻ᵏ = 2⁻⁰ + 2⁻¹ + · · · + 2⁻ⁿ.
(a) Compute y0, y1, y2, y3, and y4.
(b) The sequence {y0, y1, y2, . . .} is an example of a sequence of partial
sums. Guess the value of its limit, usually written
limn→∞ yn = limn→∞ Σ (k=0 to n) 2⁻ᵏ = Σ (k=0 to ∞) 2⁻ᵏ.
Chapter 3
Probability
The goal of statistical inference is to draw conclusions about a population
from “representative information” about it. In future chapters, we will dis-
cover that a powerful way to obtain representative information about a pop-
ulation is through the planned introduction of chance. Thus, probability
is the foundation of statistical inference—to study the latter, we must first
study the former. Fortunately, the theory of probability is an especially
beautiful branch of mathematics. Although our purpose in studying proba-
bility is to provide the reader with some tools that will be needed when we
study statistics, we also hope to impart some of the beauty of those tools.
3.1 Interpretations of Probability
Probabilistic statements can be interpreted in different ways. For example,
how would you interpret the following statement?
There is a 40 percent chance of rain today.
Your interpretation is apt to vary depending on the context in which the
statement is made. If the statement was made as part of a forecast by the
National Weather Service, then something like the following interpretation
might be appropriate:
In the recent history of this locality, of all days on which present
atmospheric conditions have been experienced, rain has occurred
on approximately 40 percent of them.
This is an example of the frequentist interpretation of probability. With this
interpretation, a probability is a long-run average proportion of occurrence.
Suppose, however, that you had just peered out a window, wondering
if you should carry an umbrella to school, and asked your roommate if she
thought that it was going to rain. Unless your roommate is studying meteorology,
it is not plausible that she possesses the knowledge required to make
a frequentist statement! If her response was a casual “I’d say that there’s a
40 percent chance,” then something like the following interpretation might
be appropriate:
I believe that it might very well rain, but that it’s a little less
likely to rain than not.
This is an example of the subjectivist interpretation of probability. With
this interpretation, a probability expresses the strength of one’s belief.
The philosopher I. Hacking has observed that dual notions of proba-
bility, one aleatory (frequentist) and one epistemological (subjectivist), have
co-existed throughout history, and that “philosophers seem singularly unable
to put [them] asunder. . . ”1 We shall not attempt so perilous an undertak-
ing. But however we decide to interpret probabilities, we will need a formal
mathematical description of probability to which we can appeal for insight
and guidance. The remainder of this chapter provides an introduction to
the most commonly adopted approach to axiomatic probability. The chap-
ters that follow tend to emphasize a frequentist interpretation of probability,
but the mathematical formalism can also be used with a subjectivist inter-
pretation.
3.2 Axioms of Probability
The mathematical model that has dominated the study of probability was
formalized by the Russian mathematician A. N. Kolmogorov in a monograph
published in 1933. The central concept in this model is a probability space,
which is assumed to have three components:
S A sample space, a universe of “possible” outcomes for the experiment
in question.
1 I. Hacking, The Emergence of Probability, Cambridge University Press, 1975, Chapter 2: Duality.
C A designated collection of “observable” subsets (called events) of the
sample space.
P A probability measure, a function that assigns real numbers (called
probabilities) to events.
We describe each of these components in turn.
The Sample Space The sample space is a set. Depending on the nature
of the experiment in question, it may or may not be easy to decide upon an
appropriate sample space.
Example 3.1 A coin is tossed once.
A plausible sample space for this experiment will comprise two outcomes,
Heads and Tails. Denoting these outcomes by H and T, we have
S = {H, T}.
Remark: We have discounted the possibility that the coin will come to
rest on edge. This is the first example of a theme that will recur throughout
this text, that mathematical models are rarely—if ever—completely faithful
representations of nature. As described by Mark Kac,
“Models are, for the most part, caricatures of reality, but if they
are good, then, like good caricatures, they portray, though per-
haps in distorted manner, some of the features of the real world.
The main role of models is not so much to explain and predict—
though ultimately these are the main functions of science—as to
polarize thinking and to pose sharp questions.”2
In Example 3.1, and in most of the other elementary examples that we will
use to illustrate the fundamental concepts of axiomatic probability, the fi-
delity of our mathematical descriptions to the physical phenomena described
should be apparent. Practical applications of inferential statistics, however,
often require imposing mathematical assumptions that may be suspect. Data
analysts must constantly make judgments about the plausibility of their as-
sumptions, not so much with a view to whether or not the assumptions are
completely correct (they almost never are), but with a view to whether or
not the assumptions are sufficient for the analysis to be meaningful.
2 Mark Kac, “Some mathematical models in science,” Science, 1969, 166:695–699.
Example 3.2 A coin is tossed twice.
A plausible sample space for this experiment will comprise four outcomes,
two outcomes per toss. Here,
S = {HH, HT, TH, TT}.
Example 3.3 An individual’s height is measured.
In this example, it is less clear what outcomes are possible. All human
heights fall within certain bounds, but precisely what bounds should be
specified? And what of the fact that heights are not measured exactly?
Only rarely would one address these issues when choosing a sample space.
For this experiment, most statisticians would choose as the sample space the
set of all real numbers, then worry about which real numbers were actually
observed. Thus, the phrase “possible outcomes” refers to conceptual rather
than practical possibility. The sample space is usually chosen to be mathe-
matically convenient and all-encompassing.
The Collection of Events Events are subsets of the sample space, but
how do we decide which subsets of S should be designated as events? If the
outcome s ∈ S was observed and E ⊂ S is an event, then we say that E
occurred if and only if s ∈ E. A subset of S is observable if it is always
possible for the experimenter to determine whether or not it occurred. Our
intent is that the collection of events should be the collection of observable
subsets. This intent is often tempered by our desire for mathematical con-
venience and by our need for the collection to possess certain mathematical
properties. In practice, the issue of observability is rarely considered and
certain conventional choices are automatically adopted. For example, when
S is a finite set, one usually designates all subsets of S to be events.
Whether or not we decide to grapple with the issue of observability, the
collection of events must satisfy the following properties:
1. The sample space is an event.
2. If E is an event, then Ec is an event.
3. The union of any countable collection of events is an event.
A collection of subsets with these properties is sometimes called a sigma-field.
Taken together, the first two properties imply that both S and ∅ must
be events. If S and ∅ are the only events, then the third property holds;
hence, the collection {S, ∅} is a sigma-field. It is not, however, a very useful
collection of events, as it describes a situation in which the experimental
outcomes cannot be distinguished!
Example 3.1 (continued) To distinguish Heads from Tails, we must
assume that each of these individual outcomes is an event. Thus, the only
plausible collection of events for this experiment is the collection of all subsets
of S, i.e.,
C = {S, {H}, {T}, ∅} .
Example 3.2 (continued) If we designate all subsets of S as events,
then we obtain the following collection:
C = { S,
      {HH, HT, TH}, {HH, HT, TT}, {HH, TH, TT}, {HT, TH, TT},
      {HH, HT}, {HH, TH}, {HH, TT}, {HT, TH}, {HT, TT}, {TH, TT},
      {HH}, {HT}, {TH}, {TT},
      ∅ }.
This is perhaps the most plausible collection of events for this experiment,
but others are also possible. For example, suppose that we were unable
to distinguish the order of the tosses, so that we could not distinguish be-
tween the outcomes HT and TH. Then the collection of events should not
include any subsets that contain one of these outcomes but not the other,
e.g., {HH, TH, TT}. Thus, the following collection of events might be deemed
appropriate:
C = { S,
      {HH, HT, TH}, {HT, TH, TT},
      {HH, TT}, {HT, TH},
      {HH}, {TT},
      ∅ }.
The interested reader should verify that this collection is indeed a sigma-
field.
The Probability Measure Once the collection of events has been des-
ignated, each event E ∈ C can be assigned a probability P(E). This must
be done according to specific rules; in particular, the probability measure P
must satisfy the following properties:
1. If E is an event, then 0 ≤ P(E) ≤ 1.
2. P(S) = 1.
3. If {E1, E2, E3, . . .} is a countable collection of pairwise disjoint events,
then
P(E1 ∪ E2 ∪ E3 ∪ · · ·) = P(E1) + P(E2) + P(E3) + · · · .
We discuss each of these properties in turn.
The first property states that probabilities are nonnegative and finite.
Thus, neither the statement that “the probability that it will rain today
is −.5” nor the statement that “the probability that it will rain today is
infinity” is meaningful. These restrictions have certain mathematical con-
sequences. The further restriction that probabilities are no greater than
unity is actually a consequence of the second and third properties.
The second property states that the probability that an outcome occurs,
that something happens, is unity. Thus, the statement that “the probability
that it will rain today is 2” is not meaningful. This is a convention that
simplifies formulae and facilitates interpretation.
The third property, called countable additivity, is the most interesting.
Consider Example 3.2, supposing that {HT} and {TH} are events and that
we want to compute the probability that exactly one Head is observed, i.e.,
the probability of
{HT} ∪ {TH} = {HT, TH}.
Because {HT} and {TH} are events, their union is an event and therefore
has a probability. Because they are mutually exclusive, we would like that
probability to be
P ({HT, TH}) = P ({HT}) + P ({TH}) .
We ensure this by requiring that the probability of the union of any two
disjoint events is the sum of their respective probabilities.
Having assumed that
A ∩ B = ∅ ⇒ P(A ∪ B) = P(A) + P(B), (3.1)
it is easy to compute the probability of any finite union of pairwise disjoint
events. For example, if A, B, C, and D are pairwise disjoint events, then
P (A ∪ B ∪ C ∪ D) = P (A ∪ (B ∪ C ∪ D))
= P(A) + P (B ∪ C ∪ D)
= P(A) + P (B ∪ (C ∪ D))
= P(A) + P(B) + P (C ∪ D)
= P(A) + P(B) + P(C) + P(D)
Thus, from (3.1) can be deduced the following implication:
If E1, . . . , En are pairwise disjoint events, then
P(E1 ∪ · · · ∪ En) = P(E1) + · · · + P(En).
This implication is known as finite additivity. Notice that the union of
E1, . . . , En must be an event (and hence have a probability) because each
Ei is an event.
An extension of finite additivity, countable additivity is the following
implication:
If E1, E2, E3, . . . are pairwise disjoint events, then
P(E1 ∪ E2 ∪ E3 ∪ · · ·) = P(E1) + P(E2) + P(E3) + · · · .
The reason for insisting upon this extension has less to do with applications
than with theory. Although some axiomatic theories of probability assume
only finite additivity, it is generally felt that the stronger assumption of
countable additivity results in a richer theory. Again, notice that the union
of E1, E2, . . . must be an event (and hence have a probability) because each
Ei is an event.
Finally, we emphasize that probabilities are assigned to events. It may
or may not be that the individual experimental outcomes are events. If
they are, then they will have probabilities. In some such cases (see Chapter
4), the probability of any event can be deduced from the probabilities of the
individual outcomes; in other such cases (see Chapter 5), this is not possible.
All of the facts about probability that we will use in studying statistical
inference are consequences of the assumptions of the Kolmogorov probability
model. It is not the purpose of this book to present derivations of these facts;
however, three elementary (and useful) propositions suggest how one might
proceed along such lines. In each case, a Venn diagram helps to illustrate
the proof.
Theorem 3.1 If E is an event, then
P(Ec) = 1 − P(E).
Figure 3.1: A Venn diagram for the probability of Ec.
Proof Refer to Figure 3.1. Ec is an event because E is an event. By
definition, E and Ec are disjoint events whose union is S. Hence,
1 = P(S) = P(E ∪ Ec) = P(E) + P(Ec)
and the theorem follows upon subtracting P(E) from both sides. ✷
Theorem 3.2 If A and B are events and A ⊂ B, then
P(A) ≤ P(B).
Figure 3.2: A Venn diagram for the probability of A ⊂ B.
Proof Refer to Figure 3.2. Ac is an event because A is an event. Hence,
B ∩ Ac is an event and
B = A ∪ (B ∩ Ac).
Because A and B ∩ Ac are disjoint events,
P(B) = P(A) + P(B ∩ Ac) ≥ P(A),
as claimed. ✷
Theorem 3.3 If A and B are events, then
P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
Figure 3.3: A Venn diagram for the probability of A ∪ B.
Proof Refer to Figure 3.3. Both A ∪ B and A ∩ B = (Ac ∪ Bc)c are events
because A and B are events. Similarly, A ∩ Bc and B ∩ Ac are also events.
Notice that A ∩ Bc, B ∩ Ac, and A ∩ B are pairwise disjoint events. Hence,
P(A) + P(B) − P(A ∩ B)
= P((A ∩ Bc) ∪ (A ∩ B)) + P((B ∩ Ac) ∪ (A ∩ B)) − P(A ∩ B)
= P(A ∩ Bc) + P(A ∩ B) + P(B ∩ Ac) + P(A ∩ B) − P(A ∩ B)
= P(A ∩ Bc) + P(A ∩ B) + P(B ∩ Ac)
= P((A ∩ Bc) ∪ (A ∩ B) ∪ (B ∩ Ac))
= P(A ∪ B),
as claimed. ✷
Theorem 3.3 provides a general formula for computing the probability
of the union of two sets. Notice that, if A and B are in fact disjoint, then
P(A ∩ B) = P(∅) = P(Sc) = 1 − P(S) = 1 − 1 = 0
and we recover our original formula for that case.
3.3 Finite Sample Spaces
Let
S = {s1, . . . , sN }
denote a sample space that contains N outcomes and suppose that every
subset of S is an event. For notational convenience, let
pi = P ({si})
denote the probability of outcome i, for i = 1, . . . , N. Then, for any event
A, we can write
P(A) = P( ∪si∈A {si} ) = Σsi∈A P({si}) = Σsi∈A pi.    (3.2)
Thus, if the sample space is finite, then the probabilities of the individual
outcomes determine the probability of any event. The same reasoning applies
if the sample space is denumerable.
In this section, we focus on an important special case of finite probability
spaces, the case of “equally likely” outcomes. By a fair coin, we mean a
coin that when tossed is equally likely to produce Heads or Tails, i.e., the
probability of each of the two possible outcomes is 1/2. By a fair die, we
mean a die that when tossed is equally likely to produce any of six possible
outcomes, i.e., the probability of each outcome is 1/6. In general, we say
that the outcomes of a finite sample space are equally likely if
pi = 1/N    (3.3)
for i = 1, . . . , N.
In the case of equally likely outcomes, we substitute (3.3) into (3.2) and
obtain
P(A) = Σsi∈A (1/N) = (Σsi∈A 1)/N = #(A)/#(S).    (3.4)
This equation reveals that, when the outcomes in a finite sample space are
equally likely, calculating probabilities is just a matter of counting. The
counting may be quite difficult, but the probability is trivial. We illustrate
this point with some examples.
Example 3.4 A fair coin is tossed twice. What is the probability of
observing exactly one Head?
The sample space for this experiment was described in Example 3.2.
Because the coin is fair, each of the four outcomes in S is equally likely. Let
A denote the event that exactly one Head is observed. Then A = {HT, TH}
and
P(A) = #(A)/#(S) = 2/4 = 1/2 = 0.5.
Example 3.5 A fair die is tossed once. What is the probability that
the number of dots on the top face of the die is a prime number?
The sample space for this experiment is S = {1, 2, 3, 4, 5, 6}. Because the
die is fair, each of the six outcomes in S is equally likely. Let A denote the
event that a prime number is observed. If we agree to count 1 as a prime
number, then A = {1, 2, 3, 5} and
P(A) = #(A)/#(S) = 4/6 = 2/3.
Example 3.6 A deck of 40 cards, labelled 1,2,3,. . . ,40, is shuffled and
cards are dealt as specified in each of the following scenarios.
(a) One hand of four cards is dealt to Arlen. What is the probability that
Arlen’s hand contains four even numbers?
Let S denote the possible hands that might be dealt. Because the
order in which the cards are dealt is not important,
#(S) = (40 choose 4).
Let A denote the event that the hand contains four even numbers. There
are 20 even cards, so the number of ways of dealing 4 even cards is
#(A) = (20 choose 4).
Substituting these expressions into (3.4), we obtain
P(A) = #(A)/#(S) = (20 choose 4)/(40 choose 4) = 51/962 ≈ 0.0530.
(b) One hand of four cards is dealt to Arlen. What is the probability that
this hand is a straight, i.e., that it contains four consecutive numbers?
Let S denote the possible hands that might be dealt. Again,
#(S) = C(40, 4).
Let A denote the event that the hand is a straight. The possible
straights are:
1-2-3-4
2-3-4-5
3-4-5-6
...
37-38-39-40
By simple enumeration (just count the number of ways of choosing the
smallest number in the straight), there are 37 such hands. Hence,
P(A) = #(A)/#(S) = 37/C(40, 4) = 1/2470 ≈ 0.0004.
(c) One hand of four cards is dealt to Arlen and a second hand of four
cards is dealt to Mike. What is the probability that Arlen’s hand is a
straight and Mike’s hand contains four even numbers?
Let S denote the possible pairs of hands that might be dealt. Dealing
the first hand requires choosing 4 cards from 40. After this hand has
been dealt, the second hand requires choosing an additional 4 cards
from the remaining 36. Hence,
#(S) = C(40, 4) · C(36, 4).
Let A denote the event that Arlen’s hand is a straight and Mike’s hand
contains four even numbers. There are 37 ways for Arlen’s hand to be
a straight. Each straight contains 2 even numbers, leaving 18 even
numbers available for Mike’s hand. Thus, for each way of dealing a
straight to Arlen, there are C(18, 4) ways of dealing 4 even numbers to
Mike. Hence,
P(A) = #(A)/#(S) = 37 · C(18, 4) / [C(40, 4) · C(36, 4)] ≈ 2.1032 × 10⁻⁵.
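All three card-hand probabilities can be checked with binomial coefficients. A Python sketch using math.comb (available in Python 3.8 and later; variable names are illustrative):

    from math import comb

    # (a) Four even cards from a 40-card deck that contains 20 even cards.
    p_a = comb(20, 4) / comb(40, 4)                        # 51/962, about 0.0530

    # (b) A straight: 37 possible straights, one for each smallest card 1, ..., 37.
    p_b = 37 / comb(40, 4)                                 # 1/2470, about 0.0004

    # (c) Arlen's hand is a straight and Mike's hand contains four even numbers.
    p_c = 37 * comb(18, 4) / (comb(40, 4) * comb(36, 4))   # about 2.1e-5

    print(p_a, p_b, p_c)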
Example 3.7 Five fair dice are tossed simultaneously.
Let S denote the possible outcomes of this experiment. Each die has 6
possible outcomes, so
#(S) = 6 · 6 · 6 · 6 · 6 = 6⁵.
(a) What is the probability that the top faces of the dice all show the same
number of dots?
Let A denote the specified event; then A comprises the following out-
comes:
1-1-1-1-1
2-2-2-2-2
3-3-3-3-3
4-4-4-4-4
5-5-5-5-5
6-6-6-6-6
By simple enumeration, #(A) = 6. (Another way to obtain #(A) is
to observe that the first die might result in any of six numbers, after
which only one number is possible for each of the four remaining dice.
Hence, #(A) = 6 · 1 · 1 · 1 · 1 = 6.) It follows that
P(A) = #(A)/#(S) = 6/6⁵ = 1/1296 ≈ 0.0008.
(b) What is the probability that the top faces of the dice show exactly four
different numbers?
Let A denote the specified event. If there are exactly 4 different num-
bers, then exactly 1 number must appear twice. There are 6 ways to
choose the number that appears twice and C(5, 2) ways to choose the two
dice on which this number appears. There are 5 · 4 · 3 ways to choose
the 3 different numbers on the remaining dice. Hence,
P(A) = #(A)/#(S) = 6 · C(5, 2) · 5 · 4 · 3 / 6⁵ = 25/54 ≈ 0.4630.
(c) What is the probability that the top faces of the dice show exactly three
6’s or exactly two 5’s?
Let A denote the event that exactly three 6’s are observed and let B
denote the event that exactly two 5’s are observed. We must calculate
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = [#(A) + #(B) − #(A ∩ B)] / #(S).
There are C(5, 3) ways of choosing the three dice on which a 6 appears and
5 · 5 ways of choosing a different number for each of the two remaining
dice. Hence,
#(A) = C(5, 3) · 5².
There are C(5, 2) ways of choosing the two dice on which a 5 appears
and 5 · 5 · 5 ways of choosing a different number for each of the three
remaining dice. Hence,
#(B) = C(5, 2) · 5³.
There are C(5, 3) ways of choosing the three dice on which a 6 appears and
only 1 way in which a 5 can then appear on the two remaining dice.
Hence,
#(A ∩ B) = C(5, 3) · 1.
Thus,
P(A ∪ B) = [C(5, 3) · 5² + C(5, 2) · 5³ − C(5, 3)] / 6⁵ = 1490/6⁵ ≈ 0.1916.
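Because 6⁵ = 7776 is a small number of outcomes, all three dice probabilities can also be verified by direct enumeration. A Python sketch (variable names are illustrative):

    from itertools import product

    S = list(product(range(1, 7), repeat=5))   # all 6**5 equally likely outcomes
    N = len(S)

    # (a) All five dice show the same number.
    p_a = sum(len(set(s)) == 1 for s in S) / N                      # about 0.0008

    # (b) Exactly four different numbers appear.
    p_b = sum(len(set(s)) == 4 for s in S) / N                      # 25/54, about 0.4630

    # (c) Exactly three 6's or exactly two 5's.
    p_c = sum(s.count(6) == 3 or s.count(5) == 2 for s in S) / N    # 1490/7776, about 0.1916

    print(p_a, p_b, p_c)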
Example 3.8 (The Birthday Problem) In a class of k students,
what is the probability that at least two students share a common birthday?
As is inevitably the case with constructing mathematical models of actual
phenomena, some simplifying assumptions are required to make this problem
tractable. We begin by assuming that there are 365 possible birthdays, i.e.,
we ignore February 29. Then the sample space, S, of possible birthdays for
k students comprises 365ᵏ outcomes.
Next we assume that each of the 365ᵏ outcomes is equally likely. This is
not literally correct, as slightly more babies are born in some seasons than
in others. Furthermore, if the class contains twins, then only certain pairs of
birthdays are possible outcomes for those two students! In most situations,
however, the assumption of equally likely outcomes is reasonably plausible.
Let A denote the event that at least two students in the class share a
birthday. We might attempt to calculate
P(A) = #(A)/#(S),
but a moment’s reflection should convince the reader that counting the num-
ber of outcomes in A is an extremely difficult undertaking. Instead, we invoke
Theorem 3.1 and calculate
P(A) = 1 − P(Aᶜ) = 1 − #(Aᶜ)/#(S).
This is considerably easier, because we count the number of outcomes in
which each student has a different birthday by observing that 365 possible
birthdays are available for the oldest student, after which 364 possible birth-
days remain for the next oldest student, after which 363 possible birthdays
remain for the next, etc. The formula is
#(Aᶜ) = 365 · 364 · · · (366 − k)
and so
P(A) = 1 − [365 · 364 · · · (366 − k)] / (365 · 365 · · · 365).
The reader who computes P(A) for several choices of k may be astonished to
discover that a class of just k = 23 students is required to obtain P(A) > 0.5!
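The calculation is easy to carry out for any k. A Python sketch (the function name is illustrative):

    def p_shared_birthday(k):
        """P(at least two of k students share a birthday), ignoring February 29."""
        p_all_different = 1.0
        for i in range(k):
            p_all_different *= (365 - i) / 365
        return 1 - p_all_different

    print(p_shared_birthday(23))   # about 0.507, already greater than 0.5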
3.4 Conditional Probability
Consider a sample space with 10 equally likely outcomes, together with the
events indicated in the Venn diagram that appears in Figure 3.4. Applying
the methods of Section 3.3, we find that the (unconditional) probability of
A is
P(A) = #(A)/#(S) = 3/10 = 0.3.
Suppose, however, that we know that we can restrict attention to the ex-
perimental outcomes that lie in B. Then the conditional probability of the
event A given the occurrence of the event B is
P(A|B) = #(A ∩ B)/#(S ∩ B) = 1/5 = 0.2.
Notice that (for this example) the conditional probability, P(A|B), differs
from the unconditional probability, P(A).
[Figure 3.4: A Venn diagram that illustrates conditional probability. Each ⋆ represents an individual outcome; the sample space S contains ten outcomes, five of which lie in B and one of which lies in both A and B.]
To develop a definition of conditional probability that is not specific to
finite sample spaces with equally likely outcomes, we now write
P(A|B) = #(A ∩ B)/#(S ∩ B) = [#(A ∩ B)/#(S)] / [#(B)/#(S)] = P(A ∩ B)/P(B).
We take this as a definition:
Definition 3.1 If A and B are events, and P(B) > 0, then
P(A|B) = P(A ∩ B)/P(B).   (3.5)
The following consequence of Definition 3.1 is extremely useful. Upon
multiplication of equation (3.5) by P(B), we obtain
P(A ∩ B) = P(B)P(A|B)
when P(B) > 0. Furthermore, upon interchanging the roles of A and B, we
obtain
P(A ∩ B) = P(B ∩ A) = P(A)P(B|A)
when P(A) > 0. We will refer to these equations as the multiplication rule
for conditional probability.
Used in conjunction with tree diagrams, the multiplication rule provides a
powerful tool for analyzing situations that involve conditional probabilities.
Example 3.9 Consider three fair coins, identical except that one coin
(HH) is Heads on both sides, one coin (HT) is Heads on one side and Tails
on the other, and one coin (TT) is Tails on both sides. A coin is selected
at random and tossed. The face-up side of the coin is Heads. What is the
probability that the face-down side of the coin is Heads?
This problem was once considered by Marilyn vos Savant in her syndi-
cated column, Ask Marilyn. As have many of the probability problems that
she has considered, it generated a good deal of controversy. Many readers
reasoned as follows:
1. The observation that the face-up side of the tossed coin is Heads means
that the selected coin was not TT. Hence the selected coin was either
HH or HT.
2. If HH was selected, then the face-down side is Heads; if HT was selected,
then the face-down side is Tails.
3. Hence, there is a 1 in 2, or 50 percent, chance that the face-down side
is Heads.
At first glance, this reasoning seems perfectly plausible and readers who
advanced it were dismayed that Marilyn insisted that .5 is not the correct
probability. How did these readers err?
A tree diagram of this experiment is depicted in Figure 3.5. The branches
represent possible outcomes and the numbers associated with the branches
are the respective probabilities of those outcomes. The initial triple of
branches represents the initial selection of a coin—we have interpreted "at
random" to mean that each coin is equally likely to be selected. The second
level of branches represents the toss of the coin by identifying its resulting
up-side. For HH and TT, only one outcome is possible; for HT, there are two
equally likely outcomes. Finally, the third level of branches represents the
down-side of the tossed coin. In each case, this outcome is determined by
the up-side.

[Figure 3.5: A tree diagram for Example 3.9. The first level of branches selects a coin (coin=HH, coin=HT, coin=TT, each with probability 1/3); the second level gives the up-side (up=H with probability 1 for HH and 1/2 for HT; up=T with probability 1/2 for HT and 1 for TT); the third level gives the down-side, which is determined by the up-side. The four paths have probabilities 1/3, 1/6, 1/6, and 1/3.]
The multiplication rule for conditional probability makes it easy to calcu-
late the probabilities of the various paths through the tree. The probability
that HT is selected and the up-side is Heads and the down-side is Tails is
P(HT ∩ up=H ∩ down=T) = P(HT ∩ up=H) · P(down=T|HT ∩ up=H)
= P(HT) · P(up=H|HT) · 1
= (1/3) · (1/2) · 1
= 1/6
and the probability that HH is selected and the up-side is Heads and the
down-side is Heads is
P(HH ∩ up=H ∩ down=H) = P(HH ∩ up=H) · P(down=H|HH ∩ up=H)
= P(HH) · P(up=H|HH) · 1
= (1/3) · 1 · 1
= 1/3.
Once these probabilities have been computed, it is easy to answer the original
question:
P(down=H|up=H) = P(down=H ∩ up=H)/P(up=H) = (1/3) / [(1/3) + (1/6)] = 2/3,
which was Marilyn’s answer.
From the tree diagram, we can discern the fallacy in our first line of
reasoning. Having narrowed the possible coins to HH and HT, we claimed
that HH and HT were equally likely candidates to have produced the observed
Head. In fact, HH was twice as likely as HT. Once this fact is noted it seems
completely intuitive (HH has twice as many Heads as HT), but it is easily
overlooked. This is an excellent example of how the use of tree diagrams
may prevent subtle errors in reasoning.
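Another way to convince a skeptical reader is to simulate the experiment. A Monte Carlo sketch in Python (illustrative names, not part of the formal development) that estimates P(down=H|up=H):

    import random

    # The three coins, each listed by its two faces.
    coins = [("H", "H"), ("H", "T"), ("T", "T")]

    n_up_heads = 0
    n_both_heads = 0
    for _ in range(1_000_000):
        faces = random.choice(coins)     # select a coin at random
        up = random.randrange(2)         # toss it: either face may land up
        down = 1 - up
        if faces[up] == "H":
            n_up_heads += 1
            if faces[down] == "H":
                n_both_heads += 1

    print(n_both_heads / n_up_heads)     # close to 2/3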
Example 3.10 (Bayes Theorem) An important application of con-
ditional probability can be illustrated by considering a population of patients
at risk for contracting the HIV virus. The population can be partitioned into
two sets: those who have contracted the virus and developed antibodies to
it, and those who have not contracted the virus and lack antibodies to it.
We denote the first set by D and the second set by Dᶜ.
An ELISA test was designed to detect the presence of HIV antibodies in
human blood. This test also partitions the population into two sets: those
who test positive for HIV antibodies and those who test negative for HIV
antibodies. We denote the first set by + and the second set by −.
Together, the partitions induced by the true disease state and by the
observed test outcome partition the population into four sets, as in the
following Venn diagram:
D ∩ +      D ∩ −
Dᶜ ∩ +     Dᶜ ∩ −          (3.6)
In two of these cases, D ∩ + and Dᶜ ∩ −, the test provides the correct
diagnosis; in the other two cases, Dᶜ ∩ + and D ∩ −, the test results in a
diagnostic error. We call Dᶜ ∩ + a false positive and D ∩ − a false negative.
In such situations, several quantities are likely to be known, at least
approximately. The medical establishment is likely to have some notion of
P(D), the probability that a patient selected at random from the popula-
tion is infected with HIV. This is the proportion of the population that is
infected—it is called the prevalence of the disease. For the calculations that
follow, we will assume that P(D) = .001.
Because diagnostic procedures undergo extensive evaluation before they
are approved for general use, the medical establishment is likely to have a
fairly precise notion of the probabilities of false positive and false negative
test results. These probabilities are conditional: a false positive is a positive
test result within the set of patients who are not infected and a false negative
is a negative test result within the set of patients who are infected. Thus,
the probability of a false positive is P(+|Dᶜ) and the probability of a false
negative is P(−|D). For the calculations that follow, we will assume that
P(+|Dᶜ) = .015 and P(−|D) = .003.³
Now suppose that a randomly selected patient has a positive ELISA test
result. Obviously, the patient has an extreme interest in properly assessing
the chances that a diagnosis of HIV is correct. This can be expressed as
P(D|+), the conditional probability that a patient has HIV given a positive
ELISA test. This quantity is called the predictive value of the test.
[Figure 3.6: A tree diagram for Example 3.10. The first branch is on disease state: D with probability 0.001, Dᶜ with probability 0.999. The second branch is on test result: P(+|D) = 0.997, P(−|D) = 0.003, P(+|Dᶜ) = 0.015, P(−|Dᶜ) = 0.985.]
To motivate our calculation of P(D|+), it is again helpful to construct
a tree diagram, as in Figure 3.6. This diagram was constructed so that the
branches depicted in the tree have known probabilities, i.e., we first branch
on the basis of disease state because P(D) and P(Dᶜ) are known, then on
the basis of test result because P(+|D), P(−|D), P(+|Dᶜ), and P(−|Dᶜ) are
known. Notice that each of the four paths in the tree corresponds to exactly
one of the four sets in (3.6). Furthermore, we can calculate the probability of
³See E.M. Sloan et al. (1991), "HIV Testing: State of the Art," Journal of the American Medical Association, 266:2861–2866.
each set by multiplying the probabilities that occur along its corresponding
path:
P(D ∩ +) = P(D) · P(+|D) = 0.001 · 0.997,
P(D ∩ −) = P(D) · P(−|D) = 0.001 · 0.003,
P(Dᶜ ∩ +) = P(Dᶜ) · P(+|Dᶜ) = 0.999 · 0.015,
P(Dᶜ ∩ −) = P(Dᶜ) · P(−|Dᶜ) = 0.999 · 0.985.
The predictive value of the test is now obtained by computing
P(D|+) = P(D ∩ +)/P(+) = P(D ∩ +) / [P(D ∩ +) + P(Dᶜ ∩ +)]
       = (0.001 · 0.997) / (0.001 · 0.997 + 0.999 · 0.015) ≈ 0.0624.
This probability may seem quite small, but consider that a positive test
result can be obtained in two ways. If the person has the HIV virus, then a
positive result is obtained with high probability, but very few people actually
have the virus. If the person does not have the HIV virus, then a positive
result is obtained with low probability, but so many people do not have the
virus that the combined number of false positives is quite large relative to
the number of true positives. This is a common phenomenon when screening
for diseases.
The preceding calculations can be generalized and formalized in a formula
known as Bayes Theorem; however, because such calculations will not play an
important role in this book, we prefer to emphasize the use of tree diagrams
to derive the appropriate calculations on a case-by-case basis.
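For readers who want to experiment with other prevalences and error rates, the tree-diagram calculation is easily packaged as a small function. A Python sketch (function and argument names are illustrative) that reproduces the 0.0624 predictive value:

    def predictive_value(prevalence, p_false_positive, p_false_negative):
        """P(D | +), obtained from the two paths of the tree that end in +."""
        p_true_positive = prevalence * (1 - p_false_negative)      # P(D and +)
        p_false_alarm = (1 - prevalence) * p_false_positive        # P(not-D and +)
        return p_true_positive / (p_true_positive + p_false_alarm)

    print(predictive_value(0.001, 0.015, 0.003))   # about 0.0624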
Independence We now introduce a concept that is of fundamental im-
portance in probability and statistics. The intuitive notion that we wish to
formalize is the following:
Two events are independent if the occurrence of either is unaf-
fected by the occurrence of the other.
This notion can be expressed mathematically using the concept of condi-
tional probability. Let A and B denote events and assume for the moment
that the probability of each is strictly positive. If A and B are to be regarded
as independent, then the occurrence of A is not affected by the occurrence
of B. This can be expressed by writing
P(A|B) = P(A). (3.7)
Similarly, the occurrence of B is not affected by the occurrence of A. This
can be expressed by writing
P(B|A) = P(B). (3.8)
Substituting the definition of conditional probability into (3.7) and mul-
tiplying by P(B) leads to the equation
P(A ∩ B) = P(A) · P(B).
Substituting the definition of conditional probability into (3.8) and multi-
plying by P(A) leads to the same equation. We take this equation, called
the multiplication rule for independence, as a definition:
Definition 3.2 Two events A and B are independent if and only if
P(A ∩ B) = P(A) · P(B).
We proceed to explore some consequences of this definition.
Example 3.11 Notice that we did not require P(A) > 0 or P(B) > 0 in
Definition 3.2. Suppose that P(A) = 0 or P(B) = 0, so that P(A)·P(B) = 0.
Because A ∩ B ⊂ A, P(A ∩ B) ≤ P(A); similarly, P(A ∩ B) ≤ P(B). It
follows that
0 ≤ P(A ∩ B) ≤ min(P(A), P(B)) = 0
and therefore that
P(A ∩ B) = 0 = P(A) · P(B).
Thus, if either of two events has probability zero, then the events are neces-
sarily independent.
Example 3.12 Consider the disjoint events depicted in Figure 3.7 and
suppose that P(A) > 0 and P(B) > 0. Are A and B independent? Many
students instinctively answer that they are, but independence is very dif-
ferent from mutual exclusivity. In fact, if A occurs then B does not (and
vice versa), so Figure 3.7 is actually a fairly extreme example of dependent
events. This can also be deduced from Definition 3.2: P(A) · P(B) > 0, but
P(A ∩ B) = P(∅) = 0
so A and B are not independent.
[Figure 3.7: A Venn diagram for Example 3.12, showing two disjoint events A and B inside the sample space S.]
Example 3.13 For each of the following, explain why the events A and
B are or are not independent.
(a) P(A) = 0.4, P(B) = 0.5, P([A ∪ B]ᶜ) = 0.3.
It follows that
P(A ∪ B) = 1 − P([A ∪ B]ᶜ) = 1 − 0.3 = 0.7
and, because P(A ∪ B) = P(A) + P(B) − P(A ∩ B), that
P(A ∩ B) = P(A) + P(B) − P(A ∪ B) = 0.4 + 0.5 − 0.7 = 0.2.
Then, since
P(A) · P(B) = 0.5 · 0.4 = 0.2 = P(A ∩ B),
it follows that A and B are independent events.
(b) P(A ∩ Bᶜ) = 0.3, P(Aᶜ ∩ B) = 0.2, P(Aᶜ ∩ Bᶜ) = 0.1.
Refer to the Venn diagram in Figure 3.8 to see that
P(A) · P(B) = 0.7 · 0.6 = 0.42 ≠ 0.40 = P(A ∩ B)
and hence that A and B are dependent events.
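Checks of this kind are purely arithmetic, so they are easy to script. A Python sketch for part (b), with illustrative variable names:

    # Cell probabilities given in Example 3.13(b).
    p_A_and_notB = 0.3
    p_notA_and_B = 0.2
    p_notA_and_notB = 0.1
    p_A_and_B = 1 - (p_A_and_notB + p_notA_and_B + p_notA_and_notB)   # 0.4

    p_A = p_A_and_B + p_A_and_notB   # 0.7
    p_B = p_A_and_B + p_notA_and_B   # 0.6

    # A and B are independent if and only if P(A and B) = P(A) * P(B).
    print(p_A * p_B, p_A_and_B)      # 0.42 versus 0.40, so A and B are dependent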
[Figure 3.8: A Venn diagram for Example 3.13. The regions A ∩ Bᶜ, Aᶜ ∩ B, and Aᶜ ∩ Bᶜ are labelled with probabilities 0.3, 0.2, and 0.1, respectively.]
Thus far we have verified that two events are independent by verifying
that the multiplication rule for independence holds. In applications, how-
ever, we usually reason somewhat differently. Using our intuitive notion of
independence, we appeal to common sense, our knowledge of science, etc.,
to decide if independence is a property that we wish to incorporate into our
mathematical model of the experiment in question. If it is, then we assume
that two events are independent and the multiplication rule for independence
becomes available to us for use as a computational formula.
Example 3.14 Consider an experiment in which a typical penny is first
tossed, then spun. Let A denote the event that the toss results in Heads and
let B denote the event that the spin results in Heads. What is the probability
of observing two Heads?
We assume that, for a typical penny, P(A) = 0.5 and P(B) = 0.3 (see
Section 1.1.1). Common sense tells us that the occurrence of either event
is unaffected by the occurrence of the other. (Time is not reversible, so
obviously the occurrence of A is not affected by the occurrence of B. One
might argue that tossing the penny so that A occurs results in wear that
is slightly different than the wear that results if Ac occurs, thereby slightly
affecting the subsequent probability that B occurs. However, this argument
strikes most students as completely preposterous. Even if it has a modicum
of validity, the effect is undoubtedly so slight that we can safely neglect it
in constructing our mathematical model of the experiment.) Therefore, we
assume that A and B are independent and calculate that
P(A ∩ B) = P(A) · P(B) = 0.5 · 0.3 = 0.15.
Example 3.15 For each of the following, explain why the events A and
B are or are not independent.
(a) Consider the population of William & Mary undergraduate students,
from which one student is selected at random. Let A denote the event
that the student is female and let B denote the event that the student
is concentrating in elementary education.
I’m told that P(A) is roughly 60 percent, while it appears to me that
P(A|B) exceeds 90 percent. Whatever the exact probabilities, it is
evident that the probability that a random elementary education con-
centrator is female is considerably greater than the probability that a
random student is female. Hence, A and B are dependent events.
(b) Consider the population of registered voters, from which one voter is
selected at random. Let A denote the event that the voter belongs to a
country club and let B denote the event that the voter is a Republican.
It is generally conceded that one finds a greater proportion of Repub-
licans among the wealthy than in the general population. Since one
tends to find a greater proportion of wealthy persons at country clubs
than in the general population, it follows that the probability that a
random country club member is a Republican is greater than the prob-
ability that a randomly selected voter is a Republican. Hence, A and
B are dependent events.⁴

⁴This phenomenon may seem obvious, but it was overlooked by the respected Literary
Digest poll. Their embarrassingly awful prediction of the 1936 presidential election resulted
in the previously popular magazine going out of business. George Gallup’s relatively
accurate prediction of the outcome (and his uncannily accurate prediction of what the
Literary Digest poll would predict) revolutionized polling practices.
Before progressing further, we ask what it should mean for A, B,
and C to be three mutually independent events. Certainly each pair should
comprise two independent events, but we would also like to write
P(A ∩ B ∩ C) = P(A) · P(B) · P(C).
It turns out that this equation cannot be deduced from the pairwise inde-
pendence of A, B, and C, so we have to include it in our definition of mutual
independence. Similar equations must be included when defining the mutual
independence of more than three events. Here is a general definition:
Definition 3.3 Let {Aα} be an arbitrary collection of events. These events
are mutually independent if and only if, for every finite choice of events
Aα1, . . . , Aαk,
P(Aα1 ∩ · · · ∩ Aαk) = P(Aα1) · · · P(Aαk).
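The remark preceding Definition 3.3, that the triple-product equation cannot be deduced from pairwise independence, can be illustrated with a standard example (not drawn from the text): toss a fair coin twice and consider the three events described in the comments of the following Python sketch.

    from itertools import product

    S = list(product("HT", repeat=2))                  # four equally likely outcomes

    def P(event):
        return sum(1 for s in S if event(s)) / len(S)

    A = lambda s: s[0] == "H"                          # first toss is Heads
    B = lambda s: s[1] == "H"                          # second toss is Heads
    C = lambda s: s[0] == s[1]                         # the two tosses agree

    # Each pair satisfies the multiplication rule for independence ...
    print(P(lambda s: A(s) and B(s)), P(A) * P(B))     # 0.25 0.25
    print(P(lambda s: A(s) and C(s)), P(A) * P(C))     # 0.25 0.25
    print(P(lambda s: B(s) and C(s)), P(B) * P(C))     # 0.25 0.25

    # ... but the triple does not, so A, B, C are not mutually independent.
    print(P(lambda s: A(s) and B(s) and C(s)), P(A) * P(B) * P(C))   # 0.25 0.125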
Example 3.16 In the preliminary hearing for the criminal trial of O.J.
Simpson, the prosecution presented conventional blood-typing evidence that
blood found at the murder scene possessed three characteristics also pos-
sessed by Simpson’s blood. The prosecution also presented estimates of the
prevalence of each characteristic in the general population, i.e., of the proba-
bilities that a person selected at random from the general population would
possess these characteristics. Then, to obtain the estimated probability that
a randomly selected person would possess all three characteristics, the pros-
ecution multiplied the three individual probabilities, resulting in an estimate
of 0.005.
In response to this evidence, defense counsel Gerald Uehlman objected
that the prosecution had not established that the three events in question
were independent and therefore had not justified their use of the multipli-
cation rule. The prosecution responded that it was standard practice to
multiply such probabilities and Judge Kennedy-Powell admitted the 0.005
estimate on that basis. No attempt was made to assess whether or not the
standard practice was proper; it was inferred from the fact that the practice
was standard that it must be proper. In this example, science and law di-
verge. From a scientific perspective, Gerald Uehlman was absolutely correct
in maintaining that an assumption of independence must be justified.
3.5 Random Variables
Informally, a random variable is a rule for assigning real numbers to exper-
imental outcomes. By convention, random variables are usually denoted by
upper case Roman letters near the end of the alphabet, e.g., X, Y , Z.
Example 3.17 A coin is tossed once and Heads (H) or Tails (T) is
observed.
The sample space for this experiment is S = {H, T}. For reasons that
will become apparent, it is often convenient to assign the real number 1 to
Heads and the real number 0 to Tails. This assignment, which we denote
by the random variable X, can be depicted as follows:
H    X    1
T   −→    0
In functional notation, X : S → ℜ and the rule of assignment is defined by
X(H) = 1,
X(T) = 0.
Example 3.18 A coin is tossed twice and the number of Heads is
counted.
The sample space for this experiment is S = {HH, HT, TH, TT}. We want
to assign the real number 2 to the outcome HH, the real number 1 to the
outcomes HT and TH, and the real number 0 to the outcome TT. Several
representations of this assignment are possible:
(a) Direct assignment, which we denote by the random variable Y , can be
depicted as follows:
HH  HT    Y    2  1
TH  TT   −→    1  0
In functional notation, Y : S → ℜ and the rule of assignment is defined
by
Y (HH) = 2,
Y (HT) = Y (TH) = 1,
Y (TT) = 0.
(b) Instead of directly assigning the counts, we might take the intermediate
step of assigning an ordered pair of numbers to each outcome. As in
Example 3.17, we assign 1 to each occurrence of Heads and 0 to each
occurrence of Tails. We denote this assignment by X : S → ℜ². In
this context, X = (X1, X2) is called a random vector. Each component
of the random vector X is a random variable.
Next, we define a function g : ℜ² → ℜ by
g(x1, x2) = x1 + x2.
The composition g(X) is equivalent to the random variable Y , as re-
vealed by the following depiction:
HH  HT    X    (1, 1)  (1, 0)    g    2  1
TH  TT   −→    (0, 1)  (0, 0)   −→    1  0
(c) The preceding representation suggests defining two random variables,
X1 and X2, as in the following depiction:
1  1    X1    HH  HT    X2    1  0
0  0   ←−     TH  TT   −→     1  0
As in the preceding representation, the random variable X1 counts the
number of Heads observed on the first toss and the random variable X2
counts the number of Heads observed on the second toss. The sum of
these random variables, X1 +X2, is evidently equivalent to the random
variable Y .
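These representations are easy to mimic in code, where a random variable is literally a function on the sample space. A Python sketch with illustrative names:

    # Sample space of Example 3.18, with each outcome written as a string.
    S = ["HH", "HT", "TH", "TT"]

    # X1 and X2 count Heads on the first and second toss; Y = X1 + X2.
    def X1(s): return 1 if s[0] == "H" else 0
    def X2(s): return 1 if s[1] == "H" else 0
    def Y(s):  return X1(s) + X2(s)

    for s in S:
        print(s, X1(s), X2(s), Y(s))   # HH 1 1 2, HT 1 0 1, TH 0 1 1, TT 0 0 0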
The primary reason that we construct a random variable, X, is to replace
the probability space that is naturally suggested by the experiment in ques-
tion with a familiar probability space in which the possible outcomes are real
numbers. Thus, we replace the original sample space, S, with the familiar
number line, ℜ. To complete the transference, we must decide which subsets
of ℜ will be designated as events and we must specify how the probabilities
of these events are to be calculated.
It is an interesting fact that it is impossible to construct a probability
space in which the set of outcomes is ℜ and every subset of ℜ is an event. For
this reason, we define the collection of events to be the smallest collection of
subsets that satisfies the assumptions of the Kolmogorov probability model
and that contains every interval of the form (−∞, y]. This collection is
called the Borel sets and it is a very large collection of subsets of ℜ. In
particular, it contains every interval of real numbers and every set that can
be constructed by applying a countable number of set operations (union,
intersection, complementation) to intervals. Most students will never see a
set that is not a Borel set!
Finally, we must define a probability measure that assigns probabilities
to Borel sets. Of course, we want to do so in a way that preserves the
probability structure of the experiment in question. The only way to do so
is to define the probability of each Borel set B to be the probability of the
set of outcomes to which X assigns a value in B. This set of outcomes is
denoted by
X⁻¹(B) = {s ∈ S : X(s) ∈ B}
and is depicted in Figure 3.9.
[Figure 3.9: The inverse image of a Borel set. The subset X⁻¹(B) of the sample space S is mapped by X into the Borel set B ⊂ ℜ.]
How do we know that the set of outcomes to which X assigns a value in
B is an event and therefore has a probability? We don’t, so we guarantee
that it is by including this requirement in our formal definition of a random
variable.
Definition 3.4 A function X : S → ℜ is a random variable if and only if
P ({s ∈ S : X(s) ≤ y})
exists for all choices of y ∈ ℜ.
We will denote the probability measure induced by the random variable
X by PX. The following equation defines various representations of PX:
PX((−∞, y]) = P(X⁻¹((−∞, y]))
            = P({s ∈ S : X(s) ∈ (−∞, y]})
            = P(−∞ < X ≤ y)
            = P(X ≤ y)
A probability measure on the Borel sets is called a probability distribution
and PX is called the distribution of the random variable X. A hallmark
feature of probability theory is that we study the distributions of random
variables rather than arbitrary probability measures. One important reason
for this emphasis is that many different experiments may result in identical
distributions. For example, the random variable in Example 3.17 might have
the same distribution as a random variable that assigns 1 to male newborns
and 0 to female newborns.
Cumulative Distribution Functions Our construction of the proba-
bility measure induced by a random variable suggests that the following
function will be useful in describing the properties of random variables.
Definition 3.5 The cumulative distribution function (cdf) of a random var-
iable X is the function F : ℜ → ℜ defined by
F(y) = P(X ≤ y).
Example 3.17 (continued) We consider two probability structures
that might obtain in the case of a typical penny.
(a) A typical penny is tossed.
For this experiment, P(H) = P(T) = 0.5, and the following values of
the cdf are easily determined:
– If y < 0, e.g., y = −9.1185 or y = −0.3018, then
F(y) = P(X ≤ y) = P(∅) = 0.
– F(0) = P(X ≤ 0) = P({T}) = 0.5.
– If y ∈ (0, 1), e.g., y = 0.6241 or y = 0.9365, then
F(y) = P(X ≤ y) = P({T}) = 0.5.
– F(1) = P(X ≤ 1) = P({T, H}) = 1.
– If y > 1, e.g., y = 1.5248 or y = 7.7397, then
F(y) = P(X ≤ y) = P({T, H}) = 1.
The entire cdf is plotted in Figure 3.10.
[Figure 3.10: The cumulative distribution function for tossing a penny with P(Heads) = 0.5.]
(b) A typical penny is spun.
For this experiment, we assume that P(H) = 0.3 and P(T) = 0.7 (see
Section 1.1.1). Then the following values of the cdf are easily deter-
mined:
– If y < 0, e.g., y = −1.6633 or y = −0.5485, then
F(y) = P(X ≤ y) = P(∅) = 0.
– F(0) = P(X ≤ 0) = P({T}) = 0.7.
– If y ∈ (0, 1), e.g., y = 0.0685 or y = 0.4569, then
F(y) = P(X ≤ y) = P({T}) = 0.7.
– F(1) = P(X ≤ 1) = P({T, H}) = 1.
– If y > 1, e.g., y = 1.4789 or y = 2.6117, then
F(y) = P(X ≤ y) = P({T, H}) = 1.
The entire cdf is plotted in Figure 3.11.
[Figure 3.11: The cumulative distribution function for spinning a penny with P(Heads) = 0.3.]
Example 3.18 (continued) Suppose that the coin is fair, so that each
of the four possible outcomes in S is equally likely, i.e., has probability 0.25.
Then the following values of the cdf are easily determined:
• If y < 0, e.g., y = −4.2132 or y = −0.5615, then
F(y) = P(X ≤ y) = P(∅) = 0.
• F(0) = P(X ≤ 0) = P({TT}) = 0.25.
• If y ∈ (0, 1), e.g., y = 0.3074 or y = 0.6924, then
F(y) = P(X ≤ y) = P({TT}) = 0.25.
• F(1) = P(X ≤ 1) = P({TT, HT, TH}) = 0.75.
• If y ∈ (1, 2), e.g., y = 1.4629 or y = 1.5159, then
F(y) = P(X ≤ y) = P({TT, HT, TH}) = 0.75.
• F(2) = P(X ≤ 2) = P({TT, HT, TH, HH}) = 1.
• If y > 2, e.g., y = 2.1252 or y = 3.7790, then
F(y) = P(X ≤ y) = P({TT, HT, TH, HH}) = 1.
The entire cdf is plotted in Figure 3.12.
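Each of these cdfs is a step function, and its values can be computed directly from the probabilities of the individual outcomes. A Python sketch for the two-toss example (illustrative names):

    # Distribution of Y, the number of Heads in two tosses of a fair coin.
    pmf = {0: 0.25, 1: 0.50, 2: 0.25}

    def F(y):
        """Cumulative distribution function F(y) = P(Y <= y)."""
        return sum(p for value, p in pmf.items() if value <= y)

    for y in (-1, 0, 0.5, 1, 1.5, 2, 3):
        print(y, F(y))   # 0, 0.25, 0.25, 0.75, 0.75, 1.0, 1.0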
Let us make some observations about the cdfs that we have plotted.
First, each cdf assumes its values in the unit interval, [0, 1]. This is a general
property of cdfs: each F(y) = P(X ≤ y), and probabilities necessarily
assume values in [0, 1].
Second, each cdf is nondecreasing; i.e., if y2 > y1, then F(y2) ≥ F(y1).
This is also a general property of cdfs, for suppose that we observe an out-
come s such that X(s) ≤ y1. Because y1 < y2, it follows that X(s) ≤ y2.
Thus, {X ≤ y1} ⊂ {X ≤ y2} and therefore
F (y1) = P (X ≤ y1) ≤ P (X ≤ y2) = F (y2) .
Finally, each cdf equals 1 for sufficiently large y and 0 for sufficiently
small y. This is not a general property of cdfs—it occurs in our examples
because X(S) is a bounded set, i.e., there exist finite real numbers a and b
such that every x ∈ X(S) satisfies a ≤ x ≤ b. However, all cdfs do satisfy
the following properties:
lim_{y→∞} F(y) = 1   and   lim_{y→−∞} F(y) = 0.
[Figure 3.12: The cumulative distribution function for tossing two pennies with P(Heads) = 0.5 and counting the number of Heads.]
Independence We say that two random variables, X1 and X2, are inde-
pendent if each event defined by X1 is independent of each event defined by
X2. More precisely,
Definition 3.6 Let X1 : S → ℜ and X2 : S → ℜ be random variables. X1
and X2 are independent if and only if, for each y1 ∈ ℜ and each y2 ∈ ℜ,
P(X1 ≤ y1, X2 ≤ y2) = P(X1 ≤ y1) · P(X2 ≤ y2).
This definition can be extended to mutually independent collections of ran-
dom variables in precisely the same way that we extended Definition 3.2 to
Definition 3.3.
Intuitively, two random variables are independent if the distribution of
either does not depend on the value of the other. As we discussed in Section
3.4, in most applications we will appeal to common sense, our knowledge of
science, etc., to decide if independence is a property that we wish to incorpo-
rate into our mathematical model of the experiment in question. If it is, then
we will assume that the appropriate random variables are independent. This
assumption will allow us to apply many powerful theorems from probability
and statistics that are only true of independent random variables.
3.6 Case Study: Padrolling in Milton Murayama’s
All I asking for is my body
The American dice game Craps evolved from the English dice game Hazard:
“According to tradition, blacks living around New Orleans tried
their hand at Hazard. . . In the course of time they modified the
rules and playing procedures so greatly that they ended up in-
venting the game of Craps (in the U.S. idiom known as Crap-
shooting or Shooting Craps and here identified as Private Craps
to distinguish it from Open Craps and the more formalized vari-
ants offered in gambling casinos). . . . The popularity of the pri-
vate game of Craps with the U.S. military personnel during World
Wars I and II helped to spread that game to many parts of the
world."⁵
Craps is played with two fair dice, each marked in a specific way. According
to Hoyle,
"Each face of [each] die is marked with one to six dots, opposite
faces representing. . . numbers adding to seven; if the vertical face
toward you is 5, and the horizontal face on top of the die is 6,
[then] the 3 should be on the vertical face to your right."⁶
The shooter rolls the pair of dice, resulting in one of 6 × 6 = 36 possible
outcomes. Of interest is the combined number of dots on the horizontal faces
atop the two dice, a number that we denote by the random variable X. The
possible values of X are displayed in Figure 3.13.
Let x denote the value of X produced by the first roll. The game ends
immediately if x ∈ {2, 3, 7, 11, 12}. If x ∈ {7, 11}, then x is a natural and
the shooter wins; if x ∈ {2, 3, 12}, then x is craps and the shooter loses;
otherwise, x becomes the shooter’s point. If the first roll is not decisive,
then the shooter continues to roll until he either (a) again rolls x (makes
his point), in which case he wins, or (b) rolls 7 (craps out), in which case he
loses.

⁵"Dice and dice games," The New Encyclopædia Britannica in 30 Volumes, Macropædia, Volume 5, 1974, pp. 702–706.

⁶Richard L. Frey, According to Hoyle, Fawcett Publications, 1970, p. 266.

      1   2   3   4   5   6
  1   2   3   4   5   6   7
  2   3   4   5   6   7   8
  3   4   5   6   7   8   9
  4   5   6   7   8   9  10
  5   6   7   8   9  10  11
  6   7   8   9  10  11  12

Figure 3.13: The possible outcomes of rolling two standard dice.
A game of craps is fair when each of the 36 outcomes in Figure 3.13 is
equally likely. Fairness is usually ensured by tossing the dice from a cup, or,
more crudely, by tossing them against a wall. In a fair game of craps, we
have the following probabilities:
P(X = 7) = 6/36
P(X = 6) = P(X = 8) = 5/36
P(X = 5) = P(X = 9) = 4/36
P(X = 4) = P(X = 10) = 3/36
P(X = 3) = P(X = 11) = 2/36
P(X = 2) = P(X = 12) = 1/36
Let us begin by calculating the probability that the shooter wins a fair game
of craps.
There are several ways for the shooter to win. We will calculate the
probability of each, then sum these probabilities.
• Roll a natural.
P(X ∈ {7, 11}) = (6 + 2)/36 = 2/3².
• Roll x = 6 or x = 8, then make point.
First,
P(X ∈ {6, 8}) = (5 + 5)/36 = 5/18.
Then, the shooter must roll x before rolling 7. Other outcomes are
ignored. There are 5 ways to roll x versus 6 ways to roll 7, so the
conditional probability of making point is 5/11. Hence, the probability
of the shooter winning in this way is
(5/18) · (5/11) = 25/(2 · 3² · 11).
• Roll x = 5 or x = 9, then make point.
First,
P(X ∈ {5, 9}) = (4 + 4)/36 = 2/9.
Then, the shooter must roll x before rolling 7. Other outcomes are
ignored. There are 4 ways to roll x versus 6 ways to roll 7, so the
conditional probability of making point is 4/10. Hence, the probability
of the shooter winning in this way is
(2/9) · (4/10) = 4/(3² · 5).
• Roll x = 4 or x = 10, then make point.
First,
P(X ∈ {4, 10}) = (3 + 3)/36 = 1/6.
Then, the shooter must roll x before rolling 7. Other outcomes are
ignored. There are 3 ways to roll x versus 6 ways to roll 7, so the
conditional probability of making point is 3/9. Hence, the probability
of the shooter winning in this way is
(1/6) · (3/9) = 1/(2 · 3²).
The probability that the shooter wins is
2/3² + 25/(2 · 3² · 11) + 4/(3² · 5) + 1/(2 · 3²) = 244/495 ≈ 0.4929.
Thus, the shooter is slightly more likely to lose than to win a fair game of
craps.
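The same bookkeeping can be packaged as a short program. The following Python sketch (function name is illustrative) uses exact rational arithmetic to reproduce 244/495:

    from fractions import Fraction

    def p_win_fair_craps():
        """Probability that the shooter wins a fair game of Craps."""
        ways = {t: 6 - abs(t - 7) for t in range(2, 13)}    # ways to roll each total
        p = Fraction(ways[7] + ways[11], 36)                # natural on the first roll
        for point in (4, 5, 6, 8, 9, 10):
            # Roll the point, then make it before rolling a 7.
            p += Fraction(ways[point], 36) * Fraction(ways[point], ways[point] + ways[7])
        return p

    p = p_win_fair_craps()
    print(p, float(p))   # 244/495, about 0.4929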
Milton Murayama’s 1959 novel, All I asking for is my body, is a brilliant
evocation of nisei (second-generation Japanese American) life on Hawaiian
sugar plantations in the 1930s.7 One of its central concerns is the concept
of Japanese honor and its implications for the young protagonist/narrator,
Kiyoshi, and his siblings. Years earlier, Kiyoshi’s parents had sacrificed their
future to pay Kiyoshi’s grandfather’s debts; now they owe the impossible sum
of $6000 and they expect their children to do likewise. Toward the novel’s
end, Japan attacks Pearl Harbor and Kiyoshi subsequently volunteers for an
all-nisei regiment that will fight in Europe. In the final chapter, he contrives
to win $6000 by playing Craps.
Kiyoshi had watched a former classmate, Hiroshi Sakai, play Craps at
the Citizens’ Quarters in Kahana.
“It was weird the way he kept winning. Whenever he rolled, the
dice rolled in unison like the wheels of a cart, and even when one
die rolled ahead of the other, neither flipped on its side. The
Kahana players finally refused to fade [bet against] him, and he
stopped coming.”
We subsequently learn that Hiroshi’s technique is called padrolling.
In the Army,
“Everybody had money and every third guy was a crapshooter.
The sight of all that money drove me mad. There was $25,000
at least floating around in the crap games.. . . Most of the games
were played on blankets on barrack floors, the dice rolled by
hand. There were a few guys who rolled the dice the way Hiroshi
did at the Citizens’ Quarters in Kahana. The dice didn’t bounce
but rolled out in unison like the wheels of a cart. There had to
be an advantage to that.”
Kiyoshi buys a pair of dice and examines them carefully. He realizes that,
by rolling the dice “like the wheels of a cart,” he can keep the sides of the
dice that form the axis of the wheels from appearing. Then, by combining
certain numbers to form the axis, he can improve his chance of winning.
Kiyoshi teaches himself to padroll and develops the following system for
choosing the axis:
1. For the initial roll, use the 1-6 axis for each die.
Padrolling this axis has the effect of eliminating the first and sixth rows
and columns in Figure 3.13, resulting in the following set of possible
outcomes:

        2    3    4    5
   2    4    5    6    7
   3    5    6    7    8
   4    6    7    8    9
   5    7    8    9   10

7 I am indebted to M. Lynn Weiss for bringing this novel to my attention.
Notice that this choice eliminates the possibility of crapping out! Fur-
thermore, assuming that the 16 remaining outcomes are equally likely,
it also improves the chance of rolling a natural from 4/18 to 4/16.
2. If x ∈ {6, 8}, then use the 1-6 axis on one die and the 2-5 axis on the
other.
Padrolling this axis results in the following set of possible outcomes:
        1    3    4    6
   2    3    5    6    8
   3    4    6    7    9
   4    5    7    8   10
   5    6    8    9   11
With this choice, there are 3 ways to roll x versus 2 ways to roll 7.
Again assuming that the 16 remaining outcomes are equally likely,
this choice improves the conditional probability of making point from
5/11 to 3/5.
3. If x ∈ {4, 5, 9, 10}, then use the 1-6 axis on one die and the 3-4 axis on
the other.
Padrolling this axis results in the following set of possible outcomes:
        1    2    5    6
   2    3    4    7    8
   3    4    5    8    9
   4    5    6    9   10
   5    6    7   10   11
With this choice, there are 2 ways to roll x versus 2 ways to roll 7.
Again, assume that the 16 remaining outcomes are equally likely. If x ∈
{5, 9}, then this choice improves the conditional probability of making
point from 4/10 to 2/4. If x ∈ {4, 10}, then this choice improves the
conditional probability of making point from 3/9 to 2/4.
If a shooter padrolls successfully, then the probability that he will win
using Kiyoshi’s system is
4/16 + (6/16) · (3/5) + (4/16) · (2/4) + (2/16) · (2/4) = 53/80 = 0.6625,
a substantial improvement on his chance of winning a fair game. “And,”
Kiyoshi rationalizes, “it wasn’t really cheating. The others had the option
of stopping any of your rolls, or they could play with a cup, or have the
roller bang the dice against the wall, or use a canvas or the bare floor instead
of a blanket.” So, Kiyoshi padrolls. I leave to my readers the pleasure of
discovering whether or not he succeeds in winning the $6000 his family needs.
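Readers who want to check Kiyoshi's arithmetic can do so with a few lines of R; the sketch below reproduces the calculation displayed above.

4/16 +                 # natural (7 or 11) on the 1-6/1-6 axis
  (6/16) * (3/5) +     # point of 6 or 8, then the 1-6/2-5 axis
  (4/16) * (2/4) +     # point of 5 or 9, then the 1-6/3-4 axis
  (2/16) * (2/4)       # point of 4 or 10, then the 1-6/3-4 axis; the total is 53/80 = 0.6625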
3.7 Exercises
1. Consider three events that might occur when a new mine is dug in the
Cleveland National Forest in San Diego County, California:
A = { quartz specimens are found }
B = { tourmaline specimens are found }
C = { aquamarine specimens are found }
Assume the following probabilities: P(A) = 0.80, P(B) = 0.36, P(C) =
0.28, P(A ∩ B) = 0.29, P(A ∩ C) = 0.24, P(B ∩ C) = 0.16, and
P(A ∩ B ∩ C) = 0.13.
(a) Draw a suitable Venn diagram for this situation.
(b) Calculate the probability that both quartz and tourmaline will be
found, but not aquamarine.
(c) Calculate the probability that quartz will be found, but not tour-
maline or aquamarine.
(d) Calculate the probability that none of these types of specimens
will be found.
(e) Calculate the probability of Ac ∩ (B ∪ C).
2. Consider two urns, one containing four tickets labelled {1, 3, 4, 6}; the
other containing ten tickets, labelled {1, 3, 3, 3, 3, 4, 4, 4, 4, 6}.
(a) What is the probability of drawing a 3 from the first urn?
(b) What is the probability of drawing a 3 from the second urn?
(c) Which urn is a better model for throwing an astragalus? Why?
3. Suppose that five cards are dealt from a standard deck of playing cards.
(a) What is the probability of drawing a straight flush?
(b) What is the probability of drawing 4 of a kind?
Hint: Use the results of Exercise 2.5.6.
4. Suppose that four fair dice are thrown simultaneously.
(a) How many outcomes are possible?
(b) What is the probability that each top face shows a different num-
ber?
(c) What is the probability that the top faces show four numbers that
sum to five?
(d) What is the probability that at least one of the top faces shows
an odd number?
(e) What is the probability that three of the top faces show the same
odd number and the other top face shows an even number?
5. A dreidl is a four-sided top that contains a Hebrew letter on each side:
nun, gimmel, heh, shin. These letters are an acronym for the Hebrew
phrase nes gadol hayah sham (a great miracle happened there), which
refers to the miracle of the temple light that burned for eight days with
only one day’s supply of oil—the miracle celebrated at Chanukah. Here
we suppose that a fair dreidl (one that is equally likely to fall on each
of its four sides) is to be spun ten times. Compute the probability of
each of the following events:
(a) Five gimmels and five hehs;
(b) No nuns or shins;
(c) Two letters are absent and two letters are present;
(d) At least two letters are absent.
6. Suppose that P(A) = 0.7, P(B) = 0.6, and P(Ac ∩ B) = 0.2.
(a) Draw a Venn diagram that describes this experiment.
(b) Is it possible for A and B to be disjoint events? Why or why not?
(c) What is the probability of A ∪ Bc?
(d) Is it possible for A and B to be independent events? Why or why
not?
(e) What is the conditional probability of A given B?
7. Suppose that 20 percent of the adult population is hypertensive. Sup-
pose that an automated blood-pressure machine diagnoses 84 percent
of hypertensive adults as hypertensive and 23 percent of nonhyperten-
sive adults as hypertensive. A person is selected at random from the
adult population.
(a) Construct a tree diagram that describes this experiment.
(b) What is the probability that the automated blood-pressure ma-
chine will diagnose the selected person as hypertensive?
(c) Suppose that the automated blood-pressure machine does diag-
nose the selected person as hypertensive. What then is the prob-
ability that this person actually is hypertensive?
(d) The following passage appeared in a recent article (Bruce Bower,
Roots of reason, Science News, 145:72–75, January 29, 1994)
about how human beings think. Please comment on it in whatever
way seems appropriate to you.
And in a study slated to appear in COGNITION,
Cosmides and Tooby confront a cognitive bias known as
the “base-rate fallacy.” As an illustration, they cite a
1978 study in which 60 staff and students at Harvard
Medical School attempted to solve this problem: “If a
test to detect a disease whose prevalence is 1/1,000 has
a false positive rate of 5%, what is the chance that a
person found to have a positive result actually has the
disease, assuming you know nothing about the person’s
symptoms or signs?”
Nearly half the sample estimated this probability as
95 percent; only 11 gave the correct response of 2 percent.
Most participants neglected the base rate of the disease
(it strikes 1 in 1,000 people) and formed a judgment solely
from the characteristics of the test.
8. Mike owns a box that contains 6 pairs of 14-carat gold, cubic zirconia
earrings. The earrings are of three sizes: 3mm, 4mm, and 5mm. There
are 2 pairs of each size.
Each time that Mike needs an inexpensive gift for a female friend, he
randomly selects a pair of earrings from the box. If the selected pair is
4mm, then he buys an identical pair to replace it. If the selected pair
is 3mm, then he does not replace it. If the selected pair is 5mm, then
he tosses a fair coin. If he observes Heads, then he buys two identical
pairs of earrings to replace the selected pair; if he observes Tails, then
he does not replace the selected pair.
(a) What is the probability that the second pair selected will be 4mm?
(b) If the second pair was not 4mm, then what is the probability that
the first pair was 5mm?
9. The following puzzle was presented on National Public Radio’s Car
Talk:
RAY: Three different numbers are chosen at random, and
one is written on each of three slips of paper. The slips are
then placed face down on the table. The objective is to
choose the slip upon which is written the largest number.
Here are the rules: You can turn over any slip of paper
and look at the amount written on it. If for any reason you
think this is the largest, you’re done; you keep it. Otherwise
you discard it and turn over a second slip. Again, if you
think this is the one with the biggest number, you keep that
one and the game is over. If you don’t, you discard that one
too.
TOM: And you’re stuck with the third. I get it.
RAY: The chance of getting the highest number is one in
three. Or is it? Is there a strategy by which you can improve
the odds?
Solve the puzzle, i.e., determine an optimal strategy for finding the
highest number. What is the probability that your strategy will find
the highest number? Explain your answer.
10. It is a curious fact that approximately 85% of all U.S. residents who are
struck by lightning are men. Consider the population of U.S. residents,
from which a person is randomly selected. Let A denote the event that
the person is male and let B denote the event that the person will be
struck by lightning.
(a) Estimate P(A|B) and P(Ac|B).
(b) Compare P(A|B) and P(A). Are A and B independent events?
(c) Suggest reasons why P(A|B) is so much larger than P(Ac|B). It
is tempting to joke that men don’t know enough to come in out
of the rain! Why might there be some truth to this possibility,
i.e., why might men be more reluctant to take precautions than
women? Can you suggest other explanations?
11. For each of the following pairs of events, explain why A and B are
dependent or independent.
(a) Consider the population of U.S. citizens, from which a person is
randomly selected. Let A denote the event that the person is a
member of a chess club and let B denote the event that the person
is a woman.
(b) Consider the population of male U.S. citizens who are 30 years of
age. A man is selected at random from this population. Let A
denote the event that he will be bald before reaching 40 years of
age and let B denote the event that his father went bald before
reaching 40 years of age.
(c) Consider the population of students who attend high school in
the U.S. A student is selected at random from this population.
Let A denote the event that the student speaks Spanish and let
B denote the event that the student lives in Texas.
(d) Consider the population of months in the 20th century. A month
is selected at random from this population. Let A denote the
event that a hurricane crossed the North Carolina coastline during
this month and let B denote the event that it snowed in Denver,
Colorado, during this month.
(e) Consider the population of Hollywood feature films produced dur-
ing the 20th century. A movie is selected at random from this
population. Let A denote the event that the movie was filmed in
color and let B denote the event that the movie is a western.
(f) Consider the population of U.S. college freshmen, from which a
student is randomly selected. Let A denote the event that the
student attends the College of William & Mary, and let B denote
the event that the student graduated from high school in Virginia.
(g) Consider the population of all persons (living or dead) who have
earned a Ph.D. from an American university, from which one is
randomly selected. Let A denote the event that the person’s
Ph.D. was earned before 1950 and let B denote the event that
the person is female.
(h) Consider the population of persons who resided in New Orleans
before Hurricane Katrina. A person is selected at random from
this population. Let A denote the event that the person left New
Orleans before Katrina arrived, and let B denote the event that
the person belonged to a household whose 2004 income was below
the federal poverty line.
(i) Consider the population of all couples who married in the United
States in 1995. A couple is selected at random from this popu-
lation. Let A denote the event that the couple cohabited (lived
together) before marrying, and let B denote the event that the
couple had divorced by 2005.
12. Two graduate students are renting a house. Before leaving town for
winter break, each writes a check for her share of the rent. Emilie
writes her check on December 16. By chance, it happens that the
number of her check ends with the digits 16. Anne writes her check
on December 18. By chance, it happens that the number of her check
ends with the digits 18. What is the probability of such a coincidence,
i.e., that both students would use checks with numbers that end in the
same two digits as the date?
13. Suppose that X is a random variable with cdf
F(y) =  0      if y ≤ 0,
        y/3    if y ∈ [0, 1),
        2/3    if y ∈ [1, 2],
        y/3    if y ∈ [2, 3],
        1      if y ≥ 3.
Graph F and compute the following probabilities:
(a) P(X > 0.5)
(b) P(2 < X ≤ 3)
(c) P(0.5 < X ≤ 2.5)
(d) P(X = 1)
14. In Section 3.6, we calculated the probability that the shooter will win
a fair game of craps. In so doing, we glossed over a subtle point.
Suppose that the shooter’s first roll results in x = 8. Now the shooter
must roll until he rolls another 8, in which case he makes his point
and wins, or until he rolls a 7, in which case he craps out and loses.
We argued that “there are 5 ways to roll 8 versus 6 ways to roll 7, so
the conditional probability of making point is 5/11.” This argument
appears to ignore the possibility that the shooter might roll indefi-
nitely, never rolling 8 or 7. The following calculations eliminate that
possibility.
For i = 1, 2, 3, . . ., let Xi denote the result of roll i in a fair game of
craps. Assume that we have observed X1 = x = 8.
(a) Calculate the probability that X2 ∈ {7, 8}.
(b) Calculate the probability that X2 ∈ {7, 8} and that X3 ∈ {7, 8}.
(c) Calculate the probability that X2 ∈ {7, 8} and that X3 ∈ {7, 8}
and that X4 ∈ {7, 8}.
(d) What is the probability that the shooter will never roll another 7
or 8?
15. In the final chapter of All I asking for is my body, Kiyoshi places an
initial, double-or-nothing bet of $200. If he wins, he will have $400.
If he then wins a second double-or-nothing bet of $400, he will have
$800. And so on. If he wins five consecutive times, he will have $6400,
enough to pay his family’s debt.
(a) Calculate the probability that the shooter will win five consecutive
games of Craps if each of the games is fair.
(b) Calculate the probability that the shooter will win five consecutive
games of Craps if the shooter is allowed to use Kiyoshi’s padrolling
system.
(c) Kiyoshi recalls that “Hiroshi never lost.” Does this seem plausi-
ble?
Chapter 4
Discrete Random Variables
4.1 Basic Concepts
Our introduction of random variables in Section 3.5 was completely general,
i.e., the principles that we discussed apply to all random variables. In this
chapter, we will study an important special class of random variables, the
discrete random variables. One of the advantages of restricting attention to
discrete random variables is that the mathematics required to define various
fundamental concepts for this class is fairly minimal.
We begin with a formal definition.
Definition 4.1 A random variable X is discrete if X(S), the set of possible
values of X, is countable.
Our primary interest will be in random variables for which X(S) is finite;
however, there are many important random variables for which X(S) is denu-
merable. The methods described in this chapter apply to both possibilities.
In contrast to the cumulative distribution function (cdf) defined in Sec-
tion 3.5, we now introduce the probability mass function (pmf).
Definition 4.2 Let X be a discrete random variable. The probability mass
function (pmf) of X is the function f : ℜ → ℜ defined by
f(x) = P(X = x).
If f is the pmf of X, then f necessarily possesses several properties worth
noting:
1. f(x) ≥ 0 for every x ∈ ℜ.
2. If x ∉ X(S), then f(x) = 0.
3. By the definition of X(S),
∑_{x∈X(S)} f(x) = ∑_{x∈X(S)} P(X = x) = P(∪_{x∈X(S)} {x}) = P(X ∈ X(S)) = 1.
There is an important relation between the pmf and the cdf. For each
y ∈ ℜ, let
L(y) = {x ∈ X(S) : x ≤ y}
denote the values of X that are less than or equal to y. Then
F(y) = P(X ≤ y) = P(X ∈ L(y)) = ∑_{x∈L(y)} P(X = x) = ∑_{x∈L(y)} f(x).   (4.1)
Thus, the value of the cdf at y can be obtained by summing the values of
the pmf at all values x ≤ y.
More generally, we can compute the probability that X assumes its value
in any set B ⊂ ℜ by summing the values of the pmf over all values of X
that lie in B. Here is the formula:
P(X ∈ B) = ∑_{x∈X(S)∩B} P(X = x) = ∑_{x∈X(S)∩B} f(x).   (4.2)
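To make these relations concrete, here is a small R sketch (R is introduced in Section 4.2) for a hypothetical random variable with X(S) = {1, 2, 3} and pmf values 0.2, 0.5, 0.3; the values are invented purely for illustration.

x <- c(1, 2, 3)         # the possible values X(S)
f <- c(0.2, 0.5, 0.3)   # a hypothetical pmf
cumsum(f)               # the cdf F evaluated at 1, 2, 3, as in (4.1)
sum(f[x >= 2])          # P(X in B) for B = [2, infinity), as in (4.2)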
We now turn to some elementary examples of discrete random variables
and their pmfs.
4.2 Examples
Example 4.1 A fair coin is tossed and the outcome is Heads or Tails.
Define a random variable X by X(Heads) = 1 and X(Tails) = 0.
The pmf of X is the function f defined by
f(0) = P(X = 0) = 0.5,
f(1) = P(X = 1) = 0.5,
and f(x) = 0 for all x ∉ X(S) = {0, 1}.
Example 4.2 A typical penny is spun and the outcome is Heads or
Tails. Define a random variable X by X(Heads) = 1 and X(Tails) = 0.
Assuming that P(Heads) = 0.3 (see Section 1.1.1), the pmf of X is the
function f defined by
f(0) = P(X = 0) = 0.7,
f(1) = P(X = 1) = 0.3,
and f(x) = 0 for all x ∉ X(S) = {0, 1}.
Example 4.3 A fair die is tossed and the number of dots on the upper
face is observed. The sample space is S = {1, 2, 3, 4, 5, 6}. Define a random
variable X by X(s) = 1 if s is a prime number and X(s) = 0 if s is not a
prime number.
The pmf of X is the function f defined by
f(0) = P(X = 0) = P({4, 6}) = 1/3,
f(1) = P(X = 1) = P({1, 2, 3, 5}) = 2/3,
and f(x) = 0 for all x ∉ X(S) = {0, 1}.
Examples 4.1–4.3 have a common structure that we proceed to generalize.
Definition 4.3 A random variable X is a Bernoulli trial if X(S) = {0, 1}.
Traditionally, we call X = 1 a “success” and X = 0 a “failure”.
The family of probability distributions of Bernoulli trials is parametrized
(indexed) by a real number p ∈ [0, 1], usually by setting p = P(X = 1).
We communicate that X is a Bernoulli trial with success probability p by
writing X ∼ Bernoulli(p). The pmf of such a random variable is the function
f defined by
f(0) = P(X = 0) = 1 − p,
f(1) = P(X = 1) = p,
and f(x) = 0 for all x ∉ X(S) = {0, 1}.
Several important families of random variables can be derived from Ber-
noulli trials. Consider, for example, the familiar experiment of tossing a
fair coin twice and counting the number of Heads. In Section 4.4, we will
generalize this experiment and count the number of successes in n Bernoulli
trials. This will lead to the family of binomial probability distributions.
Bernoulli trials are also a fundamental ingredient of the St. Petersburg
Paradox, described in Example 4.13. In this experiment, a fair coin is tossed
until Heads is observed and the number of Tails is counted. More gen-
erally, consider an experiment in which a sequence of independent Bernoulli
trials, each with success probability p, is performed until the first success is
observed. Let X1, X2, X3, . . . denote the individual Bernoulli trials and let
Y denote the number of failures that precede the first success. Then the
possible values of Y are Y (S) = {0, 1, 2, . . .} and the pmf of Y is
f(j) = P(Y = j) = P(X_1 = 0, . . . , X_j = 0, X_{j+1} = 1)
     = P(X_1 = 0) · · · P(X_j = 0) · P(X_{j+1} = 1)
     = (1 − p)^j p
if j ∈ Y(S) and f(j) = 0 if j ∉ Y(S). This family of probability distributions
is also parametrized by a real number p ∈ [0, 1]. It is called the geometric
family and a random variable with a geometric distribution is said to be a
geometric random variable, written Y ∼ Geometric(p).
If Y ∼ Geometric(p) and k ∈ Y (S), then
F(k) = P(Y ≤ k) = 1 − P(Y > k) = 1 − P(Y ≥ k + 1).
Because the event {Y ≥ k + 1} occurs if and only if X_1 = · · · = X_{k+1} = 0, we
conclude that
F(k) = 1 − (1 − p)^{k+1}.
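As a quick numerical check, this formula agrees with the R function pgeom (described at the end of this section), which evaluates the cdf of the number of failures before the first success. The values p = 1/3 and k = 9 below are chosen only because they reappear in Example 4.4.

p <- 1/3; k <- 9
1 - (1 - p)^(k + 1)      # the formula derived above
pgeom(q = k, prob = p)   # R's geometric cdf; the two values agree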
Example 4.4 Gary is a college student who is determined to have a
date for an approaching formal. He believes that each woman he asks is twice
as likely to decline his invitation as to accept it, but he resolves to extend
invitations until one is accepted. However, each of his first ten invitations is
declined. Assuming that Gary’s assumptions about his own desirability are
correct, what is the probability that he would encounter such a run of bad
luck?
Gary evidently believes that he can model his invitations as a sequence
of independent Bernoulli trials, each with success probability p = 1/3. If
so, then the number of unsuccessful invitations that he extends is a random
variable Y ∼ Geometric(1/3) and
P(Y ≥ 10) = 1 − P(Y ≤ 9) = 1 − F(9) = 1 − [1 − (2/3)^{10}] ≈ 0.0173.
Either Gary is very unlucky or his assumptions are flawed. Perhaps
his probability model is correct, but p < 1/3. Perhaps, as seems likely,
the probability of success depends on who he asks. Or perhaps the trials
were not really independent.1 If Gary’s invitations cannot be modelled as
independent and identically distributed Bernoulli trials, then the geometric
distribution cannot be used.
Another important family of random variables is often derived by con-
sidering an urn model. Imagine an urn that contains m red balls and n black
balls. The experiment of present interest involves selecting k balls from the
urn in such a way that each of the \binom{m+n}{k} possible outcomes that might be
obtained is equally likely. Let X denote the number of red balls selected
in this manner. If we observe X = x, then x red balls were selected from
a total of m red balls and k − x black balls were selected from a total of n
black balls. Evidently, x ∈ X(S) if and only if x is an integer that satisfies
x ≤ min(m, k) and k − x ≤ min(n, k). Furthermore, if x ∈ X(S), then the
pmf of X is
f(x) = P(X = x) = #{X = x} / #S = \binom{m}{x} \binom{n}{k−x} / \binom{m+n}{k}.   (4.3)
This family of probability distributions is parametrized by a triple of integers,
(m, n, k), for which m, n ≥ 0, m + n ≥ 1, and 0 ≤ k ≤ m + n. It is called
the hypergeometric family and a random variable with a hypergeometric
distribution is said to be a hypergeometric random variable, written X ∼
Hypergeometric(m, n, k).
The trick to using the hypergeometric distribution in applications is to
recognize a correspondence between the actual experiment and an idealized
urn model, as in. . .
Example 4.5 Consider the hypothetical example described in Section 1.2,
in which 30 freshmen and 10 non-freshmen are randomly assigned exam A
or B. What is the probability that exactly 15 freshmen (and therefore exactly
5 non-freshmen) receive exam A?
In Example 2.5 we calculated that the probability in question is
\binom{30}{15} \binom{10}{5} / \binom{40}{20} = 39,089,615,040 / 137,846,528,820 ≈ 0.28.   (4.4)
1 In the actual incident on which this example is based, the women all lived in the same residential college. It seems doubtful that each woman was completely unaware of the invitation that preceded hers.
Let us re-examine this calculation. Suppose that we write each student’s
name on a slip of paper, mix the slips in a jar, then draw 20 slips without
replacement. These 20 students receive exam A; the remaining 20 students
receive exam B. Now drawing slips of paper from a jar is exactly like drawing
balls from an urn. There are m = 30 slips with freshman names (red balls)
and n = 10 slips with non-freshman names (black balls), of which we are
drawing k = 20 without replacement. Using the hypergeometric pmf defined
by (4.3), the probability of drawing exactly x = 15 freshman names is
\binom{m}{x} \binom{n}{k−x} / \binom{m+n}{k} = \binom{30}{15} \binom{10}{5} / \binom{40}{20},
the left-hand side of (4.4).
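The same number can be obtained directly in R, either from the choose function or from the hypergeometric pmf function dhyper described at the end of this section; the sketch below is a check, not a new calculation.

choose(30, 15) * choose(10, 5) / choose(40, 20)   # the ratio in (4.4)
dhyper(x = 15, m = 30, n = 10, k = 20)            # hypergeometric pmf; both are approximately 0.2836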
Example 4.6 (Adapted from an example analyzed by R.R. Sokal and
F.J. Rohlf (1969), Biometry: The Principles and Practice of Statistics in
Biological Research, W.H. Freeman and Company, San Francisco.)
All but 28 acacia trees (of the same species) were cleared from a study
area in Central America. The 28 remaining trees were freed from ants by one
of two types of insecticide. The standard insecticide (A) was administered
to 15 trees; an experimental insecticide (B) was administered to the other 13
trees. The assignment of insecticides to trees was completely random. At issue
was whether or not the experimental insecticide was more effective than the
standard insecticide in inhibiting future ant infestations.
Next, 16 separate ant colonies were situated roughly equidistant from the
acacia trees and permitted to invade them. Unless food is scarce, different
colonies will not compete for the same resources; hence, it could be presumed
that each colony would invade a different tree. In fact, the ants invaded 13
of the 15 trees treated with the standard insecticide and only 3 of the 13
trees treated with the experimental insecticide. If the two insecticides were
equally effective in inhibiting future infestations, then what is the probability
that no more than 3 ant colonies would have invaded trees treated with the
experimental insecticide?
This is a potentially confusing problem that is simplified by construct-
ing an urn model for the experiment. There are m = 13 trees with the
experimental insecticide (red balls) and n = 15 trees with the standard
insecticide (black balls). The ants choose k = 16 trees (balls). Let X de-
note the number of experimental trees (red balls) invaded by the ants; then
X ∼ Hypergeometric(13, 15, 16) and its pmf is
f(x) = P(X = x) = \binom{13}{x} \binom{15}{16−x} / \binom{28}{16}.
Notice that there are not enough standard trees for each ant colony to invade
one; hence, at least one ant colony must invade an experimental tree and
X = 0 is impossible. Thus,
P(X ≤ 3) = f(1) + f(2) + f(3)
         = \binom{13}{1}\binom{15}{15}/\binom{28}{16} + \binom{13}{2}\binom{15}{14}/\binom{28}{16} + \binom{13}{3}\binom{15}{13}/\binom{28}{16}
         ≈ 0.0010.
This reasoning illustrates the use of a statistical procedure called Fisher’s
exact test. The probability that we have calculated is an example of what
we will later call a significance probability. In the present example, the fact
that the significance probability is so small would lead us to challenge an
assertion that the experimental insecticide is no better than the standard
insecticide.
It is evident that calculations with the hypergeometric distribution can
become rather tedious. Accordingly, this is a convenient moment to in-
troduce computer software for the purpose of evaluating certain pmfs and
cdfs. The statistical programming language R includes functions that evalu-
ate pmfs and cdfs for a variety of distributions, including the geometric and
hypergeometric.2 For the geometric, these functions are dgeom and pgeom;
for the hypergeometric, these functions are dhyper and phyper. We can
calculate the probability in Example 4.4 as follows:
> 1-pgeom(q=9,prob=1/3)
[1] 0.01734153
Similarly, we can calculate the probability in Example 4.6 as follows:
> phyper(q=3,m=13,n=15,k=16)
[1] 0.001026009
2 R is a free, open-source implementation of S, developed at AT&T Bell Laboratories. See Appendix R for information about obtaining, installing, and using R.
4.3 Expectation
Sometime in the early 1650s, the eminent theologian and amateur mathe-
matician Blaise Pascal found himself in the company of the Chevalier de
Méré.3 De Méré posed to Pascal a famous problem: how to divide the pot
of an interrupted dice game. Pascal communicated the problem to Pierre
de Fermat in 1654, beginning a celebrated correspondence that established
a foundation for the mathematics of probability.
Pascal and Fermat began by agreeing that the pot should be divided
according to each player’s chances of winning it. For example, suppose that
each of two players has selected a number from the set S = {1, 2, 3, 4, 5, 6}.
For each roll of a fair die that produces one of their respective numbers, the
corresponding player receives a token. The first player to accumulate five
tokens wins a pot of $100. Suppose that the game is interrupted with Player
A having accumulated four tokens and Player B having accumulated only
one. The probability that Player B would have won the pot had the game
been completed is the probability that B’s number would have appeared
four more times before A’s number appeared one more time. Because we can
ignore rolls that produce neither number, this is equivalent to the probability
that a fair coin will have a run of four consecutive Heads, i.e., 0.5 · 0.5 · 0.5 ·
0.5 = 0.0625. Hence, according to Pascal and Fermat, Player B is entitled to
0.0625 · $100 = $6.25 from the pot and Player A is entitled to the remaining
$93.75.
The crucial concept in Pascal’s and Fermat’s analysis is the notion that
each prospect should be weighted by the chance of realizing that prospect.
This notion motivates
Definition 4.4 The expected value of a discrete random variable X, which
we will denote E(X) or simply EX, is the probability-weighted average of the
possible values of X, i.e.,
EX = ∑_{x∈X(S)} x P(X = x) = ∑_{x∈X(S)} x f(x).
Remark The expected value of X, EX, is often called the population
mean and denoted µ.
3 This account of the origins of modern probability can be found in Chapter 6 of David Bergamini’s Mathematics, Life Science Library, Time Inc., New York, 1963.
Example 4.7 If X ∼ Bernoulli(p), then
µ = EX = ∑_{x∈{0,1}} x P(X = x) = 0 · P(X = 0) + 1 · P(X = 1) = P(X = 1) = p.
Notice that, in general, the expected value of X is not the average of its
possible values. In this example, the possible values are X(S) = {0, 1} and
the average of these values is (always) 0.5. In contrast, the expected value
depends on the probabilities of the values.
Fair Value The expected payoff of a game of chance is sometimes called
the fair value of the game. For example, suppose that you own a slot ma-
chine that pays a jackpot of $1000 with probability p = 0.0005 and $0 with
probability 1−p = 0.9995. How much should you charge a customer to play
this machine? Letting X denote the payoff (in dollars), the expected payoff
per play is
EX = 1000 · 0.0005 + 0 · 0.9995 = 0.5;
hence, if you want to make a profit, then you should charge more than $0.50
per play. Suppose, however, that a rival owner of an identical slot machine
attempted to compete for the same customers. According to the theory of
microeconomics, competition would cause each of you to try to undercut the
other, eventually resulting in an equilibrium price of exactly $0.50 per play,
the fair value of the game.
We proceed to illustrate both the mathematics and the psychology of fair
value by considering several lotteries. A lottery is a choice between receiving
a certain payoff and playing a game of chance. In each of the following
examples, we emphasize that the value accorded the game of chance by a
rational person may be very different from the game’s expected value. In
this sense, the phrase “fair value” is often a misnomer.
Example 4.8a You are offered the choice between receiving a certain
$5 and playing the following game: a fair coin is tossed and you receive $10
or $0 according to whether Heads or Tails is observed.
The expected payoff from the game (in dollars) is
EX = 10 · 0.5 + 0 · 0.5 = 5,
so your options are equivalent with respect to expected earnings. One might
therefore suppose that a rational person would be indifferent to which option
he or she selects. Indeed, in my experience, some students prefer to take the
certain $5 and some students prefer to gamble on perhaps winning $10. For
this example, the phrase “fair value” seems apt.
Example 4.8b You are offered the choice between receiving a certain
$5000 and playing the following game: a fair coin is tossed and you receive
$10,000 or $0 according to whether Heads or Tails is observed.
The mathematical structure of this lottery is identical to that of the
preceding lottery, except that the stakes are higher. Again, the options are
equivalent with respect to expected earnings; again, one might suppose that
a rational person would be indifferent to which option he or she selects.
However, many students who opt to gamble on perhaps winning $10 in
Example 4.8a opt to take the certain $5000 in Example 4.8b.
Example 4.8c You are offered the choice between receiving a certain
$1 million and playing the following game: a fair coin is tossed and you
receive $2 million or $0 according to whether Heads or Tails is observed.
The mathematical structure of this lottery is identical to that of the
preceding two lotteries, except that the stakes are now much higher. Again,
the options are equivalent with respect to expected earnings; however, almost
every student to whom I have presented this lottery has expressed a strong
preference for taking the certain $1 million.
Example 4.9 You are offered the choice between receiving a certain $1
million and playing the following game: a fair coin is tossed and you receive
$5 million or $0 according to whether Heads or Tails is observed.
The expected payoff from this game (in millions of dollars) is
EX = 5 · 0.5 + 0 · 0.5 = 2.5,
so playing the game is the more attractive option with respect to expected
earnings. Nevertheless, most students opt to take the certain $1 million. This
should not be construed as an irrational decision. For example, the addition
of $1 million to my own modest estate would secure my eventual retirement.
The addition of an extra $4 million would be very pleasant indeed, allowing
me to increase my current standard of living. However, I do not value the
additional $4 million nearly as much as I value the initial $1 million. As
Aesop observed, “A little thing in hand is worth more than a great thing in
prospect.” For this example, the phrase “fair value” introduces normative
connotations that are not appropriate.
Example 4.10 Consider the following passage from a recent article
about investing:
“. . . it’s human nature to overweight low probabilities that offer
high returns. In one study, subjects were given a choice between
a 1-in-1000 chance to win $5000 or a sure thing to win $5; or a 1-
in-1000 chance of losing $5000 versus a sure loss of $5. In the first
case, the expected value (mathematically speaking) is making $5.
In the second case, it’s losing $5. Yet in the first situation, which
mimics a lottery, more than 70% of people asked chose to go for
the $5000. In the second situation, more than 80% would take
the $5 hit.”4
The author evidently considered the reported preferences paradoxical, but
are they really surprising? Plus or minus $5 will not appreciably alter the
financial situations of most subjects, but plus or minus $5000 will. It is
perfectly rational to risk a negligible amount on the chance of winning $5000
while declining to risk a negligible amount on the chance of losing $5000.
The following examples further explicate this point.
Example 4.11 The same article advises, “To limit completely irra-
tional risks, such as lottery tickets, try speculating only with money you
would otherwise use for simple pleasures, such as your morning coffee.”
Consider a hypothetical state lottery, in which 6 numbers are drawn
(without replacement) from the set {1, 2, . . . , 39, 40}. For $2, you can pur-
chase a ticket that specifies 6 such numbers. If the numbers on your ticket
match the numbers selected by the state, then you win $1 million; otherwise,
you win nothing. (For the sake of simplicity, we ignore the possibility that
you might have to split the jackpot with other winners and the possibility
that you might win a lesser prize.) Is buying a lottery ticket “completely
irrational”?
The probability of winning the lottery in question is
p = 1 / \binom{40}{6} = 1/3,838,380 ≈ 2.6053 × 10^{−7},
so your expected prize (in dollars) is approximately

10^6 · 2.6053 × 10^{−7} ≈ 0.26,

which is considerably less than the cost of a ticket.

4 Robert Frick, “The 7 Deadly Sins of Investing,” Kiplinger’s Personal Finance Magazine, March 1998, p. 138.

Evidently, it is completely irrational to buy tickets for this lottery as an investment strategy. Suppose,
however, that I buy one ticket per week and reason as follows: I will almost
certainly lose $2 per week, but that loss will have virtually no impact on my
standard of living; however, if by some miracle I win, then gaining $1 million
will revolutionize my standard of living. This can hardly be construed as
irrational behavior, although Robert Frick’s advice to speculate only with
funds earmarked for entertainment is well-taken.
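The fair-value arithmetic for this hypothetical lottery is a one-line calculation in R; the $2 ticket price is the one assumed above.

p <- 1 / choose(40, 6)   # probability of matching all 6 numbers, about 2.6053e-07
1e6 * p                  # expected prize, about $0.26
1e6 * p - 2              # expected net gain per $2 ticket, about -$1.74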
In most state lotteries, the fair value of the game is less than the cost of
a lottery ticket. This is only natural—lotteries exist because they generate
revenue for the state that runs them! (By the same reasoning, gambling must
favor the house because casinos make money for their owners.) However, on
very rare occasions a jackpot is so large that the typical situation is reversed.
Several years ago, an Australian syndicate noticed that the fair value of a
Florida state lottery exceeded the price of a ticket and purchased a large
number of tickets as an (ultimately successful) investment strategy. And
Voltaire once purchased every ticket in a raffle upon noting that the prize
was worth more than the total cost of the tickets being sold!
Example 4.12 If the first case described in Example 4.10 mimics a
lottery, then the second case mimics insurance. Mindful that insurance
companies (like casinos) make money, Ambrose Bierce offered the follow-
ing definition:
“INSURANCE, n. An ingenious modern game of chance in which
the player is permitted to enjoy the comfortable conviction that
he is beating the man who keeps the table.”5
However, while it is certainly true that the fair value of an insurance policy
is less than the premiums required to purchase it, it does not follow that
buying insurance is irrational. I can easily afford to pay $200 per year for
homeowners insurance, but I would be ruined if all of my possessions were
destroyed by fire and I received no compensation for them. My decision that
a certain but affordable loss is preferable to an unlikely but catastrophic loss
is an example of risk-averse behavior.
5 Ambrose Bierce, The Devil’s Dictionary, 1881–1906. In The Collected Writings of Ambrose Bierce, Citadel Press, Secaucus, NJ, 1946.

Before presenting our concluding example of fair value, we derive a useful
formula. Suppose that X : S → ℜ is a discrete random variable and φ : ℜ → ℜ
is a function. Let Y = φ(X). Then Y : S → ℜ is a random variable and
Eφ(X) = EY = ∑_{y∈Y(S)} y P(Y = y)
           = ∑_{y∈Y(S)} y P(φ(X) = y)
           = ∑_{y∈Y(S)} y P(X ∈ φ^{−1}(y))
           = ∑_{y∈Y(S)} y [ ∑_{x∈φ^{−1}(y)} P(X = x) ]
           = ∑_{y∈Y(S)} ∑_{x∈φ^{−1}(y)} y P(X = x)
           = ∑_{y∈Y(S)} ∑_{x∈φ^{−1}(y)} φ(x) P(X = x)
           = ∑_{x∈X(S)} φ(x) P(X = x)
           = ∑_{x∈X(S)} φ(x) f(x).   (4.5)
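A small numerical check of (4.5): for an invented pmf and the function φ(x) = x², compute Eφ(X) both from the definition of EY (summing over the values of Y) and from the right-hand side of (4.5) (summing over the values of X). The numbers below are hypothetical.

x <- c(-1, 0, 1, 2)           # invented values X(S)
f <- c(0.1, 0.2, 0.3, 0.4)    # invented pmf
sum(x^2 * f)                  # right-hand side of (4.5): 2.0
y <- unique(x^2)              # the values of Y = X^2, namely 1, 0, 4
sum(y * sapply(y, function(v) sum(f[x^2 == v])))   # definition of EY: also 2.0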
Example 4.13 Consider a game in which the jackpot starts at $1 and
doubles each time that Tails is observed when a fair coin is tossed. The game
terminates when Heads is observed for the first time. How much would you
pay for the privilege of playing this game? How much would you charge if
you were responsible for making the payoff?
This is a curious game. With high probability, the payoff will be rather
small; however, there is a small chance of a very large payoff. In response to
the first question, most students discount the latter possibility and respond
that they would only pay a small amount, rarely more than $4. In response to
the second question, most students recognize the possibility of a large payoff
and demand payment of a considerably greater amount. Let us consider if
the notion of fair value provides guidance in reconciling these perspectives.
Let X denote the number of Tails that are observed before the game
terminates. Then X(S) = {0, 1, 2, . . .} and the geometric random variable
X has pmf
f(x) = P(x consecutive Tails, then a Head) = 0.5^{x+1}.
The payoff from this game (in dollars) is Y = 2^X; hence, the expected payoff is
E 2^X = ∑_{x=0}^{∞} 2^x · 0.5^{x+1} = ∑_{x=0}^{∞} 0.5 = ∞.
This is quite startling! The “fair value” of this game provides very little
insight into the value that a rational person would place on playing it. This
remarkable example is quite famous—it is known as the St. Petersburg Para-
dox.
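A simulation makes the paradox vivid: because the expected payoff is infinite, sample averages of the payoff do not settle down as the number of plays grows. The sketch below uses R's random number generator rgeom to draw the number of Tails; it is an illustration, not a proof.

set.seed(1)                      # any seed; results vary from run to run
x <- rgeom(100000, prob = 0.5)   # number of Tails before the first Head
payoff <- 2^x
mean(payoff)                     # unstable: repeat with other seeds and the average changes wildly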
Properties of Expectation We now state (and sometimes prove) some
useful consequences of Definition 4.4 and Equation 4.5.
Theorem 4.1 Let X denote a discrete random variable and suppose that
P(X = c) = 1. Then EX = c.
Theorem 4.1 states that, if a random variable always assumes the same
value c, then the probability-weighted average of the values that it assumes
is c. This should be obvious.
Theorem 4.2 Let X denote a discrete random variable and suppose that
c ∈ ℜ is constant. Then
E[cφ(X)] = ∑_{x∈X(S)} c φ(x) f(x) = c ∑_{x∈X(S)} φ(x) f(x) = c E[φ(X)].
Theorem 4.2 states that we can interchange the order of multiplying by
a constant and computing the expected value. Notice that this property of
expectation follows directly from the analogous property for summation.
Theorem 4.3 Let X denote a discrete random variable. Then
E[φ1(X) + φ2(X)] = ∑_{x∈X(S)} [φ1(x) + φ2(x)] f(x)
                 = ∑_{x∈X(S)} [φ1(x) f(x) + φ2(x) f(x)]
                 = ∑_{x∈X(S)} φ1(x) f(x) + ∑_{x∈X(S)} φ2(x) f(x)
                 = E[φ1(X)] + E[φ2(X)].
Theorem 4.3 states that we can interchange the order of adding functions
of a random variable and computing the expected value. Again, this property
of expectation follows directly from the analogous property for summation.
Theorem 4.4 Let X1 and X2 denote discrete random variables. Then
E [X1 + X2] = EX1 + EX2.
Theorem 4.4 states that the expected value of a sum equals the sum of
the expected values.
Variance Now suppose that X is a discrete random variable, let µ = EX
denote its expected value, or population mean, and define a function φ :
ℜ → ℜ by
φ(x) = (x − µ)².
For any x ∈ ℜ, φ(x) is the squared deviation of x from the expected value
of X. If X always assumes the value µ, then φ(X) always assumes the value
0; if X tends to assume values near µ, then φ(X) will tend to assume small
values; if X often assumes values far from µ, then φ(X) will often assume
large values. Thus, Eφ(X), the expected squared deviation of X from its
expected value, is a measure of the variability of the population X(S). We
summarize this observation in
Definition 4.5 The variance of a discrete random variable X, which we
will denote Var(X) or simply Var X, is the probability-weighted average of
the squared deviations of X from EX = µ, i.e.,
Var X = E(X − µ)² = ∑_{x∈X(S)} (x − µ)² f(x).
Remark The variance of X, Var X, is often called the population vari-
ance and denoted σ².
Denoting the population variance by σ² may strike the reader as awk-
ward notation, but there is an excellent reason for it. Because the variance
measures squared deviations from the population mean, it is measured in
different units than either the random variable itself or its expected value.
For example, if X measures length in meters, then so does EX, but Var X is
measured in meters squared. To recover a measure of population variability
in the original units of measurement, we take the square root of the variance
and obtain σ.
Definition 4.6 The standard deviation of a random variable is the square
root of its variance.
Remark The standard deviation of X, often denoted σ, is often called
the population standard deviation.
Example 4.1 (continued) If X ∼ Bernoulli(p), then
σ² = Var X = E(X − µ)² = (0 − µ)² · P(X = 0) + (1 − µ)² · P(X = 1)
   = (0 − p)² (1 − p) + (1 − p)² p
   = p(1 − p)(p + 1 − p)
   = p(1 − p).
Before turning to a more complicated example, we establish a useful fact.
Theorem 4.5 If X is a discrete random variable, then
Var X = E(X − µ)² = E(X² − 2µX + µ²)
      = EX² + E(−2µX) + Eµ²
      = EX² − 2µ EX + µ²
      = EX² − 2µ² + µ²
      = EX² − (EX)².
A straightforward way to calculate the variance of a discrete random variable
that assumes a fairly small number of values is to exploit Theorem 4.5 and
organize one’s calculations in the form of a table.
Example 4.14 Suppose that X is a random variable whose possible
values are X(S) = {2, 3, 5, 10}. Suppose that the probability of each of these
values is given by the formula f(x) = P(X = x) = x/20.
(a) Calculate the expected value of X.
(b) Calculate the variance of X.
(c) Calculate the standard deviation of X.
Solution
    x     f(x)    x f(x)     x²     x² f(x)
    2     0.10     0.20       4      0.40
    3     0.15     0.45       9      1.35
    5     0.25     1.25      25      6.25
   10     0.50     5.00     100     50.00
  total            6.90             58.00

(a) µ = EX = 0.2 + 0.45 + 1.25 + 5 = 6.9.
(b) σ² = Var X = EX² − (EX)² = (0.4 + 1.35 + 6.25 + 50) − 6.9² = 58 − 47.61 = 10.39.
(c) σ = √10.39 ≈ 3.2234.
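The tabular calculation is easily reproduced in R:

x <- c(2, 3, 5, 10)
f <- x / 20                      # the pmf f(x) = x/20
mu <- sum(x * f)                 # expected value: 6.9
sigma2 <- sum(x^2 * f) - mu^2    # variance, via Theorem 4.5: 10.39
sqrt(sigma2)                     # standard deviation: approximately 3.2234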
Now suppose that X : S → ℜ is a discrete random variable and φ : ℜ → ℜ
is a function. Let Y = φ(X). Then Y is a discrete random variable and
Var φ(X) = Var Y = E[Y − EY]² = E[φ(X) − Eφ(X)]².   (4.6)
We conclude this section by stating (and sometimes proving) some useful
consequences of Definition 4.5 and Equation 4.6.
Theorem 4.6 Let X denote a discrete random variable and suppose that
c ∈ ℜ is constant. Then
Var(X + c) = Var X.
Although possibly startling at first glance, this result is actually quite
intuitive. The variance depends on the squared deviations of the values of
X from the expected value of X. If we add a constant to each value of X,
then we shift both the individual values of X and the expected value of X
by the same amount, preserving the squared deviations. The variability of
a population is not affected by shifting each of the values in the population
by the same amount.
Theorem 4.7 Let X denote a discrete random variable and suppose that
c ∈ ℜ is constant. Then
Var(cX) = E[cX − E(cX)]² = E[cX − c EX]² = E[c(X − EX)]²
        = E[c²(X − EX)²] = c² E(X − EX)² = c² Var X.
To understand this result, recall that the variance is measured in the
original units of measurement squared. If we take the square root of each
expression in Theorem 4.7, then we see that one can interchange multiplying
a random variable by a nonnegative constant with computing its standard
deviation.
Theorem 4.8 If the discrete random variables X1 and X2 are independent,
then
Var(X1 + X2) = Var X1 + Var X2.
Theorem 4.8 is analogous to Theorem 4.4. However, in order to ensure
that the variance of a sum equals the sum of the variances, the random
variables must be independent.
4.4 Binomial Distributions
Suppose that a fair coin is tossed twice and the number of Heads is counted.
Let Y denote the total number of Heads. Because the sample space has four
equally likely outcomes, viz.,
S = {HH, HT, TH, TT},
the pmf of Y is easily determined:
f(0) = P(Y = 0) = P({TT}) = 0.25,
f(1) = P(Y = 1) = P({HT, TH}) = 0.5,
f(2) = P(Y = 2) = P({HH}) = 0.25,
and f(y) = 0 if y ∉ Y(S) = {0, 1, 2}.
Referring to representation (c) of Example 3.18, the above experiment
has the following characteristics:
• Let X1 denote the number of Heads observed on the first toss and let
X2 denote the number of Heads observed on the second toss. Then
the random variable of interest is Y = X1 + X2.
• The random variables X1 and X2 are independent.
• The random variables X1 and X2 have the same distribution, viz.
X1, X2 ∼ Bernoulli(0.5).
We proceed to generalize this example in two ways:
1. We allow any finite number of trials.
2. We allow any success probability p ∈ [0, 1].
Definition 4.7 Let X1, . . . , Xn be mutually independent Bernoulli trials,
each with success probability p. Then
Y = ∑_{i=1}^{n} X_i
is a binomial random variable, denoted Y ∼ Binomial(n; p).
Applying Theorem 4.4, we see that the expected value of a binomial
random variable is the product of the number of trials and the probability
of success:
EY = E(∑_{i=1}^{n} X_i) = ∑_{i=1}^{n} EX_i = ∑_{i=1}^{n} p = np.
Furthermore, because the trials are independent, we can apply Theorem 4.8
to calculate the variance:
Var Y = Var(∑_{i=1}^{n} X_i) = ∑_{i=1}^{n} Var X_i = ∑_{i=1}^{n} p(1 − p) = np(1 − p).
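A quick simulation is consistent with these formulas. The sketch below draws 10,000 sets of n = 10 Bernoulli trials with p = 0.3 (using R's rbinom, which is not otherwise used in this chapter), so that np = 3 and np(1 − p) = 2.1.

set.seed(2)
n <- 10; p <- 0.3
trials <- matrix(rbinom(10000 * n, size = 1, prob = p), ncol = n)  # each row: n Bernoulli trials
y <- rowSums(trials)     # 10,000 realizations of Y
mean(y); var(y)          # close to np = 3 and np(1 - p) = 2.1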
Because Y counts the total number of successes in n Bernoulli trials, it
should be apparent that Y (S) = {0, 1, . . . , n}. Let f denote the pmf of Y .
For fixed n, p, and j ∈ Y (S), we wish to determine
f(j) = P(Y = j).
To illustrate the reasoning required to make this determination, suppose that
there are n = 6 trials, each with success probability p = 0.3, and that we
wish to determine the probability of observing exactly j = 2 successes. Some
examples of experimental outcomes for which Y = 2 include the following:
110000 000011 010010
Because the trials are mutually independent, we see that
P(110000) = 0.3 · 0.3 · 0.7 · 0.7 · 0.7 · 0.7 = 0.3² · 0.7⁴,
P(000011) = 0.7 · 0.7 · 0.7 · 0.7 · 0.3 · 0.3 = 0.3² · 0.7⁴,
P(010010) = 0.7 · 0.3 · 0.7 · 0.7 · 0.3 · 0.7 = 0.3² · 0.7⁴.
It should be apparent that the probability of each outcome for which Y = 2
is the product of j = 2 factors of p = 0.3 and n−j = 4 factors of 1−p = 0.7.
Furthermore, the number of such outcomes is the number of ways of choosing
j = 2 successes from a total of n = 6 trials. Thus,
f(2) = P(Y = 2) = \binom{6}{2} 0.3² 0.7⁴
for the specific example in question and the general formula for the binomial
pmf is
f(j) = P(Y = j) = \binom{n}{j} p^j (1 − p)^{n−j}.
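The R function dbinom evaluates this pmf directly; as a check, here is the n = 6, p = 0.3, j = 2 case worked out above.

choose(6, 2) * 0.3^2 * 0.7^4          # the formula: approximately 0.3241
dbinom(x = 2, size = 6, prob = 0.3)   # the same value from dbinom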
It follows, of course, that the general formula for the binomial cdf is
F(k) = P(Y ≤ k) = ∑_{j=0}^{k} P(Y = j) = ∑_{j=0}^{k} f(j) = ∑_{j=0}^{k} \binom{n}{j} p^j (1 − p)^{n−j}.   (4.7)
Except for very small numbers of trials, direct calculation of (4.7) is
rather tedious. Fortunately, tables of the binomial cdf for selected values of
n and p are widely available, as is computer software for evaluating (4.7). In
the examples that follow, we will evaluate (4.7) using the R function pbinom.
As the following examples should make clear, the trick to evaluating bi-
nomial probabilities is to write them in expressions that only involve prob-
abilities of the form P(Y ≤ k).
Example 4.15 In 10 trials with success probability 0.5, what is the
probability that no more than 4 successes will be observed?
Here, n = 10, p = 0.5, and we want to calculate
P(Y ≤ 4) = F(4).
We do so in R as follows:
> pbinom(4,size=10,prob=.5)
[1] 0.3769531
Example 4.16 In 12 trials with success probability 0.3, what is the
probability that more than 6 successes will be observed?
Here, n = 12, p = 0.3, and we want to calculate
P(Y > 6) = 1 − P(Y ≤ 6) = 1 − F(6).
We do so in R as follows:
> 1-pbinom(6,12,.3)
[1] 0.03860084
Example 4.17 In 15 trials with success probability 0.6, what is the
probability that at least 5 but no more than 10 successes will be observed?
Here, n = 15, p = 0.6, and we want to calculate
P(5 ≤ Y ≤ 10) = P(Y ≤ 10) − P(Y ≤ 4) = F(10) − F(4).
We do so in R as follows:
> pbinom(10,15,.6)-pbinom(4,15,.6)
[1] 0.7733746
Example 4.18 In 20 trials with success probability 0.9, what is the
probability that exactly 16 successes will be observed?
Here, n = 20, p = 0.9, and we want to calculate
P(Y = 16) = P(Y ≤ 16) − P(Y ≤ 15) = F(16) − F(15).
We do so in R as follows:
> pbinom(16,20,.9)-pbinom(15,20,.9)
[1] 0.08977883
Example 4.19 In 81 trials with success probability 0.64, what is the
probability that the proportion of observed successes will be between 60 and
70 percent?
Here, n = 81, p = 0.64, and we want to calculate
P(0.6 < Y/81 < 0.7) = P(0.6 · 81 < Y < 0.7 · 81)
= P(48.6 < Y < 56.7)
= P(49 ≤ Y ≤ 56)
= P(Y ≤ 56) − P(Y ≤ 48)
= F(56) − F(48).
We do so in R as follows:
> pbinom(56,81,.64)-pbinom(48,81,.64)
[1] 0.6416193
Many practical situations can be modelled using a binomial distribution.
Doing so typically requires one to perform the following steps.
1. Identify what constitutes a Bernoulli trial and what constitutes a suc-
cess. Verify or assume that the trials are mutually independent with a
common probability of success.
2. Identify the number of trials (n) and the common probability of success
(p).
3. Identify the event whose probability is to be calculated.
4. Calculate the probability of the event in question, e.g., by using the
pbinom function in R.
Example 4.20 RD Airlines flies planes that seat 58 passengers. Years
of experience have revealed that 20 percent of the persons who purchase tickets
fail to claim their seat. (Such persons are called “no-shows”.) Because of
this phenomenon, RD routinely overbooks its flights, i.e., RD typically sells
more than 58 tickets per flight. If more than 58 passengers show, then the
“extra” passengers are “bumped” to another flight. Suppose that RD sells 64
tickets for a certain flight from Washington to New York. How might RD
estimate the probability that at least one passenger will have to be bumped?
1. Each person who purchased a ticket must decide whether or not to
claim his or her seat. This decision represents a Bernoulli trial, for
which we will declare a decision to claim the seat a success. Strictly
speaking, the Bernoulli trials in question are neither mutually inde-
pendent nor identically distributed. Some individuals, e.g., families,
travel together and make a common decision as to whether or not
to claim their seats. Furthermore, some travellers are more likely to
change their plans than others. Nevertheless, absent more detailed in-
formation, we should be able to compute an approximate answer by
assuming that the total number of persons who claim their seats has
a binomial distribution.
2. The problem specifies that n = 64 persons have purchased tickets.
Appealing to past experience, we assume that the probability that
each person will show is p = 1 − 0.2 = 0.8.
3. At least one passenger will have to be bumped if more than 58 passen-
gers show, so the desired probability is
P(Y > 58) = 1 − P(Y ≤ 58) = 1 − F(58).
4. The necessary calculation can be performed in R as follows:
> 1-pbinom(58,64,.8)
[1] 0.006730152
4.5 Exercises
1. Suppose that a weighted die is tossed. Let X denote the number
of dots that appear on the upper face of the die, and suppose that
P(X = x) = (7 − x)/20 for x = 1, 2, 3, 4, 5 and P(X = 6) = 0.
Determine each of the following:
(a) The probability mass function of X.
(b) The cumulative distribution function of X.
(c) The expected value of X.
(d) The variance of X.
(e) The standard deviation of X.
2. Suppose that a jury of 12 persons is to be selected from a pool of 25
persons who were called for jury duty. The pool comprises 12 retired
persons, 6 employed persons, 5 unemployed persons, and 2 students.
Assuming that each person is equally likely to be selected, answer the
following:
(a) What is the probability that both students will be selected?
(b) What is the probability that the jury will contain exactly twice
as many retired persons as employed persons?
3. When casting four astragali, a throw that results in four different
uppermost sides is called a venus. (See Section 1.4.) Suppose that
four astragali, {A, B, C, D} each have the following probabilities of
producing the four possible uppermost faces: P(1) = P(6) = 0.1,
P(3) = P(4) = 0.4.
(a) Suppose that we write A = 1 to indicate the event that A produces
side 1, etc. Compute P(A = 1, B = 3, C = 4, D = 6).
(b) Compute P(A = 1, B = 6, C = 3, D = 4).
(c) What is the probability that one throw of these four astragali will
produce a venus?
Hint: See Exercise 2.5.3.
(d) For k = 2, k = 3, and k = 100, what is the probability that k
throws of these four astragali will produce a run of k venuses?
4. Suppose that each of five astragali have the probabilities specified in
the previous exercise. When throwing these five astragali,
(a) What is the probability of obtaining the throw of child-eating
Cronos, i.e., of obtaining three fours and two sixes?
(b) What is the probability of obtaining the throw of Saviour Zeus,
i.e., of obtaining one one, two threes, and two fours?
Hint: See Exercise 2.5.4.
5. Koko (a cat) is trying to catch a mouse who lives under Susan’s house.
The mouse has two exits, one outside and one inside, and randomly
selects the outside exit 60% of the time. Each midnight, the mouse
emerges for a constitutional. If Koko waits outside and the mouse
chooses the outside exit, then Koko has a 20% chance of catching the
mouse. If Koko waits inside, then there is a 30% chance that he will fall
asleep. However, if he stays awake and the mouse chooses the inside
exit, then Koko has a 40% chance of catching the mouse.
(a) Is Koko more likely to catch the mouse if he waits inside or out-
side? Why?
(b) If Koko decides to wait outside each midnight, then what is the
probability that he will catch the mouse within a week (no more
than 7 nights)?
6. Three urns each contain ten gems:
• Urn 1 contains 6 rubies and 4 emeralds.
• Urn 2r contains 8 rubies and 2 emeralds.
• Urn 2e contains 4 rubies and 6 emeralds.
The following procedure is used to select two gems. First, one gem is
drawn at random from urn 1. If this first gem is a ruby, then a second
gem is drawn at random from urn 2r; however, if the first gem is an
emerald, then the second gem is drawn at random from urn 2e.
(a) Construct a tree diagram that describes this procedure.
(b) What is the probability that a ruby is obtained on the second
draw?
(c) Suppose that the second gem is a ruby. What then is the proba-
bility that the first gem was also a ruby?
(d) Suppose that this procedure is independently replicated three
times. What is the probability that a ruby is obtained on the
second draw exactly once?
(e) Suppose that this procedure is independently replicated three
times and that a ruby is obtained on the second draw each time.
What then is the probability that the first gem was a ruby each
time?
7. Arlen is planning a dinner party at which he will be able to accommo-
date seven guests. From past experience, he knows that each person
invited to the party will accept his invitation with probability 0.5. He
also knows that each person who accepts will actually attend with
probability 0.8. Suppose that Arlen invites twelve people. Assuming
that they behave independently of one another, what is the probability
that he will end up with more guests than he can accommodate?
8. Hotels that host conferences routinely overbook their rooms because
some people who plan to attend conferences fail to arrive. A common
assumption is that 10 percent of the hotel rooms reserved by conference
attendees will not be claimed. In contrast, only 4 percent of the per-
sons who reserve hotel rooms for the annual Joint Statistical Meetings
(JSM) fail to claim them.
Suppose that a certain hotel has 100 rooms. Incorrectly believing that
statisticians behave like normal people, the hotel accepts 110 room
reservations for JSM. What is the probability that the hotel will have
to turn away statisticians who have reserved rooms?
9. A small liberal arts college receives applications for admission from
1000 high school seniors. The college has dormitory space for a fresh-
man class of 95 students and will have to arrange for off-campus hous-
ing for any additional freshmen. In previous years, an average of 64
percent of the students that the college has accepted have elected to
attend another school. Clearly the college should accept more than
95 students, but its administration does not want to take too big a
chance that it will have to accommodate more than 95 students. Af-
ter some deliberation, the administrators decide to accept 225 students.
Answer the following questions as well as you can with the information
provided.
(a) How many freshmen do you expect that the college will have to
accommodate?
(b) What is the probability that the college will have to arrange
for some freshmen to live off-campus?
10. In NCAA tennis matches, line calls are made by the players. If an um-
pire is observing the match, then a player can challenge an opponent’s
call. The umpire will either affirm or overrule the challenged call. In
one of their recent team matches, the William & Mary women’s tennis
team challenged 38 calls by their opponents. The umpires overruled 12
of the challenged calls. This struck Nina and Delphine as significant,
as it is their impression that approximately 20 percent of all challenged
calls in NCAA tennis matches are overruled. Let us assume that their
impression is correct.
(a) What is the probability that chance variation would result in at
least 12 of 38 challenged calls being overruled?
(b) Suppose that the William & Mary women’s tennis team plays 25
team matches next year and challenges exactly 38 calls in each
match. (In fact, the number of challenged calls varies from match
to match.) What is the probability that they will play at least one
team match in which at least 12 challenged calls are overruled?
11. The Association for Research and Enlightenment (ARE) in Virginia
Beach, VA, offers daily demonstrations of a standard technique for
testing extrasensory perception (ESP). A “sender” is seated before a
box on which one of five symbols (plus, square, star, circle, wave) can
be illuminated. A random mechanism selects symbols in such a way
that each symbol is equally likely to be illuminated. When a symbol
is illuminated, the sender concentrates on it and a “receiver” attempts
to identify which symbol has been selected. The receiver indicates a
symbol on the receiver’s box, which sends a signal to the sender’s box
that cues it to select and illuminate another symbol. This process of
illuminating, sending, and receiving a symbol is repeated 25 times.
Each selection of a symbol to be illuminated is independent of the
others. The receiver’s score (for a set of 25 trials) is the number of
symbols that s/he correctly identifies. For the purpose of this exercise,
please suppose that ESP does not exist.
(a) How many symbols should we expect the receiver to identify cor-
rectly?
(b) The ARE considers a score of more than 7 matches to be indica-
tive of ESP. What is the probability that the receiver will provide
such an indication?
(c) The ARE provides all audience members with scoring sheets and
invites them to act as receivers. Suppose that, as on August 31,
2002, there are 21 people in attendance: 1 volunteer sender, 1
volunteer receiver, and 19 additional receivers in the audience.
What is the probability that at least one of the 20 receivers will
attain a score indicative of ESP?
12. Mike teaches two sections of Applied Statistics each year for thirty
years, for a total of 1500 students. Each of his students spins a penny
89 times and counts the number of Heads. Assuming that each of
these 1500 pennies has P(Heads) = 0.3 for a single spin, what is the
probability that Mike will encounter at least one student who observes
no more than two Heads?
Chapter 5
Continuous Random
Variables
5.1 A Motivating Example
Some of the concepts that were introduced in Chapter 4 pose technical diffi-
culties when the random variable is not discrete. In this section, we illustrate
some of these difficulties by considering a random variable X whose set of
possible values is the unit interval, i.e., X(S) = [0, 1]. Specifically, we ask
the following question:
What probability distribution formalizes the notion of “equally
likely” outcomes in the unit interval [0, 1]?
When studying finite sample spaces in Section 3.3, we formalized the
notion of “equally likely” by assigning the same probability to each individual
outcome in the sample space. Thus, if S = {s1, . . . , sN }, then P({si}) =
1/N. This construction sufficed to define probabilities of events: if E ⊂ S,
then
E = {s_{i1}, . . . , s_{ik}};
and consequently
P(E) = P( ∪_{j=1}^{k} {s_{ij}} ) = Σ_{j=1}^{k} P({s_{ij}}) = Σ_{j=1}^{k} 1/N = k/N.
Unfortunately, the present example does not work out quite so neatly.
How should we assign P(X = 0.5)? Of course, we must have 0 ≤ P(X =
0.5) ≤ 1. If we try P(X = 0.5) = ε for any real number ε > 0, then
a difficulty arises. Because we are assuming that every value in the unit
interval is equally likely, it must be that P(X = x) = ε for every x ∈ [0, 1].
Consider the event
E = { 1/2, 1/3, 1/4, . . . }.          (5.1)
Then we must have
P(E) = P( ∪_{j=2}^{∞} {1/j} ) = Σ_{j=2}^{∞} P({1/j}) = Σ_{j=2}^{∞} ε = ∞,          (5.2)
which we cannot allow. Hence, we must assign a probability of zero to the
outcome x = 0.5 and, because all outcomes are equally likely, P(X = x) = 0
for every x ∈ [0, 1].
Because every x ∈ [0, 1] is a possible outcome, our conclusion that P(X =
x) = 0 is initially somewhat startling. However, it is a mistake to identify
impossibility with zero probability. In Section 3.2, we established that the
impossible event (empty set) has probability zero, but we did not say that
it is the only such event. To avoid confusion, we now emphasize:
If an event is impossible, then it necessarily has probability zero;
however, having probability zero does not necessarily mean that
an event is impossible.
If P(X = x) = ε = 0, then the calculation in (5.2) reveals that the event
defined by (5.1) has probability zero. Furthermore, there is nothing special
about this particular event—the probability of any countable event must be
zero! Hence, to obtain positive probabilities, e.g., P(X ∈ [0, 1]) = 1, we
must consider events whose cardinality is more than countable.
Consider the events [0, 0.5] and [0.5, 1]. Because all outcomes are equally
likely, these events must have the same probability, i.e.,
P (X ∈ [0, 0.5]) = P (X ∈ [0.5, 1]) .
Because [0, 0.5] ∪ [0.5, 1] = [0, 1] and P(X = 0.5) = 0, we have
1 = P (X ∈ [0, 1]) = P (X ∈ [0, 0.5]) + P (X ∈ [0.5, 1]) − P (X = 0.5)
= P (X ∈ [0, 0.5]) + P (X ∈ [0.5, 1]) .
Combining these equations, we deduce that each event has probability 1/2.
This is an intuitively pleasing conclusion: it says that, if outcomes are equally
likely, then the probability of each subinterval equals the proportion of the
entire interval occupied by the subinterval. In mathematical notation, our
conclusion can be expressed as follows:
Suppose that X(S) = [0, 1] and each x ∈ [0, 1] is equally likely.
If 0 ≤ a ≤ b ≤ 1, then P (X ∈ [a, b]) = b − a.
Notice that statements like P(X ∈ [0, 0.5]) = 0.5 cannot be deduced from
knowledge that each P(X = x) = 0. To construct a probability distribution
for this situation, it is necessary to assign probabilities to intervals, not just
to individual points. This fact reveals the reason that, in Section 3.2, we
introduced the concept of an event and insisted that probabilities be assigned
to events rather than to outcomes.
The probability distribution that we have constructed is called the con-
tinuous uniform distribution on the interval [0, 1], denoted Uniform[0, 1]. If
X ∼ Uniform[0, 1], then the cdf of X is easily computed:
• If y < 0, then
F(y) = P(X ≤ y)
= P (X ∈ (−∞, y])
= 0.
• If y ∈ [0, 1], then
F(y) = P(X ≤ y)
= P (X ∈ (−∞, 0)) + P (X ∈ [0, y])
= 0 + (y − 0)
= y.
• If y > 1, then
F(y) = P(X ≤ y)
= P (X ∈ (−∞, 0)) + P (X ∈ [0, 1]) + P (X ∈ (1, y))
= 0 + (1 − 0) + 0
= 1.
This function is plotted in Figure 5.1.
Figure 5.1: The cumulative distribution function of X ∼ Uniform(0, 1).
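The cdf derived above can also be evaluated with R's punif function, which computes the cdf of a continuous uniform distribution. A quick check of the three cases (a minimal sketch using only base R; the arguments min and max specify the interval):
> punif(c(-0.5, 0, 0.25, 0.5, 1, 1.5), min=0, max=1)
[1] 0.00 0.00 0.25 0.50 1.00 1.00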
What about the pmf of X? In Section 4.1, we defined the pmf of a discrete
random variable by f(x) = P(X = x); we then used the pmf to calculate the
probabilities of arbitrary events. In the present situation, P(X = x) = 0 for
every x, so the pmf is not very useful. Instead of representing the probabilities
of individual points, we need to represent the probabilities of intervals.
Consider the function
f(x) =   0   if x ∈ (−∞, 0)
         1   if x ∈ [0, 1]
         0   if x ∈ (1, ∞),          (5.3)
which is plotted in Figure 5.2. Notice that f is constant on X(S) = [0, 1], the
set of equally likely possible values, and vanishes elsewhere. If 0 ≤ a ≤ b ≤ 1,
then the area under the graph of f between a and b is the area of a rectangle
with sides b − a (horizontal direction) and 1 (vertical direction). Hence, the
area in question is
(b − a) · 1 = b − a = P(X ∈ [a, b]),
so that the probabilities of intervals can be determined from f. In the next
section, we will base our definition of continuous random variables on this
observation.
Figure 5.2: The probability density function of X ∼ Uniform(0, 1).
5.2 Basic Concepts
Consider the graph of a function f : ℜ → ℜ, as depicted in Figure 5.3. Our
interest is in the area of the shaded region. This region is bounded by the
graph of f, the horizontal axis, and vertical lines at the specified endpoints
a and b. We denote this area by Area[a,b](f). Our intent is to identify such
areas with the probabilities that random variables assume certain values.
For a very few functions, such as the one defined in (5.3), it is possible
to determine Area[a,b](f) by elementary geometric calculations. For most
functions, some knowledge of calculus is required to determine Area[a,b](f).
Because we assume no previous knowledge of calculus, we will not be con-
cerned with such calculations. Nevertheless, for the benefit of those readers
who know some calculus, we find it helpful to borrow some notation and
write
Area[a,b](f) = ∫_a^b f(x) dx.          (5.4)
Readers who have no knowledge of calculus should interpret (5.4) as a def-
inition of its right-hand side, which is pronounced “the integral of f from
a to b”. Readers who are familiar with the Riemann (or Lebesgue) integral
should interpret this notation in its conventional sense.
We now introduce an alternative to the probability mass function.
Definition 5.1 A probability density function (pdf) is a function f : ℜ → ℜ
such that
1. f(x) ≥ 0 for every x ∈ ℜ.
2. Area(−∞,∞)(f) = ∫_{−∞}^{∞} f(x) dx = 1.
Figure 5.3: A continuous probability density function.
Notice that the definition of a pdf is analogous to the definition of a pmf.
Each is nonnegative and assigns unit probability to the set of possible values.
The only difference is that summation in the definition of a pmf is replaced
with integration in the case of a pdf.
Definition 5.1 was made without reference to a random variable—we now
use it to define a new class of random variables.
Definition 5.2 A random variable X is continuous if there exists a proba-
bility density function f such that
P(X ∈ [a, b]) = ∫_a^b f(x) dx.
It is immediately apparent from this definition that the cdf of a continuous
random variable X is
F(y) = P(X ≤ y) = P(X ∈ (−∞, y]) = ∫_{−∞}^{y} f(x) dx.          (5.5)
Equation (5.5) should be compared to equation (4.1). In both cases,
the value of the cdf at y is represented as the accumulation of values of the
pmf/pdf at x ≤ y. The difference lies in the nature of the accumulating pro-
cess: summation for the discrete case (pmf), integration for the continuous
case (pdf).
Remark for Calculus Students: By applying the Fundamen-
tal Theorem of Calculus to (5.5), we deduce that the pdf of a
continuous random variable is the derivative of its cdf:
(d/dy) F(y) = (d/dy) ∫_{−∞}^{y} f(x) dx = f(y).
Remark on Notation: It may strike the reader as curious that
we have used f to denote both the pmf of a discrete random
variable and the pdf of a continuous random variable. However,
as our discussion of their relation to the cdf is intended to sug-
gest, they play analogous roles. In advanced, measure-theoretic
courses on probability, one learns that our pmf and pdf are ac-
tually two special cases of one general construction.
Likewise, the concept of expectation for continuous random variables
is analogous to the concept of expectation for discrete random variables.
Because P(X = x) = 0 if X is a continuous random variable, the notion of
a probability-weighted average is not very useful in the continuous setting.
However, if X is a discrete random variable, then P(X = x) = f(x) and
a probability-weighted average is identical to a pmf-weighted average. The
notion of a pmf-weighted average is easily extended to the continuous setting:
if X is a continuous random variable, then we introduce a pdf-weighted
average of the possible values of X, where averaging is accomplished by
replacing summation with integration.
Definition 5.3 Suppose that X is a continuous random variable with prob-
ability density function f. Then the expected value of X is
µ = EX = ∫_{−∞}^{∞} x f(x) dx,
assuming that this quantity exists.
If the function g : ℜ → ℜ is such that Y = g(X) is a random variable, then
it can be shown that
EY = Eg(X) = ∫_{−∞}^{∞} g(x) f(x) dx,
assuming that this quantity exists. In particular,
Definition 5.4 If µ = EX exists and is finite, then the variance of X is
σ² = Var X = E(X − µ)² = ∫_{−∞}^{∞} (x − µ)² f(x) dx.
Thus, for discrete and continuous random variables, the expected value is
the pmf/pdf-weighted average of the possible values and the variance is the
pmf/pdf-weighted average of the squared deviations of the possible values
from the expected value.
Because calculus is required to compute the expected value and variance
of most continuous random variables, our interest in these concepts lies not
in computing them but in understanding what information they convey. We
will return to this subject in Chapter 6.
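Readers who want a numerical check without doing the calculus by hand can let R approximate such integrals. The following sketch (assuming only base R's dunif and integrate functions) computes EX and Var X for X ∼ Uniform[0, 1]; the exact values are 1/2 and 1/12 ≈ 0.0833.
> f <- function(x) dunif(x, min=0, max=1)                # pdf of X ~ Uniform[0,1]
> mu <- integrate(function(x) x * f(x), 0, 1)$value      # EX
> mu
[1] 0.5
> integrate(function(x) (x - mu)^2 * f(x), 0, 1)$value   # Var X
[1] 0.08333333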
5.3 Elementary Examples
In this section we consider some examples of continuous random variables
for which probabilities can be calculated without recourse to calculus.
Example 5.1 What is the probability that a battery-powered wristwatch
will stop with its minute hand positioned between 10 and 20 minutes past the
hour?
To answer this question, let X denote the number of minutes past the
hour to which the minute hand points when the watch stops. Then the
possible values of X are X(S) = [0, 60) and it is reasonable to assume that
each value is equally likely. We must compute P(X ∈ (10, 20)). Because
these values occupy one sixth of the possible values, it should be obvious
that the answer is going to be 1/6.
To obtain the answer using the formal methods of probability, we require
a generalization of the Uniform[0, 1] distribution that we studied in Section
5.1. The pdf that describes the notion of equally likely values in the interval
[0, 60) is
f(x) =   0      if x ∈ (−∞, 0)
         1/60   if x ∈ [0, 60)
         0      if x ∈ [60, ∞).          (5.6)
To check that f is really a pdf, observe that f(x) ≥ 0 for every x ∈ ℜ and
that
Area[0,60)(f) = (60 − 0) · (1/60) = 1.
Notice the analogy between the pdfs (5.6) and (5.3). The present pdf defines
the continuous uniform distribution on the interval [0, 60); thus, we describe
the present situation by writing X ∼ Uniform[0, 60). To calculate the spec-
ified probability, we must determine the area of the shaded region in Figure
5.4, i.e.,
P(X ∈ (10, 20)) = Area(10,20)(f) = (20 − 10) · (1/60) = 1/6.
Figure 5.4: The probability density function of X ∼ Uniform[0, 60).
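The same probability can be obtained in R from the Uniform[0, 60) cdf; a one-line check using the punif function:
> punif(20, min=0, max=60) - punif(10, min=0, max=60)
[1] 0.1666667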
Example 5.2 Consider two battery-powered watches. Let X1 denote
the number of minutes past the hour at which the first watch stops and let
X2 denote the number of minutes past the hour at which the second watch
stops. What is the probability that the larger of X1 and X2 will be between
30 and 50?
Here we have two independent random variables, each distributed as
Uniform[0, 60), and a third random variable,
Y = max(X1, X2).
Let F denote the cdf of Y . We want to calculate
P(30 < Y < 50) = F(50) − F(30).
We proceed to derive the cdf of Y . It is evident that Y (S) = [0, 60),
so F(y) = 0 if y < 0 and F(y) = 1 if y ≥ 60. If y ∈ [0, 60), then (by the
independence of X1 and X2)
F(y) = P(Y ≤ y) = P (max(X1, X2) ≤ y) = P (X1 ≤ y, X2 ≤ y)
= P(X1 ≤ y) · P(X2 ≤ y) = ((y − 0)/(60 − 0)) · ((y − 0)/(60 − 0)) = y²/3600.
Thus, the desired probability is
P(30 < Y < 50) = F(50) − F(30) = 50²/3600 − 30²/3600 = 4/9.
Figure 5.5: The probability density function for Example 5.2.
In preparation for Example 5.3, we claim that the pdf of Y is
f(y) =   0        if y ∈ (−∞, 0)
         y/1800   if y ∈ [0, 60)
         0        if y ∈ [60, ∞),
which is graphed in Figure 5.5. To check that f is really a pdf, observe that
f(y) ≥ 0 for every y ∈ ℜ and that
Area[0,60)(f) = (1/2)(60 − 0)(60/1800) = 1.
To check that f is really the pdf of Y, observe that f(y) = 0 if y ∉ [0, 60)
and that, if y ∈ [0, 60), then
P(Y ∈ [0, y)) = P(Y ≤ y) = F(y) = y²/3600 = (1/2)(y − 0)(y/1800) = Area[0,y)(f).
If the pdf had been specified, then instead of deriving the cdf we would
have simply calculated
P(30 < Y < 50) = Area(30,50)(f)
by any of several convenient geometric arguments.
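For instance, because F(y) = (y/60)² on [0, 60), the answer can be checked in R either from the cdf or by simulating many pairs of watches; a sketch (the simulated estimate varies slightly from run to run):
> punif(50, 0, 60)^2 - punif(30, 0, 60)^2       # F(50) - F(30) = 4/9
[1] 0.4444444
> x1 <- runif(100000, 0, 60)
> x2 <- runif(100000, 0, 60)
> mean(pmax(x1, x2) > 30 & pmax(x1, x2) < 50)   # should be close to 4/9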
Example 5.3 Consider two battery-powered watches. Let X1 denote
the number of minutes past the hour at which the first watch stops and let
X2 denote the number of minutes past the hour at which the second watch
stops. What is the probability that the sum of X1 and X2 will be between 45
and 75?
Again we have two independent random variables, each distributed as
Uniform[0, 60), and a third random variable,
Z = X1 + X2.
We want to calculate
P(45 < Z < 75) = P (Z ∈ (45, 75)) .
It is apparent that Z(S) = [0, 120). Although we omit the derivation, it
can be determined mathematically that the pdf of Z is
f(z) =   0                  if z ∈ (−∞, 0)
         z/3600             if z ∈ [0, 60)
         (120 − z)/3600     if z ∈ [60, 120)
         0                  if z ∈ [120, ∞).
Figure 5.6: The probability density function for Example 5.3.
This pdf is graphed in Figure 5.6, in which it is apparent that the area of
the shaded region is
P(45 < Z < 75) = P(Z ∈ (45, 75)) = Area(45,75)(f)
= 1 − (1/2)(45 − 0)(45/3600) − (1/2)(120 − 75)((120 − 75)/3600)
= 1 − 45²/60² = 7/16.
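Readers who wish to verify this triangular-density calculation can simulate it; the following R sketch draws many pairs of stopping times and estimates the probability, which should come out close to 7/16 = 0.4375 (the estimate varies slightly from run to run):
> z <- runif(100000, 0, 60) + runif(100000, 0, 60)
> mean(z > 45 & z < 75)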
5.4 Normal Distributions
We now introduce the most important family of distributions in probability
or statistics, the familiar bell-shaped curve.
Definition 5.5 A continuous random variable X is normally distributed
with mean µ and variance σ2 > 0, denoted X ∼ Normal(µ, σ2), if the pdf of
X is
f(x) = (1/(√(2π) σ)) exp[ −(1/2)((x − µ)/σ)² ].          (5.7)
Although we will not make extensive use of (5.7), a great many useful
properties of normal distributions can be deduced directly from it. Most of
the following properties can be discerned in Figure 5.7.
Figure 5.7: The probability density function of X ∼ Normal(µ, σ²).
1. f(x) > 0. It follows that, for any nonempty interval (a, b),
P (X ∈ (a, b)) = Area(a,b)(f) > 0,
and hence that X(S) = (−∞, +∞).
2. f is symmetric about µ, i.e., f(µ + x) = f(µ − x).
3. f(x) decreases as |x − µ| increases. In fact, the decrease is very rapid.
We express this by saying that f has very light tails.
4. P(µ − σ < X < µ + σ) ≈ 0.683.
5. P(µ − 2σ < X < µ + 2σ) ≈ 0.954.
6. P(µ − 3σ < X < µ + 3σ) ≈ 0.997.
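Properties 4–6 do not depend on the particular values of µ and σ. A quick numerical check for the case µ = 0, σ = 1, using R's pnorm function (the standard normal cdf, used again in the examples below):
> pnorm(1) - pnorm(-1)
[1] 0.6826895
> pnorm(2) - pnorm(-2)
[1] 0.9544997
> pnorm(3) - pnorm(-3)
[1] 0.9973002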
Notice that there is no one normal distribution, but a 2-parameter fam-
ily of uncountably many normal distributions. In fact, if we plot µ on a
horizontal axis and σ > 0 on a vertical axis, then there is a distinct normal
distribution for each point in the upper half-plane. However, Properties 4–6
above, which hold for all choices of µ and σ, suggest that there is a funda-
mental equivalence between different normal distributions. It turns out that,
if one can compute probabilities for any one normal distribution, then one
can compute probabilities for any other normal distribution. In anticipation
of this fact, we distinguish one normal distribution to serve as a reference
distribution:
Definition 5.6 The standard normal distribution is Normal(0, 1).
The following result is of enormous practical value:
Theorem 5.1 If X ∼ Normal(µ, σ²), then
Z = (X − µ)/σ ∼ Normal(0, 1).
The transformation Z = (X − µ)/σ is called conversion to standard units.
Detailed tables of the standard normal cdf are widely available, as is
computer software for calculating specified values. Combined with Theorem
5.1, this availability allows us to easily compute probabilities for arbitrary
normal distributions. In the following examples, we let Φ denote the cdf of
Z ∼ Normal(0, 1) and we make use of the R function pnorm.
Example 5.4a If X ∼ Normal(1, 4), then what is the probability that
X assumes a value no more than 3?
Here, µ = 1, σ = 2, and we want to calculate
P(X ≤ 3) = P( (X − µ)/σ ≤ (3 − µ)/σ ) = P( Z ≤ (3 − 1)/2 = 1 ) = Φ(1).
We do so in R as follows:
> pnorm(1)
[1] 0.8413447
Remark The R function pnorm accepts optional arguments that specify
a mean and standard deviation. Thus, in Example 5.4a, we could directly
evaluate P(X ≤ 3) as follows:
> pnorm(3,mean=1,sd=2)
[1] 0.8413447
This option, of course, is not available if one is using a table of the standard
normal cdf. Because the transformation to standard units plays such a
fundamental role in probability and statistics, we will emphasize computing
normal probabilities via the standard normal distribution.
Example 5.4b If X ∼ Normal(−1, 9), then what is the probability that
X assumes a value of at least −7?
Here, µ = −1, σ = 3, and we want to calculate
P(X ≥ −7) = P( (X − µ)/σ ≥ (−7 − µ)/σ ) = P( Z ≥ (−7 + 1)/3 = −2 )
= 1 − P(Z < −2) = 1 − Φ(−2).
We do so in R as follows:
> 1-pnorm(-2)
[1] 0.9772499
Example 5.4c If X ∼ Normal(2, 16), then what is the probability that
X assumes a value between 0 and 10?
Here, µ = 2, σ = 4, and we want to calculate
P(0 < X < 10) = P( (0 − µ)/σ < (X − µ)/σ < (10 − µ)/σ )
= P( −0.5 = (0 − 2)/4 < Z < (10 − 2)/4 = 2 )
= P(Z < 2) − P(Z < −0.5) = Φ(2) − Φ(−0.5).
We do so in R as follows:
> pnorm(2)-pnorm(-.5)
[1] 0.6687123
Example 5.4d If X ∼ Normal(−3, 25), then what is the probability
that |X| assumes a value greater than 10?
Here, µ = −3, σ = 5, and we want to calculate
P(|X| > 10) = P(X > 10 or X < −10)
= P(X > 10) + P(X < −10)
= P( (X − µ)/σ > (10 − µ)/σ ) + P( (X − µ)/σ < (−10 − µ)/σ )
= P( Z > (10 + 3)/5 = 2.6 ) + P( Z < (−10 + 3)/5 = −1.4 )
= 1 − Φ(2.6) + Φ(−1.4).
We do so in R as follows:
> 1-pnorm(2.6)+pnorm(-1.4)
[1] 0.08541785
Example 5.4e If X ∼ Normal(4, 16), then what is the probability that
X2 assumes a value less than 36?
Here, µ = 4, σ = 4, and we want to calculate
P(X² < 36) = P(−6 < X < 6)
= P( (−6 − µ)/σ < (X − µ)/σ < (6 − µ)/σ )
= P( −2.5 = (−6 − 4)/4 < Z < (6 − 4)/4 = 0.5 )
= P(Z < 0.5) − P(Z < −2.5) = Φ(0.5) − Φ(−2.5).
We do so in R as follows:
> pnorm(.5)-pnorm(-2.5)
[1] 0.6852528
We defer an explanation of why the family of normal distributions is so
important until Section 8.3, concluding the present section with the following
useful result:
Theorem 5.2 If X1 ∼ Normal(µ1, σ1²) and X2 ∼ Normal(µ2, σ2²) are independent, then
X1 + X2 ∼ Normal(µ1 + µ2, σ1² + σ2²).
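Theorem 5.2 is easy to illustrate by simulation. The following sketch (base R only; the parameter values are chosen arbitrarily) draws many independent pairs and compares the sample mean and variance of X1 + X2 with µ1 + µ2 and σ1² + σ2²:
> x1 <- rnorm(100000, mean=1, sd=2)    # X1 ~ Normal(1, 4)
> x2 <- rnorm(100000, mean=-3, sd=5)   # X2 ~ Normal(-3, 25)
> mean(x1 + x2)                        # should be close to 1 + (-3) = -2
> var(x1 + x2)                         # should be close to 4 + 25 = 29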
5.5 Normal Sampling Distributions
A number of important probability distributions can be derived by consider-
ing various functions of normal random variables. These distributions play
important roles in statistical inference. They are rarely used to describe
data; rather, they arise when analyzing data that is sampled from a normal
distribution. For this reason, they are sometimes called sampling distribu-
tions.
This section collects some definitions of and facts about several important
sampling distributions. It is not important to read this section until you
encounter these distributions in later chapters; however, it is convenient to
collect this material in one easy-to-find place.
Chi-Squared Distributions Suppose that Z1, . . . , Zn ∼ Normal(0, 1)
and consider the continuous random variable
Y = Z1² + · · · + Zn².
Because each Zi² ≥ 0, the set of possible values of Y is Y(S) = [0, ∞). We
are interested in the distribution of Y .
The distribution of Y belongs to a family of probability distributions
called the chi-squared family. This family is indexed by a single real-valued
parameter, ν ∈ [1, ∞), called the degrees of freedom parameter. We will
denote a chi-squared distribution with ν degrees of freedom by χ2(ν). Figure
5.8 displays the pdfs of several chi-squared distributions.
Figure 5.8: The probability density functions of Y ∼ χ²(ν) for ν = 1, 3, 5.
The following fact is quite useful:
Theorem 5.3 If Z1, . . . , Zn ∼ Normal(0, 1) and Y = Z1² + · · · + Zn², then Y ∼ χ²(n).
In theory, this fact allows one to compute the probabilities of events defined
by values of Y , e.g., P(Y > 4.5). In practice, this requires evaluating the
cdf of χ2(ν), a function for which there is no simple formula. Fortunately,
there exist efficient algorithms for numerically evaluating these cdfs. The
R function pchisq returns values of the cdf of any specified chi-squared
distribution. For example, if Y ∼ χ2(2), then P(Y > 4.5) is
> 1-pchisq(4.5,df=2)
[1] 0.1053992
Finally, if Zi ∼ Normal(0, 1), then
EZi² = Var Zi + (EZi)² = 1.
It follows that
EY = E( Σ_{i=1}^{n} Zi² ) = Σ_{i=1}^{n} EZi² = Σ_{i=1}^{n} 1 = n;
thus,
Corollary 5.1 If Y ∼ χ2(n), then EY = n.
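Corollary 5.1 (and Theorem 5.3) can be checked by simulation; a brief sketch using base R:
> mean(rchisq(100000, df=5))     # sample mean of chi-squared(5) draws; close to 5
> z <- matrix(rnorm(5 * 100000), ncol=5)
> mean(rowSums(z^2))             # sums of 5 squared standard normals; also close to 5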
Student’s t Distributions Now let Z ∼ Normal(0, 1) and Y ∼ χ2(ν) be
independent random variables and consider the continuous random variable
T = Z / √(Y/ν).
The set of possible values of T is T(S) = (−∞, ∞). We are interested in the
distribution of T.
Definition 5.7 The distribution of T is called a t distribution with ν degrees
of freedom. We will denote this distribution by t(ν).
The standard normal distribution is symmetric about the origin; i.e., if
Z ∼ Normal(0, 1), then −Z ∼ Normal(0, 1). It follows that T = Z/√(Y/ν)
and −T = −Z/√(Y/ν) have the same distribution. Hence, if p is the pdf of
T, then it must be that p(t) = p(−t). Thus, t pdfs are symmetric about the
origin, just like the standard normal pdf.
Figure 5.9 displays the pdfs of two t distributions. They can be dis-
tinguished by virtue of the fact that the variance of t(ν) decreases as ν
increases. It may strike you that t pdfs closely resemble normal pdfs. In
fact, the standard normal pdf is a limiting case of the t pdfs:
Figure 5.9: The probability density functions of T ∼ t(ν) for ν = 5, 30.
Theorem 5.4 Let Fν denote the cdf of t(ν) and let Φ denote the cdf of Normal(0, 1). Then
lim_{ν→∞} Fν(t) = Φ(t)
for every t ∈ (−∞, ∞).
Thus, when ν is sufficiently large (ν > 40 is a reasonable rule of thumb), t(ν)
is approximately Normal(0, 1) and probabilities involving the former can be
approximated by probabilities involving the latter.
In R, it is just as easy to calculate t(ν) probabilities as it is to calculate
Normal(0, 1) probabilities. The R function pt returns values of the cdf of
any specified t distribution. For example, if T ∼ t(14), then P(T ≤ −1.5) is
> pt(-1.5,df=14)
[1] 0.07791266
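Theorem 5.4 is also easy to see numerically: as the degrees of freedom increase, values of pt approach the corresponding value of pnorm. A one-line check:
> pt(-1.5, df=c(5, 15, 50, 500))   # approaches pnorm(-1.5) = 0.0668072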
Fisher’s F Distributions Finally, let Y1 ∼ χ2(ν1) and Y2 ∼ χ2(ν2) be
independent random variables and consider the continuous random variable
F = (Y1/ν1) / (Y2/ν2).
Because Yi ≥ 0, the set of possible values of F is F(S) = [0, ∞). We are
interested in the distribution of F.
Definition 5.8 The distribution of F is called an F distribution with ν1
and ν2 degrees of freedom. We will denote this distribution by F(ν1, ν2).
It is customary to call ν1 the “numerator” degrees of freedom and ν2 the
“denominator” degrees of freedom.
Figure 5.10 displays the pdfs of several F distributions.
Figure 5.10: The probability density functions of F ∼ F(ν1, ν2) for (ν1, ν2) = (2, 12), (4, 20), (9, 10).
There is an important relation between t and F distributions. To antic-
ipate it, suppose that Z ∼ Normal(0, 1) and Y2 ∼ χ2(ν2) are independent
random variables. Then Y1 = Z² ∼ χ²(1), so
T = Z / √(Y2/ν2) ∼ t(ν2)
and
T² = Z² / (Y2/ν2) = (Y1/1) / (Y2/ν2) ∼ F(1, ν2).
More generally,
Theorem 5.5 If T ∼ t(ν), then T² ∼ F(1, ν).
The R function pf returns values of the cdf of any specified F distribution.
For example, if F ∼ F(2, 27), then P(F > 2.5) is
> 1-pf(2.5,df1=2,df2=27)
[1] 0.1008988
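Theorem 5.5 can be verified numerically: if T ∼ t(ν), then P(T² ≤ c) = P(−√c ≤ T ≤ √c), which should agree with the value of the F(1, ν) cdf at c. A sketch reusing ν = 14 and the value pt(-1.5, df=14) = 0.07791266 computed above:
> pf(1.5^2, df1=1, df2=14)           # P(T^2 <= 2.25)
> pt(1.5, df=14) - pt(-1.5, df=14)   # same probability; both equal 1 - 2*0.07791266 = 0.8441747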
5.6 Exercises
1. In this problem you will be asked to examine two equations. Several
symbols from each equation will be identified. Your task will be to
decide which symbols represent real numbers and which symbols rep-
resent functions. If a symbol represents a function, then you should
state the domain and the range of that function.
Recall: A function is a rule of assignment. The set of labels that the
function might possibly assign is called the range of the function; the
set of objects to which labels are assigned is called the domain. For
example, when I grade your test, I assign a numeric value to your
name. Grading is a function that assigns real numbers (the range) to
students (the domain).
(a) In the equation p = P (Z > 1.96), please identify each of the
following symbols as a real number or a function:
i. p
ii. P
iii. Z
(b) In the equation σ² = E(X − µ)², please identify each of the fol-
lowing symbols as a real number or a function:
i. σ
ii. E
iii. X
iv. µ
2. Suppose that X is a continuous random variable with probability den-
sity function (pdf) f defined as follows:
f(x) =   0          if x < 1
         2(x − 1)   if 1 ≤ x ≤ 2
         0          if x > 2.
(a) Graph f.
(b) Verify that f is a pdf.
(c) Compute P(1.50 < X < 1.75).
3. Consider the function f : ℜ → ℜ defined by
f(x) =   0          if x < 0
         cx         if 0 < x < 1.5
         c(3 − x)   if 1.5 < x < 3
         0          if x > 3,
where c is an undetermined constant.
(a) For what value of c is f a probability density function?
(b) Suppose that a continuous random variable X has probability
density function f. Compute EX. (Hint: Draw a picture of the
pdf.)
(c) Compute P(X > 2).
(d) Suppose that Y ∼ Uniform(0, 3). Which random variable has the
larger variance, X or Y ? (Hint: Draw a picture of the two pdfs.)
(e) Determine and graph the cumulative distribution function of X.
4. Imagine throwing darts at a circular dart board, B. Let us measure
the dart board in units for which the radius of B is 1, so that the area
of B is π. Suppose that the darts are thrown in such a way that they
are certain to hit a point in B, and that each point in B is equally
likely to be hit. Thus, if A ⊂ B, then the probability of hitting a point
in A is
P(A) = area(A)/area(B) = area(A)/π.
Define the random variable X to be the distance from the center of B
to the point that is hit.
(a) What are the possible values of X?
(b) Compute P(X ≤ 0.5).
(c) Compute P(0.5 < X ≤ 0.7).
(d) Determine and graph the cumulative distribution function of X.
(e) [Optional—for those who know a little calculus.] Determine and
graph the probability density function of X.
5. Imagine throwing darts at a triangular dart board,
B = {(x, y) : 0 ≤ y ≤ x ≤ 1} .
Suppose that the darts are thrown in such a way that they are certain
to hit a point in B, and that each point in B is equally likely to be
hit. Define the random variable X to be the value of the x-coordinate
of the point that is hit, and define the random variable Y to be the
value of the y-coordinate of the point that is hit.
(a) Draw a picture of B.
(b) Compute P(X ≤ 0.5).
(c) Determine and graph the cumulative distribution function of X.
(d) Are X and Y independent?
6. Let X be a normal random variable with mean µ = −5 and standard
deviation σ = 10. Compute the following:
(a) P(X < 0)
(b) P(X > 5)
(c) P(−3 < X < 7)
(d) P(|X + 5| < 10)
(e) P(|X − 3| > 2)
Chapter 6
Quantifying Population Attributes
The distribution of a random variable is a mathematical abstraction of the
possible outcomes of an experiment. Indeed, having identified a random
variable of interest, we will often refer to its distribution as the population. If
one’s goal is to represent an entire population, then one can hardly do better
than to display its entire probability mass or density function. Usually,
however, one is interested in specific attributes of a population. This is true
if only because it is through specific attributes that one comprehends the
entire population, but it is also easier to draw inferences about a specific
population attribute than about the entire population. Accordingly, this
chapter examines several population attributes that are useful in statistics.
We will be especially concerned with measures of centrality and mea-
sures of dispersion. The former provide quantitative characterizations of
where the “middle” of a population is located; the latter provide quanti-
tative characterizations of how widely the population is spread. We have
already introduced one important measure of centrality, the expected value
of a random variable (the population mean, µ), and one important measure
of dispersion, the standard deviation of a random variable (the population
standard deviation, σ). This chapter discusses these measures in greater
depth and introduces other, complementary measures.
6.1 Symmetry
We begin by considering the following question:
Where is the “middle” of a normal distribution?
It is quite evident from Figure 5.7 that there is only one plausible answer to
this question: if X ∼ Normal(µ, σ2), then the “middle” of the distribution
of X is µ.
Let f denote the pdf of X. To understand why µ is the only plausible
middle of f, recall a property of f that we noted in Section 5.4: for any x,
f(µ+x) = f(µ−x). This property states that f is symmetric about µ. It is
the property of symmetry that restricts the plausible locations of “middle”
to the central value µ.
To generalize the above example of a measure of centrality, we introduce
an important qualitative property that a population may or may not possess:
Definition 6.1 Let X be a continuous random variable with probability den-
sity function f. If there exists a value θ ∈ ℜ such that
f(θ + x) = f(θ − x)
for every x ∈ ℜ, then X is a symmetric random variable and θ is its center
of symmetry.
We have already noted that X ∼ Normal(µ, σ2) has center of symmetry µ.
Another example of symmetry is illustrated in Figure 6.1: X ∼ Uniform[a, b]
has center of symmetry (a + b)/2.
Figure 6.1: X ∼ Uniform[a, b] has center of symmetry (a + b)/2.
For symmetric random variables, the center of symmetry is the only
plausible measure of centrality—of where the “middle” of the distribution
is located. Symmetry will play an important role in our study of statistical
inference. Our primary concern will be with continuous random variables,
but the concept of symmetry can be used with other random variables as
well. Here is a general definition:
Definition 6.2 Let X be a random variable. If there exists a value θ ∈ ℜ
such that the random variables X − θ and θ − X have the same distribution,
then X is a symmetric random variable and θ is its center of symmetry.
Suppose that we attempt to compute the expected value of a symmetric
random variable X with center of symmetry θ. Thinking of the expected
value as a weighted average, we see that each θ+x will be weighted precisely
as much as the corresponding θ − x. Thus, if the expected value exists
(there are a few pathological random variables for which the expected value
is undefined), then it must equal the center of symmetry, i.e., EX = θ. Of
course, we have already seen that this is the case for X ∼ Normal(µ, σ2) and
for X ∼ Uniform[a, b].
6.2 Quantiles
In this section we introduce population quantities that can be used for a
variety of purposes. As in Section 6.1, these quantities are most easily un-
derstood in the case of continuous random variables:
Definition 6.3 Let X be a continuous random variable and let α ∈ (0, 1).
If q = q(X; α) is such that P(X < q) = α and P(X > q) = 1 − α, then q is
called an α quantile of X.
If we express the probabilities in Definition 6.3 as percentages, then we see
that q is the 100α percentile of the distribution of X.
Example 6.1 Suppose that X ∼ Uniform[a, b] has pdf f, depicted in
Figure 6.2. Then q is the value in (a, b) for which
    α = P(X < q) = Area[a,q](f) = (q − a) · 1/(b − a),
i.e., q = a + α(b − a). This expression is easily interpreted: to the lower
endpoint a, add 100α% of the distance b − a to obtain the 100α percentile.
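This formula is easily checked against the R function qunif, which returns quantiles of Uniform distributions. A small sketch, using the illustrative values a = 1, b = 5, and α = 0.3:

> a <- 1; b <- 5; alpha <- 0.3
> a + alpha*(b-a)               # the formula for the 100*alpha percentile of Uniform[a,b]
[1] 2.2
> qunif(alpha,min=a,max=b)
[1] 2.2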
Figure 6.2: A quantile of a Uniform distribution.
Example 6.2 Suppose that X has pdf

    f(x) = x/2 for x ∈ [0, 2] and f(x) = 0 otherwise,

depicted in Figure 6.3. Then q is the value in (0, 2) for which

    α = P(X < q) = Area[0,q](f) = (1/2) · (q − 0) · (q/2 − 0) = q^2/4,

i.e., q = 2√α.
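A quick numerical check of this calculation (a sketch using R's integrate function and the illustrative value α = 0.4) confirms that the area under f to the left of q = 2√α is indeed α:

> f <- function(x) ifelse(x >= 0 & x <= 2, x/2, 0)   # the pdf of Example 6.2
> alpha <- 0.4
> q <- 2*sqrt(alpha)                                 # the claimed alpha quantile
> integrate(f,lower=0,upper=q)$value
[1] 0.4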
Example 6.3 Suppose that X ∼ Normal(0, 1) has cdf Φ. Then q is the
value in (−∞, ∞) for which α = P(X < q) = Φ(q), i.e., q = Φ−1(α). Unlike
the previous examples, we cannot compute q by elementary calculations.
Fortunately, the R function qnorm computes quantiles of normal distribu-
tions. For example, we compute the α = 0.95 quantile of X as follows:
> qnorm(.95)
[1] 1.644854
Example 6.4 Suppose that X has pdf
    f(x) = 1/2 for x ∈ [0, 1] ∪ [2, 3] and f(x) = 0 otherwise,
depicted in Figure 6.4. Notice that P(X ∈ [0, 1]) = 0.5 and P(X ∈ [2, 3]) =
0.5. If α ∈ (0, 0.5), then we can use the same reasoning that we employed
Figure 6.3: A quantile of another distribution.
Figure 6.4: A distribution for which the α = 0.5 quantile is not unique.
in Example 6.1 to deduce that q = 2α. Similarly, if α ∈ (0.5, 1), then
q = 2 + 2(α − 0.5) = 2α + 1. However, if α = 0.5, then we encounter an
ambiguity: the equalities P(X < q) = 0.5 and P(X > q) = 0.5 hold for any
q ∈ [1, 2]. Accordingly, any q ∈ [1, 2] is an α = 0.5 quantile of X. Thus,
quantiles are not always unique.
To avoid confusion when a quantile is not unique, it is nice to have a
convention for selecting one of the possible quantile values. In the case that
α = 0.5, there is a universal convention:
Definition 6.4 The midpoint of the interval of all values of the α = 0.5
quantile is called the population median.
In Example 6.4, the population median is q = 1.5.
Working with the quantiles of a continuous random variable X is straight-
forward because P(X = q) = 0 for any choice of q. This means that P(X <
q) + P(X > q) = 1; hence, if P(X < q) = α, then P(X > q) = 1 − α.
Furthermore, it is always possible to find a q for which P(X < q) = α. This
is not the case if X is discrete.
Example 6.5 Let X be a discrete random variable that assumes values
in the set {1, 2, 3} with probabilities p(1) = 0.4, p(2) = 0.4, and p(3) = 0.2.
What is the median of X?
Imagine accumulating probability as we move from −∞ to ∞. At what
point do we find that we have acquired half of the total probability? The
answer is that we pass from having 40% of the probability to having 80% of
the probability as we occupy the point q = 2. It makes sense to declare this
value to be the median of X.
Here is another argument that appeals to Definition 6.3. If q < 2, then
P(X > q) = 0.6 > 0.5. Hence, it would seem that the population median
should not be less than 2. Similarly, if q > 2, then P(X < q) = 0.8 > 0.5.
Hence, it would seem that the population median should not be greater than
2. We conclude that the population median should equal 2. But notice that
P(X < 2) = 0.4 < 0.5 and P(X > 2) = 0.2 < 0.5! We conclude that
Definition 6.3 will not suffice for discrete random variables. However, we
can generalize the reasoning that we have just employed as follows:
Definition 6.5 Let X be a random variable and let α ∈ (0, 1). If q =
q(X; α) is such that P(X < q) ≤ α and P(X > q) ≤ 1 − α, then q is called
an α quantile of X.
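For instance, for the random variable of Example 6.5 we can verify directly that q = 2 satisfies Definition 6.5 with α = 0.5 (a small sketch; the pmf values are those stated in the example):

> x <- c(1,2,3)
> p <- c(.4,.4,.2)
> sum(p[x < 2])   # P(X < 2) = 0.4 <= 0.5
[1] 0.4
> sum(p[x > 2])   # P(X > 2) = 0.2 <= 0.5
[1] 0.2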
The remainder of this section describes how quantiles are often used to
measure centrality and dispersion. The following three quantiles will be of
particular interest:
Definition 6.6 Let X be a random variable. The first, second, and third
quartiles of X, denoted q1(X), q2(X), and q3(X), are the α = 0.25, α = 0.50,
and α = 0.75 quantiles of X. The second quartile is also called the median
of X.
6.2.1 The Median of a Population
If X is a symmetric random variable with center of symmetry θ, then
    P(X < θ) = P(X > θ) = (1 − P(X = θ))/2 ≤ 1/2
and q2(X) = θ. Even if X is not symmetric, the median of X is an excellent
way to define the “middle” of the population. Many statistical procedures
use the median as a measure of centrality.
Example 6.6 One useful property of the median is that it is rather in-
sensitive to the influence of extreme values that occur with small probability.
For example, let Xk denote a discrete random variable that assumes values
in {−1, 0, 1, 10k} for k = 1, 2, 3, . . .. Suppose that Xk has the following pmf:
x pk(x)
−1 0.19
0 0.60
1 0.19
10k 0.02
Most of the probability (98%) is concentrated on the values {−1, 0, 1}. This
probability is centered at x = 0. A small amount of probability is con-
centrated at a large value, x = 10, 100, 1000, . . .. If we want to treat these
large values as aberrations (perhaps our experiment produces a physically
meaningful value x ∈ {−1, 0, 1} with probability 0.98, but our equipment
malfunctions and produces a physically meaningless value x = 10k with
probability 0.02), then we might prefer to declare that x = 0 is the central
value of X. In fact, no matter how large we choose k, the median refuses to
be distracted by the aberrant value: P(X < 0) = 0.19 and P(X > 0) = 0.21,
so the median of X is q2(X) = 0.
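The same insensitivity appears in simulated samples from Xk. The following sketch (the sample size n = 1000 and the seed are arbitrary choices) draws from the pmf above for k = 1 and k = 6; the sample median typically remains 0, while the sample mean is dragged far from 0 when k is large:

> draw <- function(n,k) sample(c(-1,0,1,10^k),size=n,replace=T,
+                              prob=c(.19,.60,.19,.02))
> set.seed(1)
> median(draw(1000,k=1))   # typically 0
> median(draw(1000,k=6))   # still typically 0
> mean(draw(1000,k=6))     # typically on the order of 10^4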
6.2.2 The Interquartile Range of a Population
Now we turn our attention from the problem of measuring centrality to the
problem of measuring dispersion. Can we use quantiles to quantify how
widely spread are the values of a random variable? A natural approach is
to choose two values of α and compute the corresponding quantiles. The
distance between these quantiles is a measure of dispersion.
To avoid comparing apples and oranges, let us agree on which two values
of α we will choose. Statisticians have developed a preference for α = 0.25
and α = 0.75, in which case the corresponding quantiles are the first and
third quartiles.
Definition 6.7 Let X be a random variable with first and third quartiles q1
and q3. The interquartile range of X is the quantity
iqr(X) = q3 − q1.
If X is a continuous random variable, then P(q1 < X < q3) = 0.5, so the
interval from q1 to q3, whose length is the interquartile range, contains the
central 50% of the probability.
Like the median, the interquartile range is rather insensitive to the in-
fluence of extreme values that occur with small probability. In Example 6.6,
the central 50% of the probability is concentrated on the single value x = 0.
Hence, the interquartile range is 0 − 0 = 0, regardless of where the aberrant
2% of the probability is located.
6.3 The Method of Least Squares
Let us return to the case of a symmetric random variable X, in which case
the “middle” of the distribution is unambiguously the center of symmetry
θ. Given this measure of centrality, how might we construct a measure of
dispersion? One possibility is to measure how far a “typical” value of X
lies from its central value, i.e., to compute E|X − θ|. This possibility leads
to several remarkably fertile approaches to describing both dispersion and
centrality.
Given a designated central value c and another value x, we say that the
absolute deviation of x from c is |x − c| and that the squared deviation of x
from c is (x−c)2. The magnitude of a typical absolute deviation is E|X −c|
and the magnitude of a typical squared deviation is E(X − c)2. A natural
approach to measuring centrality is to choose a value of c that typically
results in small deviations, i.e., to choose c either to minimize E|X − c| or
to minimize E(X − c)2. The second possibility is a simple example of the
method of least squares.
Measuring centrality by minimizing the magnitude of a typical absolute
or squared deviation results in two familiar quantities:
Theorem 6.1 Let X be a random variable with population median q2 and
population mean µ = EX. Then
1. The value of c that minimizes E|X − c| is c = q2.
2. The value of c that minimizes E(X − c)2 is c = µ.
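A small numerical illustration of Theorem 6.1 (a sketch; the skewed pmf below is chosen only for illustration): minimizing the expected absolute and squared deviations over a grid of candidate centers recovers the population median and the population mean, respectively.

> x <- c(0,1,2,10); p <- c(.3,.3,.3,.1)                  # a skewed pmf
> cand <- seq(0,10,by=.01)                               # candidate centers c
> abs.dev <- sapply(cand,function(m) sum(p*abs(x-m)))    # E|X - c| for each candidate
> sq.dev <- sapply(cand,function(m) sum(p*(x-m)^2))      # E(X - c)^2 for each candidate
> cand[which.min(abs.dev)]   # the population median, 1
> cand[which.min(sq.dev)]    # the population mean, 1.9
> sum(p*x)                   # the population mean computed directly, 1.9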
It follows that medians are naturally associated with absolute deviations
and that means are naturally associated with squared deviations. Having
discussed the former in Section 6.2.1, we now turn to the latter.
6.3.1 The Mean of a Population
Imagine creating a physical model of a probability distribution by distribut-
ing weights along the length of a board. The location of the weights are the
values of the random variable and the weights represent the probabilities of
those values. After gluing the weights in place, we position the board atop
a fulcrum. How must the fulcrum be positioned in order that the board be
perfectly balanced? It turns out that one should position the fulcrum at the
mean of the probability distribution. For this reason, the expected value of
a random variable is sometimes called its center of mass.
Thus, like the population median, the population mean has an appealing
interpretation that commends its use as a measure of centrality. If X is a
symmetric random variable with center of symmetry θ, then µ = EX = θ
and q2 = q2(X) = θ, so the population mean and the population median
agree. In general, this is not the case. If X is not symmetric, then one should
think carefully about whether one is interested in the population mean and
the population median. Of course, computing both measures and examining
the discrepancy between them may be highly instructive. In particular, if
EX ≠ q2(X), then X is not a symmetric random variable.
In Section 6.2.1 we noted that the median is rather insensitive to the
influence of extreme values that occur with small probability. The mean
lacks this property. In Example 6.6,

    EXk = −0.19 + 0.00 + 0.19 + 10^k · 0.02 = 2 · 10^(k−2),

which equals 0.2 if k = 1, 2 if k = 2, 20 if k = 3, 200 if k = 4, and so on.
No matter how reluctantly, the population mean follows the aberrant value
toward infinity as k increases.
6.3.2 The Standard Deviation of a Population
Suppose that X is a random variable with EX = µ and Var X = σ2. If we
adopt the method of least squares, then we obtain c = µ as our measure
of centrality, in which case the magnitude of a typical squared deviation is
E(X −µ)2 = σ2, the population variance. The variance measures dispersion
in squared units. For example, if X measures length in meters, then Var X
is measured in meters squared. If, as in Section 6.2.2, we prefer to measure
dispersion in the original units of measurement, then we must take the square
root of the variance. Accordingly, we will emphasize the population standard
deviation, σ, as a measure of dispersion.
Just as it is natural to use the median and the interquartile range to-
gether, so is it natural to use the mean and the standard deviation together.
In the case of a symmetric random variable, the median and the mean agree.
However, the interquartile range and the standard deviation measure disper-
sion in two fundamentally different ways. To gain insight into their relation
to each other, suppose that X ∼ Normal(0, 1), in which case the population
standard deviation is σ = 1. We use R to compute iqr(X):
> qnorm(.75)-qnorm(.25)
[1] 1.348980
We have derived a useful fact: the interquartile range of a normal random
variable is approximately 1.35 standard deviations. If we encounter a random
variable for which this is not the case, then that random variable is not
normally distributed.
Like the mean, the standard deviation is sensitive to the influence of
extreme values that occur with small probability. Consider Example 6.6. The
variance of Xk is

    σk^2 = EXk^2 − (EXk)^2 = (0.19 + 0.00 + 0.19 + 100^k · 0.02) − (2 · 10^(k−2))^2
         = 0.38 + 2 · 100^(k−1) − 4 · 100^(k−2) = 0.38 + 196 · 100^(k−2),

so σ1 = √2.34, σ2 = √196.38, σ3 = √19600.38, and so on. The population
standard deviation tends toward infinity as the aberrant value tends toward
infinity.
6.4 Exercises
1. Refer to the random variable X defined in Exercise 2 of Chapter 5.
Compute the following two quantities: q2(X), the population median;
and iqr(X), the population interquartile range.
2. Consider the function g : ℜ → ℜ defined by
    g(x) = 0       x < 0,
           x       x ∈ [0, 1],
           1       x ∈ [1, 2],
           3 − x   x ∈ [2, 3],
           0       x > 3.
Let f(x) = cg(x), where c is an undetermined constant.
(a) For what value of c is f a probability density function?
(b) Suppose that a continuous random variable X has probability
density function f. Compute P(1.5 < X < 2.5).
(c) Compute EX.
(d) Let F denote the cumulative distribution function of X. Compute
F(1).
(e) Determine the 0.90 quantile of f.
3. Suppose that X is a continuous random variable with probability den-
sity function
    f(x) = 0           x < 0,
           x           x ∈ (0, 1),
           (3 − x)/4   x ∈ (1, 3),
           0           x > 3.
(a) Compute q2(X), the population median.
(b) Which is greater, q2(X) or EX? Explain your reasoning.
(c) Compute P(0.5 < X < 1.5).
(d) Compute iqr(X), the population interquartile range.
4. Consider the dart-throwing experiment described in Exercise 5.6.5 and
compute the following quantities:
(a) q2(X)
(b) q2(Y )
(c) iqr(X)
(d) iqr(Y )
5. Lynn claims that Lulu is the cutest dog in the world. Slightly more
circumspect, Michael allows that Lulu is “one in a million.” Seizing
the opportunity to revel in Lulu’s charm, Lynn devises a procedure
for measuring CCQ (canine cuteness quotient), which she calibrates
so that CCQ ∼ Normal(100, 400). Assuming that Michael is correct,
what is Lulu’s CCQ score?
6. A random variable X ∼ Uniform(5, 15) has population mean µ =
EX = 10 and population variance σ2 = Var X = (15 − 5)2/12 = 25/3. Let Y denote a
normal random variable with the same mean and variance.
(a) Consider X. What is the ratio of its interquartile range to its
standard deviation?
(b) Consider Y . What is the ratio of its interquartile range to its
standard deviation?
7. Identify each of the following statements as True or False. Briefly
explain each of your answers.
(a) For every symmetric random variable X, the median of X equals
the average of the first and third quartiles of X.
(b) For every random variable X, the interquartile range of X is
greater than the standard deviation of X.
(c) For every random variable X, the expected value of X lies between
the first and third quartile of X.
(d) If the standard deviation of a random variable equals zero, then
so does its interquartile range.
(e) If the median of a random variable equals its expected value, then
the random variable is symmetric.
8. For each of the following random variables, discuss whether the median
or the mean would be a more useful measure of centrality:
(a) The annual income of U.S. households.
(b) The lifetime of 75-watt light bulbs.
9. The R function qbinom returns quantiles of the binomial distribution.
For example, quartiles of X ∼ Binomial(n = 3; p = 0.5) can be com-
puted as follows:
> alpha <- c(.25,.5,.75)
> qbinom(alpha,size=3,prob=.5)
[1] 1 1 2
Notice that X is a symmetric random variable with center of symmetry
θ = 1.5, but qbinom computes q2(X) = 1. This reveals that R may
produce unexpected results when it computes the quantiles of discrete
random variables. By experimenting with various choices of n and p,
try to discover a rule according to which qbinom computes quartiles of
the binomial distribution.
Chapter 7
Data
Chapters 3–6 developed mathematical tools for studying populations. Ex-
periments are performed for the purpose of obtaining information about a
population that is imperfectly understood. Experiments produce data, the
raw material from which statistical procedures draw inferences about the
population under investigation.
The probability distribution of a random variable X is a mathematical
abstraction of an experimental procedure for sampling from a population.
When we perform the experiment, we observe one of the possible values of
X. To distinguish an observed value of a random variable from the ran-
dom variable itself, we designate random variables by uppercase letters and
observed values by corresponding lowercase letters.
Example 7.1 A coin is tossed and Heads is observed. The mathemat-
ical abstraction of this experiment is X ∼ Bernoulli(p) and the observed
value of X is x = 1.
We will be concerned with experiments that are replicated a fixed number
of times. By replication, we mean that each repetition of the experiment is
performed under identical conditions and that the repetitions are mutually
independent. Mathematically, we write X1, . . . , Xn ∼ P. Let xi denote the
observed value of Xi. The set of observed values, ~x = {x1, . . . , xn}, is called a sample.
This chapter introduces several useful techniques for extracting informa-
tion from samples. This information will be used to draw inferences about
populations (for example, to guess the value of the population mean) and
to assess assumptions about populations (for example, to decide whether
or not the population can plausibly be modelled by a normal distribution).
Drawing inferences about population attributes (especially means) is the pri-
mary subject of subsequent chapters, which will describe specific procedures
for drawing specific types of inferences. However, deciding which procedure
is appropriate often involves assessing the validity of certain statistical as-
sumptions. The methods described in this chapter will be our primary tools
for making such assessments.
To assess whether or not an assumption is plausible, one must be able
to investigate what happens when the assumption holds. For example, if
a scientist needs to decide whether or not it is plausible that her sample
was drawn from a normal distribution, then she needs to be able to recog-
nize normally distributed data. For this reason, the samples studied in this
chapter were generated under carefully controlled conditions, by computer
simulation. This allows us to investigate how samples drawn from specified
distributions should behave, thereby providing a standard against which to
compare experimental data for which the true distribution can never be
known. Fortunately, R provides several convenient functions for simulating
random sampling.
Example 7.2 Consider the experiment of tossing a fair die n = 20
times. We can simulate this experiment as follows:
> SampleSpace <- c(1,2,3,4,5,6)
> sample(x=SampleSpace,size=20,replace=T)
[1] 1 6 3 2 2 3 5 3 6 4 3 2 5 3 2 2 3 2 4 2
Example 7.3 Consider the experiment of drawing a sample of size
n = 5 from Normal(2, 3). We can simulate this experiment as follows:
> rnorm(5,mean=2,sd=sqrt(3))
[1] 1.3274812 0.5901923 2.5881013 1.2222812 3.4748139
7.1 The Plug-In Principle
We will employ a general methodology for relating samples to populations.
In Chapters 3–6 we developed a formidable apparatus for studying popu-
lations (probability distributions). We would like to exploit this apparatus
fully. Given a sample, we will pretend that the sample is a finite population
(discrete probability distribution) and then we will use methods for studying
finite populations to learn about the sample. This approach is sometimes
called the Plug-In Principle.
The Plug-In Principle employs a fundamental construction:
Definition 7.1 Let ~x = (x1, . . . , xn) be a sample. The empirical probability distribution associated with ~x, denoted P̂n, is the discrete probability distribution defined by assigning probability 1/n to each {xi}.
Notice that, if a sample contains several copies of the same numerical value,
then each copy is assigned probability 1/n. This is illustrated in the following
example.
Example 7.2 (continued) A fair die is rolled n = 20 times, resulting
in the sample

    ~x = {1, 6, 3, 2, 2, 3, 5, 3, 6, 4, 3, 2, 5, 3, 2, 2, 3, 2, 4, 2}. (7.1)
The empirical distribution P̂20 is the discrete distribution that assigns the
following probabilities:
xi #{xi} P̂20({xi})
1 1 0.05
2 7 0.35
3 6 0.30
4 2 0.10
5 2 0.10
6 2 0.10
Notice that, although the true probabilities are P({xi}) = 1/6, the empirical
probabilities range from 0.05 to 0.35. The fact that P̂20 differs from P is
an example of sampling variation. Statistical inference is concerned with
determining what the empirical distribution (the sample) tells us about the
true distribution (the population).
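In R, the empirical probabilities in this table can be computed directly from the sample (a sketch, assuming the 20 observed values in (7.1) are stored in the vector x):

> x <- c(1,6,3,2,2,3,5,3,6,4,3,2,5,3,2,2,3,2,4,2)
> table(x)/length(x)
x
   1    2    3    4    5    6
0.05 0.35 0.30 0.10 0.10 0.10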
The empirical distribution, P̂n, is an intuitively appealing approximation
of the actual probability distribution, P, from which the sample was drawn.
Notice that the empirical probability of any event A is just
    P̂n(A) = #{xi ∈ A} · (1/n),
the observed frequency with which A occurs in the sample. Because the em-
pirical distribution is an authentic probability distribution, all of the meth-
ods that we developed for studying (discrete) distributions are available for
studying samples. For example,
Definition 7.2 The empirical cdf, usually denoted F̂n, is the cdf associated with P̂n, i.e.,

    F̂n(y) = P̂n(X ≤ y) = #{xi ≤ y} / n.
The empirical cdf of sample (7.1) is graphed in Figure 7.1.
Figure 7.1: An empirical cdf.
In R, one can graph the empirical cdf of a sample x with the following
command:
> plot.ecdf(x)
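The empirical cdf can also be evaluated at particular values, either directly from its definition or with the ecdf function (a sketch, again assuming that sample (7.1) is stored in x):

> mean(x <= 3.5)   # proportion of observations less than or equal to 3.5
[1] 0.7
> Fhat <- ecdf(x)  # ecdf returns the empirical cdf as a function
> Fhat(3.5)
[1] 0.7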
7.2 Plug-In Estimates of Mean and Variance
Population quantities defined by expected values are easily estimated by the
plug-in principle. For example, suppose that X1, . . . , Xn ∼ P and that we
observe a sample ~x = {x1, . . . , xn}. Let µ = EXi denote the population
mean. Then
Definition 7.3 The plug-in estimate of µ, denoted µ̂n, is the mean of the
empirical distribution:

    µ̂n = Σ_{i=1}^n xi · (1/n) = (1/n) Σ_{i=1}^n xi = x̄n.

This quantity is called the sample mean.
Example 7.2 (continued) The population mean is

    µ = EXi = 1 · (1/6) + 2 · (1/6) + 3 · (1/6) + 4 · (1/6) + 5 · (1/6) + 6 · (1/6)
            = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5.

The sample mean of sample (7.1) is

    µ̂20 = x̄20 = 1 · (1/20) + 6 · (1/20) + · · · + 4 · (1/20) + 2 · (1/20)
              = 1 × 0.05 + 2 × 0.35 + 3 × 0.30 + 4 × 0.10 + 5 × 0.10 + 6 × 0.10
              = 3.15.

Notice that µ̂20 ≠ µ. This is another example of sampling variation.
The variance can be estimated in the same way. Let σ2 = Var Xi denote
the population variance; then
Definition 7.4 The plug-in estimate of σ2, denoted σ̂2_n, is the variance of
the empirical distribution:

    σ̂2_n = Σ_{i=1}^n (xi − µ̂n)^2 · (1/n) = (1/n) Σ_{i=1}^n (xi − x̄n)^2
          = (1/n) Σ_{i=1}^n xi^2 − ( (1/n) Σ_{i=1}^n xi )^2.
Notice that we do not refer to σ̂2_n as the sample variance. As will be discussed in Section 9.2.2, most authors designate another, equally plausible estimate of the population variance as the sample variance.
Example 7.2 (continued) The population variance is

    σ2 = EXi^2 − (EXi)^2 = (1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2)/6 − 3.5^2 = 35/12 ≈ 2.9167.

The plug-in estimate of the variance is

    σ̂2_20 = (1^2 × 0.05 + 2^2 × 0.35 + 3^2 × 0.30 + 4^2 × 0.10 + 5^2 × 0.10 + 6^2 × 0.10) − 3.15^2
           = 1.9275.

Again, notice that σ̂2_20 ≠ σ2, yet another example of sampling variation.
There are many ways to compute the preceding plug-in estimates using
R. Assuming that x contains the sample, here are two possibilities:
> n <- length(x)
> plug.mean <- sum(x)/n
> plug.var <- sum(x^2)/n - plug.mean^2
> plug.mean <- mean(x)
> plug.var <- mean(x^2) - plug.mean^2
7.3 Plug-In Estimates of Quantiles
Population quantities defined by quantiles can also be estimated by the plug-
in principle. Again, suppose that X1, . . . , Xn ∼ P and that we observe a
sample ~x = {x1, . . . , xn}. Then
Definition 7.5 The plug-in estimate of a population quantile is the corre-
sponding quantile of the empirical distribution. In particular, the sample
median is the median of the empirical distribution. The sample interquartile
range is the interquartile range of the empirical distribution.
Example 7.4 Consider the experiment of drawing a sample of size
n = 20 from Uniform(1, 5). This probability distribution has a population
median of 3 and a population interquartile range of 4 − 2 = 2. I simulated
this experiment (and listed the sample in increasing order) with the following
R command:
> x <- sort(runif(20,min=1,max=5))
This resulted in the following sample:
1.124600 1.161286 1.445538 1.828181 1.853359
1.934939 1.943951 2.107977 2.372500 2.448152
2.708874 3.297806 3.418913 3.437485 3.474940
3.698471 3.740666 4.039637 4.073617 4.195613
The sample median is
    (2.448152 + 2.708874)/2 = 2.578513,
which also can be computed with the following R command:
> median(x)
[1] 2.578513
Notice that the sample median does not exactly equal the population median.
This is another example of sampling variation.
To compute the sample interquartile range, we require the first and
third sample quartiles, i.e., the α = 0.25 and α = 0.75 sample quantiles.
We must now confront the fact that Definition 6.5 may not specify unique
quantile values. For the empirical distribution of the sample above, any
number in [1.853359, 1.934939] is a sample first quartile and any number in
[3.474940, 3.698471] is a sample third quartile.
The statistical community has not agreed on a convention for resolving
the ambiguity in the definition of quartiles. One natural and popular possi-
bility is to use the central value in each interval of possible quartiles. If we
adopt that convention here, then the sample interquartile range is
    (3.474940 + 3.698471)/2 − (1.853359 + 1.934939)/2 = 1.692556.
R adopts a slightly different convention, illustrated below. The following
command computes the 0.25 and 0.75 quantiles:
> quantile(x,probs=c(.25,.75))
25% 75%
1.914544 3.530823
The following command computes several useful sample quantities:
> summary(x)
Min. 1st Qu. Median Mean 3rd Qu. Max.
1.124600 1.914544 2.578513 2.715325 3.530823 4.195613
If we use the R definition of quantile, then the sample interquartile range
is 3.530823 − 1.914544 = 1.616279. Rather than typing the quartiles into R,
we can compute the sample interquartile range as follows:
> q <- as.vector(quantile(x,probs=c(.25,.75)))
> q[2]-q[1]
[1] 1.616279
This is sufficiently complicated that we might prefer to create a function
that computes the interquartile range of a sample:
> iqr <- function(x) {
+ q <- as.vector(quantile(x,probs=c(.25,.75)))
+ return(q[2]-q[1])
+ }
> iqr(x)
[1] 1.616279
Notice that the sample quantities do not exactly equal the population
quantities that they estimate, regardless of which convention we adopt for
defining quartiles. This is another example of sampling variation.
Used judiciously, sample quantiles can be extremely useful when trying
to discern various features of the population from which the sample was
drawn. The remainder of this section describes two graphical techniques for
assimilating and displaying sample quantile information.
7.3.1 Box Plots
Information about sample quartiles is often displayed visually, in the form
of a box plot. A box plot of a sample consists of a rectangle that extends
from the first to the third sample quartile, thereby drawing attention to the
central 50% of the data. Thus, the length of the rectangle equals the sample
interquartile range. The location of the sample median is also identified, and
its location within the rectangle often provides insight into whether or not
the population from which the sample was drawn is symmetric. Whiskers
extend from the ends of the rectangle, either to the extreme values of the data
or to 1.5 times the sample interquartile range, whichever is less. Values that
lie beyond the whiskers are called outliers and are individually identified.
Figure 7.2: A box plot of a sample from χ2(3).
Example 7.5 The pdf of the asymmetric distribution χ2(3) was graphed
in Figure 5.8. The following R commands draw a random sample of n = 100
observed values from this population, then construct a box plot of the sam-
ple:
> x <- rchisq(100,df=3)
> boxplot(x)
An example of a box plot produced by these commands is displayed in Figure
7.2. In this box plot, the numerical values in the sample are represented by
the vertical axis.
The third quartile of the box plot in Figure 7.2 is farther above the
median than the first quartile is below it. The short lower whisker extends
from the first quartile to the minimal value in the sample, whereas the long
upper whisker extends 1.5 interquartile ranges beyond the third quartile.
Furthermore, there are 4 outliers beyond the upper whisker. Once we learn
to discern these key features of the box plot, we can easily recognize that
the population from which the sample was drawn is not symmetric.
The frequency of outliers in a sample often provides useful diagnostic
information. Recall that, in Section 6.3, we computed that the interquartile
range of a normal distribution is 1.34898 standard deviations. A value is an
outlier if it lies more than
    z = 1.34898/2 + 1.5 · 1.34898 = 2.69796
standard deviations from the mean. Hence, the probability that an observa-
tion drawn from a normal distribution is an outlier is
> 2*pnorm(-2.69796)
[1] 0.006976582
and we would expect a sample drawn from a normal distribution to contain
approximately 7 outliers per 1000 observations. A sample that contains a
dramatically different proportion of outliers, as in Example 7.5, is not likely
to have been drawn from a normal distribution.
Box plots are especially useful for comparing several populations.
Example 7.6 We drew samples of 100 observations from three normal
populations: Normal(0, 1), Normal(2, 1), and Normal(1, 4). To attempt to
discern in the samples the various differences in population mean and stan-
dard deviation, we examined side-by-side box plots. This was accomplished
by the following R commands:
> z1 <- rnorm(100)
> z2 <- rnorm(100,mean=2,sd=1)
> z3 <- rnorm(100,mean=1,sd=2)
> boxplot(z1,z2,z3)
An example of the output of these commands is displayed in Figure 7.3.
7.3.2 Normal Probability Plots
Another powerful graphical technique that relies on quantiles are quantile-
quantile (QQ) plots, which plot the quantiles of one distribution against the
Figure 7.3: Box plots of samples from three normal distributions.
quantiles of another. QQ plots are used to compare the shapes of two distri-
butions, most commonly by plotting the observed quantiles of an empirical
distribution against the corresponding quantiles of a theoretical normal dis-
tribution. In this case, a QQ plot is often called a normal probability plot. If
the shape of the empirical distribution resembles a normal distribution, then
the points in a normal probability plot should tend to fall on a straight line.
If they do not, then we should be skeptical that the sample was drawn from
a normal distribution. Extracting useful information from normal probabil-
ity plots requires some practice, but the patient data analyst will be richly
rewarded.
Example 7.5 (continued) A normal probability plot of the sample generated in Example 7.5 against a theoretical normal distribution is displayed in Figure 7.4. This plot was created using the following R command:
> qqnorm(x)
Notice the systematic and asymmetric bending away from linearity in this
plot. In particular, the smaller quantiles are much closer to the central values
Figure 7.4: A normal probability plot of a sample from χ2(3).
than should be the case for a normal distribution. This suggests that this
sample was drawn from a nonnormal distribution that is skewed to the right.
Of course, we know that this sample was drawn from χ2(3), which is in fact
skewed to the right.
When using normal probability plots, one must guard against overinter-
preting slight departures from linearity. Remember: some departures from
linearity will result from sampling variation. Consequently, before drawing
definitive conclusions, the wise data analyst will generate several random
samples from the theoretical distribution of interest in order to learn how
much sampling variation is to be expected. Before dismissing the possibil-
ity that the sample in Example 7.5 was drawn from a normal distribution,
one should generate several normal samples of the same size for comparison.
The normal probability plots of four such samples are displayed in Figure
7.5. In none of these plots did the points fall exactly on a straight line.
However, upon comparing the normal probability plot in Figure 7.4 to the
normal probability plots in Figure 7.5, it is abundantly clear that the sample
in Example 7.5 was not drawn from a normal distribution.
Figure 7.5: Normal probability plots of four samples from Normal(0, 1).
7.4 Kernel Density Estimates
Suppose that ~x = {x1, . . . , xn} is a sample drawn from an unknown pdf f. Box plots and normal probability plots are extremely useful graphical techniques for discerning in ~x certain important attributes of f, e.g., centrality, dispersion, asymmetry, nonnormality. To discern more subtle features of f, we now ask if it is possible to reconstruct from ~x a pdf f̂n that approximates
f. This is a difficult problem, one that remains a vibrant topic of research
and about which little is said in introductory courses. However, using the
concept of the empirical distribution, one can easily motivate one of the most
popular techniques for nonparametric probability density estimation.
The logic of the empirical distribution is this: by assigning probability
1/n to each xi, one accumulates more probability in regions that produced
more observed values. However, because the entire amount 1/n is placed
exactly on the value xi, the resulting empirical distribution is necessarily
discrete. If the population from which the sample was drawn is discrete, then
the empirical distribution estimates the probability mass function. However,
if the population from which the sample was drawn is continuous, then all
possible values occur with zero probability. In this case, there is nothing
special about the precise values that were observed—what is important are
the regions in which they occurred.
Instead of placing all of the probability 1/n assigned to xi exactly on the
value xi, we now imagine distributing it in a neighborhood of xi according
to some probability density function. This construction will also result in
more probability accumulating in regions that produced more values, but it
will produce a pdf instead of a pmf. Here is a general description of this
approach, usually called kernel density estimation:
1. Choose a probability density function K, the kernel. Typically, K is a
symmetric pdf centered at the origin. Common choices of K include
the Normal(0, 1) and Uniform[−0.5, 0.5] pdfs.
2. At each xi, center a rescaled copy of the kernel. This pdf,

    (1/h) K((x − xi)/h),    (7.2)
will control the distribution of the 1/n probability assigned to xi. The
parameter h is variously called the smoothing parameter, the window
width, or the bandwidth.
3. The difficult decision in constructing a kernel density estimate is the
choice of h. The technical details of this issue are beyond the scope of
this book, but the underlying principles are quite simple:
• Small values of h mean that the standard deviation of (7.2) will be
small, so that the 1/n probability assigned to xi will be distributed
close to xi. This is appropriate when n is large and the xi are
tightly packed.
• Large values of h mean that the standard deviation of (7.2) will
be large, so that the 1/n probability assigned to xi will be widely
distributed in the general vicinity of xi. This is appropriate when
n is small and the xi are sparse.
4. After choosing K and h, the kernel density estimate of f is

    f̂n(x) = Σ_{i=1}^n (1/n)(1/h) K((x − xi)/h) = (1/(nh)) Σ_{i=1}^n K((x − xi)/h).
Such estimates are easily computed and graphed using the R functions
density and plot.
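To make the construction concrete, here is a sketch that computes a kernel density estimate directly from the formula in step 4, using a Normal(0, 1) kernel, and overlays R's built-in density function for comparison. The simulated sample, the bandwidth h = 0.5, and the evaluation points are arbitrary illustrative choices.

> kde <- function(xs,data,h) {
+   sapply(xs,function(x0) mean(dnorm((x0-data)/h))/h)   # (1/(nh)) sum K((x-xi)/h)
+ }
> data <- rnorm(100)                       # a sample of size n = 100
> xs <- seq(-4,4,length.out=200)           # points at which to evaluate the estimate
> plot(xs,kde(xs,data,h=0.5),type="l",xlab="x",ylab="density")
> d <- density(data,bw=0.5,kernel="gaussian")   # built-in estimate, for comparison
> lines(d$x,d$y,lty=2)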
Example 7.7 Consider the probability density function f displayed in
Figure 7.6. The most striking feature of f is that it is bimodal. Can we
detect this feature using a sample drawn from f?
Figure 7.6: A bimodal probability density function.
We drew a sample of size n = 100 from f. A box plot and a normal
probability plot of this sample are displayed in Figure 7.7. It is difficult to
discern anything unusual from the box plot. The normal probability plot
contains all of the information in the sample, but it is encoded in such a way
that the feature of interest is not easily extracted. In contrast, the kernel
density estimate displayed in Figure 7.8 clearly reveals that the sample was
drawn from a bimodal population. After storing the sample in the vector x,
this estimate was computed and plotted using the following R command:
> plot(density(x))
Figure 7.7: A box plot and a normal probability plot for Example 7.7.
7.5 Case Study: Are Forearm Lengths Normally Distributed?
Many of the inferential procedures that statisticians have developed assume
that the data to be analyzed were drawn from one (or more) normal distri-
butions. These procedures are often the most elegant and powerful methods
available to the data analyst, but they can easily mislead when applied
to nonnormal data. Conveniently, the normality assumption is often quite
plausible; just as often, however, it is not. It is therefore essential that the
data analyst be able to make informed decisions about whether or not to
assume normality. One of our primary uses for the methods introduced in
the present chapter will be to assist us in making such decisions. Be warned:
because we cannot know the true distributions of (most) random variables
encountered in scientific experimentation, we cannot know whether or not
they are in fact normally distributed. Such ignorance should humble and can
intimidate, but it should not paralyze. To analyze data, one must proceed
somehow, and it is best to do so with as much information as possible.
It is often (but not universally) the case that measurements of linear
Figure 7.8: A kernel density estimate for Example 7.7.
dimension (height, length, width, depth, breadth, etc.) are normally dis-
tributed. To further illustrate the methods introduced in the present chap-
ter, we apply them to a famous data set, measurements of forearm length
made on n = 140 adult males, inquiring whether or not it appears plausi-
ble to assume that forearm lengths are normally distributed. These data,
displayed in Table 7.1, were studied by K. Pearson and A. Lee1 and subse-
quently reproduced as Data Set 139 in A Handbook of Small Data Sets.
Examining the numbers in Table 7.1, we note that the measurements
were made with a precision of 0.1 inches and that many values occur several
times. For example, 9 of the 140 men had forearms with a measured length
of 18.5 inches. Because the probability that any two continuous random
variables will be equal is zero, the existence of equal values in the sample
should cause one to consider whether or not these measurements should
be modelled as observed values of continuous random variables. In this
case, it makes sense to proceed. Actual (as opposed to measured) forearm
1 K. Pearson and A. Lee (1903). On the laws of inheritance in man. I. Inheritance of physical characters. Biometrika, 2:357–462.
17.3 18.4 20.9 16.8 18.7 20.5 17.9 20.4 18.3 20.5
19.0 17.5 18.1 17.1 18.8 20.0 19.1 19.1 17.9 18.3
18.2 18.9 19.4 18.9 19.4 20.8 17.3 18.5 18.3 19.4
19.0 19.0 20.5 19.7 18.5 17.7 19.4 18.3 19.6 21.4
19.0 20.5 20.4 19.7 18.6 19.9 18.3 19.8 19.6 19.0
20.4 17.3 16.1 19.2 19.6 18.8 19.3 19.1 21.0 18.6
18.3 18.3 18.7 20.6 18.5 16.4 17.2 17.5 18.0 19.5
19.9 18.4 18.8 20.1 20.0 18.5 17.5 18.5 17.9 17.4
18.7 18.6 17.3 18.8 17.8 19.0 19.6 19.3 18.1 18.5
20.9 19.8 18.1 17.1 19.8 20.6 17.6 19.1 19.5 18.4
17.7 20.2 19.9 18.6 16.6 19.2 20.0 17.4 17.1 18.3
19.1 18.5 19.6 18.0 19.4 17.1 19.9 16.3 18.9 20.7
19.7 18.5 18.4 18.7 19.3 16.3 16.9 18.2 18.5 19.3
18.1 18.0 19.5 20.3 20.1 17.2 19.5 18.8 19.2 17.7
Table 7.1: Forearm lengths (in inches) of 140 adult males, studied by K.
Pearson and A. Lee (1903).
length is surely continuous, and there are 47 distinct values in Table 7.1. To
preserve important numerical relations, e.g., 19.5 − 18.5 = 2(18.5 − 18), we
can accomplish far more with continuous random variables than we might
with discrete random variables. We proceed to investigate the plausibility of
assuming that the continuous random variables are normal random variables.
Figure 7.9 displays a box plot, a normal probability plot, and a kernel
density estimate, constructed from the 140 forearm measurements in Ta-
ble 7.1 by the following R commands:
> par(mfrow=c(1,3))
> boxplot(forearms,main="Box Plot")
> qqnorm(forearms)
> plot(density(forearms),type="l",main="PDF Estimate")
Examining the box plot, we first note that the sample median lies roughly
halfway between the first and third sample quartiles, and that the whiskers
are of roughly equal length. This is precisely what we would expect to
observe if the data were drawn from a symmetric distribution. We also note
that these data contain no outliers. These features are consistent with the
Figure 7.9: Three displays of 140 forearm measurements.
possibility that these data were drawn from a normal distribution, but they
do not preclude other symmetric distributions.
Both normal probability plots and kernel density estimates reveal far
more about the data than do box plots. More information is generally de-
sirable, but seeing too much creates the danger that patterns created by
chance variation will be overinterpreted by the too-eager data analyst. Key
to the proper use of normal probability plots and kernel density estimates is
mature judgment about which features reflect on the population and which
features are due to chance variation.
The normal probability plot of the forearm data is generally straight,
but should we worry about the kink at the lower end? The kernel density
estimate of the forearm data is unimodal and nearly symmetric, but should
we be concerned by its apparent lack of inflection points at ±1 standard
deviations? The best way to investigate such concerns is to generate pseu-
dorandom normal samples, each of the same size as the observed sample
(here n = 140), and consider what—if anything—distinguishes the observed
sample from the normal samples. I generated three pseudorandom normal
samples using the rnorm function. The four normal probability plots are
displayed in Figure 7.10 and the four kernel density estimates are displayed
in Figure 7.11. I am unable to advance a credible argument that the forearm
sample looks any less normal than the three normal samples.
In addition to the admittedly subjective comparison of normal probabil-
Figure 7.10: Normal probability plots of the forearm data and three pseu-
dorandom samples from Normal(0, 1).
ity plots and kernel density estimates, it may be helpful to compare certain
quantitative attributes of the sample to known quantitative attributes of
normal distributions. In Section 6.3, for example, we noted that the ra-
tio of population interquartile range to population standard deviation is
1.34898 ≈ 1.35 for a normal distribution. The analogous ratio of sample
interquartile range to sample standard deviation can be quite helpful in de-
ciding whether or not the sample was drawn from a normal distribution.
It should be noted, however, that not all distributions with this ratio are
normal; thus, although a ratio substantially different from 1.35 may suggest
that the sample was not drawn from a normal distribution, a ratio close to
1.35 does not prove that it was.
Figure 7.11: Kernel density estimates constructed from the forearm data
and three pseudorandom samples from Normal(0, 1).
To facilitate the calculation of iqr:sd ratios, we define an R function that
performs the necessary operations. Here are the R commands that define our
new function, iqrsd:
> iqrsd <- function(x) {
+ x.mean <- mean(x)
+ x.var <- mean(x^2)-x.mean^2                    # plug-in estimate of the variance
+ q <- as.vector(quantile(x,probs=c(.25,.75)))   # first and third sample quartiles
+ x.iqr <- q[2]-q[1]                             # sample interquartile range
+ return(x.iqr/sqrt(x.var))                      # the iqr:sd ratio
+ }
I generated 10 pseudorandom normal samples, each of size n = 140, using
the rnorm function, then applied the new iqrsd function to each sample.
The resulting ratios ranged from a minimum of 1.178 to a maximum of 1.545.
The ratio for the forearm data is 1.344, so one can hardly object to assuming
normality on the basis of this quantity.
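Readers who wish to repeat this comparison can do so with a few commands. The following is a minimal sketch of the simulation just described (the object name ratios is ours, and the numbers will vary from run to run because the samples are pseudorandom):
> ratios <- replicate(10, iqrsd(rnorm(140)))
> range(ratios)     # spread of iqr:sd ratios for 10 pseudorandom normal samples
> iqrsd(forearms)   # the corresponding ratio for the forearm data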
Overall, the forearm data look about as normal as one could ever hope
to encounter with actual experimental data. If one is hoping to use an infer-
ential procedure that assumes normality, then this is the ideal case. Unfor-
tunately, one rarely encounters situations in which one can so comfortably
assume normality.
7.6 Exercises
1. The following independent samples were drawn from four populations:
Sample 1 Sample 2 Sample 3 Sample 4
5.098 4.627 3.021 7.390
2.739 5.061 6.173 5.666
2.146 2.787 7.602 6.616
5.006 4.181 6.250 7.868
4.016 3.617 1.875 2.428
9.026 3.605 6.996 6.740
4.965 6.036 4.850 7.605
5.016 4.745 6.661 10.868
6.195 2.340 6.360 1.739
4.523 6.934 7.052 1.996
(a) Use the boxplot function to create side-by-side box plots of these
samples. Does it appear that these samples were all drawn from
the same population? Why or why not?
(b) Use the rnorm function to draw four independent samples, each
of size n = 10, from one normal distribution. Examine box plots
of these samples. Is it possible that Samples 1–4 were all drawn
from the same normal distribution?
2. The following sample, ~x, was collected and sorted:
0.246 0.327 0.423 0.425 0.434
0.530 0.583 0.613 0.641 1.054
1.098 1.158 1.163 1.439 1.464
2.063 2.105 2.106 4.363 7.517
(a) Graph the empirical cdf of ~x.
(b) Calculate the plug-in estimates of the mean, the variance, the
median, and the interquartile range.
(c) Take the square root of the plug-in estimate of the variance and
compare it to the plug-in estimate of the interquartile range. Do
you think that ~x was drawn from a normal distribution? Why or
why not?
(d) Use the qqnorm function to create a normal probability plot. Do
you think that ~x was drawn from a normal distribution? Why or
why not?
(e) Now consider the transformed sample ~y produced by replacing
each xi with its natural logarithm. If ~x is stored in the vector x,
then ~y can be computed by the following R command:
> y <- log(x)
Do you think that ~y was drawn from a normal distribution? Why
or why not?
3. In January 2002, twelve students enrolled in Math 351 (Applied Statis-
tics) at the College of William & Mary reported the following results
for the experiment described in Exercise 1.5.2. (Two students reported
more than one measurement, but only one measurement per student
is reported here.)
143 3/16   144 4/16   140 14/16   144 7/16   143 12/16   153 13/16
119 10/16  143 1/16   143 14/16   144 3/16   144 7/16    148 3/16
(a) Do these measurements appear to be a sample from a normal
distribution? Why or why not?
(b) Suggest possible explanations for the surprising amount of varia-
tion in these measurements.
(c) Use these measurements to estimate the true length of the table.
Justify your estimation procedure.
4. Forty-one students taking Math 351 (Applied Statistics) at the College
of William & Mary were administered a test. The following test scores
were observed and sorted:
90 90 89 88 85 85 84 82 82 82
81 81 81 80 79 79 78 76 75 74
72 71 70 66 65 63 62 62 61 59
58 58 57 56 56 53 48 44 40 35 33
(a) Do these numbers appear to be a random sample from a normal
distribution?
(b) Does this list of numbers have any interesting anomalies?
5. Do the numbers in Table 1.1 (Michelson’s measurements of the speed
of light) appear to be a random sample from a normal distribution?
6. Consider a box that contains 10 tickets, labelled
{1, 1, 1, 1, 2, 5, 5, 10, 10, 10}.
From this box, I propose to draw (with replacement) n = 40 tickets.
I am interested in the sum, Y , of the 40 ticket values that I draw.
Write an R function named box.model that simulates this experiment,
i.e., evaluating box.model is like observing a value, y, of the random
variable Y .
7. Experiment with using R to generate simulated random samples of
various sizes. Use the summary function to compute the quartiles of
these samples. Try to discern the convention that this function uses to
define sample quartiles.
Chapter 8
Lots of Data
Throughout Chapter 7 we emphasized that, because of sampling variation,
the plug-in estimate of a population quantity rarely equals the actual value
of the population quantity. The present chapter explores this phenomenon
in greater depth.
Suppose that X1, . . . , Xn ∼ P and that an experimental scientist wants
to estimate the population mean, µ = EXi. To do so, she observes values
x1, . . . , xn of X1, . . . , Xn, then computes
$$\bar{x}_n = \frac{1}{n}\sum_{i=1}^{n} x_i,$$
the plug-in estimate of µ. Mathematically, this is equivalent to first defining
a new random variable,
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i,$$
then observing the value x̄n of X̄n. The random variable X̄n is the average
of the random variables X1, . . . , Xn. Both the random variable X̄n and the
observed value x̄n are called the sample mean. This is potentially confusing,
but the convention of using uppercase letters for random variables and low-
ercase letters for observed values allows us to be clear about which concept
we have in mind when we use the phrase “sample mean.” In this chapter,
we study the behavior of X̄n.
We begin with an example. Suppose that, unbeknownst to the scientist,
P is the asymmetric probability distribution χ2(3), with pdf depicted in
Figure 5.8. Because of Corollary 5.1, it follows that µ = 3. Hence, we can
assess the quality of the scientist’s estimates of µ by comparing the estimates
to the correct value, µ = 3. We will use simulation to explore what might
occur in this situation.
First, consider drawing a small sample of n = 5 observations. Here is
what happened when I performed that experiment three times:
> x <- rchisq(5,df=3)
> mean(x)
[1] 3.650077
> x <- rchisq(5,df=3)
> mean(x)
[1] 2.963841
> x <- rchisq(5,df=3)
> mean(x)
[1] 2.063129
Due to sampling variation, the first estimate is too high, the second estimate
is just about right, and the third estimate is too low. These results suggest
that small samples may be unreliable. Of course, if we admit the possibility
that small samples are unreliable, then it might be wise to perform the
simulation more than three times! So, I performed the same simulation 1000
times, each time observing values of X1, . . . , X5 ∼ χ2(3) and then computing
x̄5, the observed value of X̄5. To display the results, I applied the method
described in Section 7.4 to the 1000 observed values of X̄5. This produced
a kernel density estimate, displayed in Figure 8.1, of the pdf of X̄5. Notice
the considerable variation in the observed values of X̄5.
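A minimal R sketch of this simulation follows (the object name xbar5 is illustrative; each evaluation of the inner expression draws a sample of size 5 from χ2(3) and records its mean):
> xbar5 <- replicate(1000, mean(rchisq(5, df=3)))
> plot(density(xbar5), type="l", main="Estimated pdf of the sample mean, n = 5")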
Next, consider drawing a moderate sample of n = 20 observations. I
did this 1000 times, each time observing values of X1, . . . , X20 ∼ χ2(3) and
then computing x̄20, the observed value of X̄20. From these 1000 observed
values of X̄20, I constructed a kernel density estimate of the pdf of X̄20. This
estimated pdf is displayed in Figure 8.2. Notice that the observed values of
X̄20 tend to be more tightly clustered around µ = 3 than do the observed
values of X̄5, suggesting that moderate samples are more reliable than small
samples.
Finally, consider drawing a large sample of n = 80 observations. I did
this 1000 times, each time observing values of X1, . . . , X80 ∼ χ2(3) and then
computing x̄80, the observed value of X̄80. From these 1000 observed values
of X̄80, I constructed a kernel density estimate of the pdf of X̄80. This
Figure 8.1: Kernel density estimate constructed from 1000 observed values
of X̄n for n = 5. X1, . . . , Xn ∼ χ2(3) and µ = EXi = 3.
estimated pdf is displayed in Figure 8.3. Notice that the observed values of
X̄80 tend to be more tightly clustered around µ = 3 than do the observed
values of X̄20, suggesting that large samples are more reliable than moderate
samples.
The sections in this chapter generalize the preceding observations. We
consider any experiment that can be performed, independently and identi-
cally, as many times as we please. We describe this situation by supposing
the existence of a sequence of independent and identically distributed ran-
dom variables, X1, X2, . . ., and we assume that these random variables have
a finite mean µ = EXi and a finite variance σ2 = Var Xi. Under these
assumptions, we study the behavior of the sample mean, X̄n, as n increases.
8.1 Averaging Decreases Variation
By definition, EXi = µ. Thus, the population mean is the average value
assumed by the random variable Xi. This statement is also true of the
Figure 8.2: Kernel density estimate constructed from 1000 observed values
of X̄n for n = 20. X1, . . . , Xn ∼ χ2(3) and µ = EXi = 3.
sample mean:
$$E\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} EX_i = \frac{1}{n}\sum_{i=1}^{n} \mu = \mu;$$
however, there is a crucial distinction between Xi and X̄n.
The tendency of a random variable to assume a value that is close to
its expected value is quantified by computing its variance. By definition,
Var Xi = σ2, but
$$\mathrm{Var}\,\bar{X}_n = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} \mathrm{Var}\,X_i = \frac{1}{n^2}\sum_{i=1}^{n} \sigma^2 = \frac{\sigma^2}{n}.$$
Hence, the sample mean has less variability than any of the individual ran-
dom variables that are being averaged. Averaging decreases variation. Fur-
thermore, as n → ∞, Var X̄n → 0. Thus, by repeating our experiment
enough times, we can make the variation in the sample mean as small as we
please.
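This calculation is easy to check numerically. For the χ2(3) population used above, σ2 = 2 × 3 = 6, so Var X̄20 should be approximately 6/20 = 0.3. A sketch (the observed value will vary slightly from run to run):
> xbar20 <- replicate(10000, mean(rchisq(20, df=3)))
> var(xbar20)   # should be near 6/20 = 0.3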
Figure 8.3: Kernel density estimate constructed from 1000 observed values
of X̄n for n = 80. X1, . . . , Xn ∼ χ2(3) and µ = EXi = 3.
The preceding remarks suggest that, if the population mean is unknown,
then we can draw inferences about it by observing the behavior of the sample
mean. This fundamental insight is the basis for a considerable portion of
this book. The remainder of this chapter refines the relation between the
population mean and the behavior of the sample mean.
8.2 The Weak Law of Large Numbers
Recall Definition 2.12 from Section 2.4: a sequence of real numbers {yn}
converges to a limit c ∈ ℜ if and only if, for every ε > 0, there exists
a natural number N such that yn ∈ (c − ε, c + ε) for each n ≥ N. Our
first task is to generalize from convergence of a sequence of real numbers to
convergence of a sequence of random variables.
If we replace {yn}, a sequence of real numbers, with {Yn}, a sequence of
random variables, then the event that Yn ∈ (c − ε, c + ε) is uncertain. Rather
than demand that this event must occur for n sufficiently large, we ask only
188 CHAPTER 8. LOTS OF DATA
that the probability of this event tend to unity as n tends to infinity. This
results in
Definition 8.1 A sequence of random variables {Yn} converges in probabil-
ity to a constant c, written $Y_n \xrightarrow{P} c$, if and only if, for every ε > 0,
$$\lim_{n \to \infty} P\left(Y_n \in (c - \varepsilon,\, c + \varepsilon)\right) = 1.$$
Figure 8.4: An example of convergence in probability. (The figure shows the pdfs of Y5, Y25, and Y125, which place ever more probability on the interval (c − ε, c + ε).)
Convergence in probability is depicted in Figure 8.4 using the pdfs fn of
continuous random variables Yn. (One could also use the pmfs of discrete
random variables.) We see that
$$p_n = P\left(Y_n \in (c - \varepsilon,\, c + \varepsilon)\right) = \int_{c-\varepsilon}^{c+\varepsilon} f_n(x)\,dx$$
is tending to unity as n increases. Notice, however, that each pn < 1.
The concept of convergence in probability allows us to state an important
result.
Theorem 8.1 (Weak Law of Large Numbers) Let X1, X2, . . . be any se-
quence of independent and identically distributed random variables having
finite mean µ and finite variance σ2. Then
$$\bar{X}_n \xrightarrow{P} \mu.$$
This result is of considerable consequence. It states that, as we average more
and more Xi, the average values that we observe tend to be distributed closer
and closer to the theoretical average of the Xi. This property of the sample
mean strengthens our contention that the behavior of X̄n provides more and
more information about the value of µ as n increases.
The Weak Law of Large Numbers (WLLN) has an important special
case.
Corollary 8.1 (Law of Averages) Let A be any event and consider a se-
quence of independent and identical experiments in which we observe whether
or not A occurs. Let p = P(A) and define independent and identically dis-
tributed random variables by
$$X_i = \begin{cases} 1 & A \text{ occurs} \\ 0 & A^c \text{ occurs} \end{cases}.$$
Then Xi ∼ Bernoulli(p), X̄n is the observed frequency with which A occurs
in n trials, and µ = EXi = p = P(A) is the theoretical probability of A.
The WLLN states that the former tends to the latter as the number of trials
increases.
The Law of Averages formalizes our common experience that “things
tend to average out in the long run.” For example, we might be surprised
if we tossed a fair coin n = 10 times and observed X̄10 = 0.9; however,
if we knew that the coin was indeed fair (p = 0.5), then we would remain
confident that, as n increased, X̄n would eventually tend to 0.5.
Notice that the conclusion of the Law of Averages is the frequentist
interpretation of probability. Instead of defining probability via the notion
of long-run frequency, we defined probability via the Kolmogorov axioms.
Although our approach does not require us to interpret probabilities in any
one way, the Law of Averages states that probability necessarily behaves in
the manner specified by frequentists.
Finally, recall from Section 7.1 that the empirical probability of an event
A is the observed frequency with which A occurs in the sample:
$$\hat{P}_n(A) = \#\{x_i \in A\} \cdot \frac{1}{n}.$$
By the Law of Averages, this quantity tends to the true probability of A as
the size of the sample increases. Thus, the theory of probability provides a
mathematical justification for approximating P with P̂n when P is unknown.
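The Law of Averages is also easy to watch in action. In the following sketch we track the observed frequency of the event A = {Heads} in a growing number of tosses of a fair coin (p = 0.5); the running frequency wanders at first but settles near 0.5 (the object names are illustrative):
> x <- rbinom(10000, size=1, prob=0.5)   # 10000 Bernoulli trials
> phat <- cumsum(x) / (1:10000)          # observed frequency of A after n trials
> plot(phat, type="l", ylim=c(0,1))
> abline(h=0.5)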
8.3 The Central Limit Theorem
The Weak Law of Large Numbers states a precise sense in which the dis-
tribution of values of the sample mean collapses to the population mean as
the size of the sample increases. As interesting and useful as this fact is, it
leaves several obvious questions unanswered:
1. How rapidly does the sample mean tend toward the population mean?
2. How does the shape of the sample mean’s distribution change as the
sample mean tends toward the population mean?
To answer these questions, we convert the random variables in which we are
interested to standard units.
We have supposed the existence of a sequence of independent and iden-
tically distributed random variables, X1, X2, . . ., with finite mean µ = EXi
and finite variance σ2 = Var Xi. We are interested in the sum and/or the
average of X1, . . . , Xn. It will be helpful to identify several crucial pieces of
information for each random variable of interest:
random variable             expected value    standard deviation      standard units
$X_i$                       $\mu$             $\sigma$                $(X_i - \mu)/\sigma$
$\sum_{i=1}^{n} X_i$        $n\mu$            $\sqrt{n}\,\sigma$      $\left(\sum_{i=1}^{n} X_i - n\mu\right) \div (\sqrt{n}\,\sigma)$
$\bar{X}_n$                 $\mu$             $\sigma/\sqrt{n}$       $\left(\bar{X}_n - \mu\right) \div (\sigma/\sqrt{n})$
First we consider Xi. Notice that converting to standard units does
not change the shape of the distribution of Xi. For example, if Xi ∼
Bernoulli(0.5), then the distribution of Xi assigns equal probability to each
of two values, x = 0 and x = 1. If we convert to standard units, then the
distribution of
$$Z_1 = \frac{X_i - \mu}{\sigma} = \frac{X_i - 0.5}{0.5}$$
also assigns equal probability to each of two values, z1 = −1 and z1 = 1. In
particular, notice that converting Xi to standard units does not automati-
cally result in a normally distributed random variable.
Next we consider the sum and the average of X1, . . . , Xn. Notice that,
after converting to standard units, these quantities are identical:
$$Z_n = \frac{\sum_{i=1}^{n} X_i - n\mu}{\sqrt{n}\,\sigma} = \frac{(1/n)\left(\sum_{i=1}^{n} X_i - n\mu\right)}{(1/n)\,\sqrt{n}\,\sigma} = \frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}}.$$
It is this new random variable on which we shall focus our attention.
We begin by observing that
$$\mathrm{Var}\left[\sqrt{n}\left(\bar{X}_n - \mu\right)\right] = \mathrm{Var}(\sigma Z_n) = \sigma^2\,\mathrm{Var}(Z_n) = \sigma^2$$
is constant. The WLLN states that
$$\left(\bar{X}_n - \mu\right) \xrightarrow{P} 0,$$
so $\sqrt{n}$ is a “magnification factor” that maintains random variables with a
constant positive variance. We conclude that $1/\sqrt{n}$ measures how rapidly
the sample mean tends toward the population mean.
Now we turn to the more refined question of how the distribution of
the sample mean changes as the sample mean tends toward the population
mean. By converting to standard units, we are able to distinguish changes in
the shape of the distribution from changes in its mean and variance. Despite
our inability to make general statements about the behavior of Z1, it turns
out that we can say quite a bit about the behavior of Zn as n becomes large.
The following theorem is one of the most remarkable and useful results in
all of mathematics. It is fundamental to the study of both probability and
statistics.
Theorem 8.2 (Central Limit Theorem) Let X1, X2, . . . be any sequence of
independent and identically distributed random variables having finite mean
µ and finite variance σ2. Let
$$Z_n = \frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}},$$
let Fn denote the cdf of Zn, and let Φ denote the cdf of the standard normal
distribution. Then, for any fixed value z ∈ ℜ,
P (Zn ≤ z) = Fn(z) → Φ(z)
as n → ∞.
The Central Limit Theorem (CLT) states that the behavior of the average
(or, equivalently, the sum) of a large number of independent and identically
distributed random variables will resemble the behavior of a standard normal
random variable. This is true regardless of the distribution of the random
variables that are being averaged. Thus, the CLT allows us to approximate
a variety of probabilities that otherwise would be intractable. Of course, we
require some sense of how many random variables must be averaged in order
for the normal approximation to be reasonably accurate. This does depend
on the distribution of the random variables, but a popular rule of thumb is
that the normal approximation can be used if n ≥ 30. Often, the normal
approximation works quite well with even smaller n.
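A small simulation conveys the sense in which the CLT operates. In the sketch below we draw samples of size n = 30 from the markedly asymmetric χ2(3) distribution (µ = 3, σ2 = 6), convert each sample mean to standard units, and inspect the result; the normal probability plot of the standardized means is nearly straight even though the χ2(3) pdf is not at all normal. (The sample size and number of replications are illustrative.)
> z <- replicate(5000, (mean(rchisq(30, df=3)) - 3) / sqrt(6/30))
> qqnorm(z)                  # approximately linear
> mean(z > -1.2 & z < 1.2)   # compare with pnorm(1.2)-pnorm(-1.2), about 0.77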
Example 8.1 A chemistry professor is attempting to determine the
conformation of a certain molecule. To measure the distance between a pair
of nearby hydrogen atoms, she uses NMR spectroscopy. She knows that this
measurement procedure has an expected value equal to the actual distance
and a standard deviation of 0.5 angstroms. If she replicates the experiment
36 times, then what is the probability that the average measured value will
fall within 0.1 angstroms of the true value?
Let Xi denote the measurement obtained from replication i, for i =
1, . . . , 36. We are told that µ = EXi is the actual distance between the
atoms and that σ2 = Var Xi = 0.52. Let Z ∼ Normal(0, 1). Then, applying
the CLT,
$$\begin{aligned}
P\left(\mu - 0.1 < \bar{X}_{36} < \mu + 0.1\right)
&= P\left(\mu - 0.1 - \mu < \bar{X}_{36} - \mu < \mu + 0.1 - \mu\right) \\
&= P\left(\frac{-0.1}{0.5/6} < \frac{\bar{X}_{36} - \mu}{0.5/6} < \frac{0.1}{0.5/6}\right) \\
&= P\left(-1.2 < Z_n < 1.2\right) \\
&\approx P\left(-1.2 < Z < 1.2\right) \\
&= \Phi(1.2) - \Phi(-1.2).
\end{aligned}$$
Now we use R:
> pnorm(1.2)-pnorm(-1.2)
[1] 0.7698607
We conclude that there is a chance of approximately 77% that the average
of the measured values will fall within 0.1 angstroms of the true value.
Notice that it is not possible to compute the exact probability. To do so
would require knowledge of the distribution of the Xi.
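To see how well the approximation might perform, one can pretend, purely for illustration, that the measurements follow some particular distribution with standard deviation 0.5 and simulate; a uniform distribution is used below, but any choice with the stated mean and standard deviation could be substituted:
> mu <- 10                        # the true distance; its value does not matter
> half <- 0.5 * sqrt(3)           # half-width giving a uniform standard deviation of 0.5
> xbar36 <- replicate(10000, mean(runif(36, mu - half, mu + half)))
> mean(abs(xbar36 - mu) < 0.1)    # typically close to the CLT value, about 0.77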
It is sometimes useful to rewrite the normal approximations derived from
the CLT as statements of the approximate distributions of the sum and the
average. For the sum we obtain
$$\sum_{i=1}^{n} X_i \;\dot{\sim}\; \text{Normal}\left(n\mu,\, n\sigma^2\right) \qquad (8.1)$$
and for the average we obtain
$$\bar{X}_n \;\dot{\sim}\; \text{Normal}\left(\mu,\, \frac{\sigma^2}{n}\right). \qquad (8.2)$$
These approximations are especially useful when combined with Theorem
5.2.
Example 8.2 The chemistry professor in Example 8.1 asks her grad-
uate student to replicate the experiment that she performed an additional 64
times. What is the probability that the averages of their respective measured
values will fall within 0.1 angstroms of each other?
The professor’s measurements are
$$X_1, \ldots, X_{36} \sim \left(\mu,\, 0.5^2\right).$$
Applying (8.2), we obtain
$$\bar{X}_{36} \;\dot{\sim}\; \text{Normal}\left(\mu,\, \frac{0.25}{36}\right).$$
Similarly, the student’s measurements are
$$Y_1, \ldots, Y_{64} \sim \left(\mu,\, 0.5^2\right).$$
Applying (8.2), we obtain
$$\bar{Y}_{64} \;\dot{\sim}\; \text{Normal}\left(\mu,\, \frac{0.25}{64}\right) \quad\text{or}\quad -\bar{Y}_{64} \;\dot{\sim}\; \text{Normal}\left(-\mu,\, \frac{0.25}{64}\right).$$
Now we apply Theorem 5.2 to conclude that
$$\bar{X}_{36} - \bar{Y}_{64} = \bar{X}_{36} + \left(-\bar{Y}_{64}\right) \;\dot{\sim}\; \text{Normal}\left(0,\, \frac{0.25}{36} + \frac{0.25}{64} = \frac{5^2}{48^2}\right).$$
Converting to standard units, it follows that
$$\begin{aligned}
P\left(-0.1 < \bar{X}_{36} - \bar{Y}_{64} < 0.1\right)
&= P\left(\frac{-0.1}{5/48} < \frac{\bar{X}_{36} - \bar{Y}_{64}}{5/48} < \frac{0.1}{5/48}\right) \\
&\approx P\left(-0.96 < Z < 0.96\right) \\
&= \Phi(0.96) - \Phi(-0.96).
\end{aligned}$$
Now we use R:
> pnorm(.96)-pnorm(-.96)
[1] 0.6629448
We conclude that there is a chance of approximately 66% that the two av-
erages will fall within 0.1 angstroms of each other.
The CLT has a long history. For the special case of Xi ∼ Bernoulli(p),
a version of the CLT was obtained by De Moivre in the 1730s. The first
attempt at a more general CLT was made by Laplace in 1810, but definitive
results were not obtained until the second quarter of the 20th century. The-
orem 8.2 is actually a very special case of far more general results established
during that period. However, with one exception to which we now turn, it
is sufficiently general for our purposes.
The astute reader may have noted that, in Examples 8.1 and 8.2, we
assumed that the population mean µ was unknown but that the population
variance σ2 was known. Is this plausible? In Examples 8.1 and 8.2, it might
be that the nature of the instrumentation is sufficiently well understood that
the population variance may be considered known. In general, however, it
seems somewhat implausible that we would know the population variance
and not know the population mean.
The normal approximations employed in Examples 8.1 and 8.2 require
knowledge of the population variance. If the variance is not known, then it
must be estimated from the measured values. Chapters 7 and 9 will introduce
procedures for doing so. In anticipation of those procedures, we state the
following generalization of Theorem 8.2:
Theorem 8.3 Let X1, X2, . . . be any sequence of independent and identi-
cally distributed random variables having finite mean µ and finite variance
σ2. Suppose that D1, D2, . . . is a sequence of random variables with the prop-
erty that $D_n^2 \xrightarrow{P} \sigma^2$ and let
$$T_n = \frac{\bar{X}_n - \mu}{D_n/\sqrt{n}}.$$
Let Fn denote the cdf of Tn, and let Φ denote the cdf of the standard normal
distribution. Then, for any fixed value t ∈ ℜ,
P (Tn ≤ t) = Fn(t) → Φ(t)
as n → ∞.
We conclude this section with a warning. Statisticians usually invoke
the CLT in order to approximate the distribution of a sum or an average
of random variables X1, . . . , Xn that are observed in the course of an ex-
periment. The Xi need not be normally distributed themselves—indeed,
the grandeur of the CLT is that it does not assume normality of the Xi.
Nevertheless, we will discover that many important statistical procedures
do assume that the Xi are normally distributed. Researchers who hope to
use these procedures naturally want to believe that their Xi are normally
distributed. Often, they look to the CLT for reassurance. Many think
that, if only they replicate their experiment enough times, then somehow
their observations will be drawn from a normal distribution. This is ab-
surd! Suppose that a fair coin is tossed once. Let X1 denote the number
of Heads, so that X1 ∼ Bernoulli(0.5). The Bernoulli distribution is not at
all like a normal distribution. If we toss the coin one million times, then
each Xi ∼ Bernoulli(0.5). The Bernoulli distribution does not miraculously
become a normal distribution. Remember,
The Central Limit Theorem does not say that a large sample was
necessarily drawn from a normal distribution!
On some occasions, it is possible to invoke the CLT to anticipate that the
random variable to be observed will behave like a normal random variable.
This involves recognizing that the observed random variable is the sum or the
average of lots of independent and identically distributed random variables
that are not observed.
Example 8.3 To study the effect of an insect growth regulator (IGR)
on termite appetite, an entomologist plans an experiment. Each replication
of the experiment will involve placing 100 ravenous termites in a container
with a dried block of wood. The block of wood will be weighed before the
experiment begins and after a fixed number of days. The random variable
of interest is the decrease in weight, the amount of wood consumed by the
termites. Can we anticipate the distribution of this random variable?
The total amount of wood consumed is the sum of the amounts con-
sumed by each termite. Assuming that the termites behave independently
and identically, the CLT suggests that this sum should be approximately
normally distributed.
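As a crude illustration of this reasoning, suppose, purely for the sake of the sketch, that each termite's consumption is exponentially distributed with mean 0.02 grams. The individual amounts are then strongly skewed, yet the total consumed by 100 termites looks roughly normal:
> totals <- replicate(1000, sum(rexp(100, rate = 50)))   # mean consumption 1/50 = 0.02 per termite
> qqnorm(totals)   # nearly linear, as the CLT suggests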
When reasoning as in Example 8.3, one should construe the CLT as no
more than suggestive. Most natural processes are far too complicated to
be modelled so simplistically with any guarantee of accuracy. One should
always examine the observed values to see if they are consistent with one’s
theorizing.
8.4 Exercises
1. Suppose that I toss a fair coin 100 times and observe 60 Heads. Now
I decide to toss the same coin another 100 times. Does the Law of
Averages imply that I should expect to observe another 40 Heads?
2. In Example 7.7, we observed a sample of size n = 100. A normal prob-
ability plot and kernel density estimate constructed from this sample
suggested that the observations had been drawn from a nonnormal
distribution. True or False: It follows from the Central Limit Theorem
that a kernel density estimate constructed from a much larger sample
would more closely resemble a normal distribution.
3. Suppose that an astragalus has the following probabilities of producing
the four possible uppermost faces: P(1) = P(6) = 0.1, P(3) = P(4) =
0.4. This astragalus is to be thrown 100 times. Let Xi denote the
value of the uppermost face that results from throw i.
(a) Compute the expected value and the variance of Xi.
(b) Compute the probability that the average value of the 100 throws
will exceed 3.6.
4. Chris owns a laser pointer that is powered by two AAAA batteries. A
pair of batteries will power the pointer for an average of five hours
use, with a standard deviation of 30 minutes. Chris decides to take
advantage of a sale and buys 20 2-packs of AAAA batteries. What is
the probability that he will get to use his laser pointer for at least 105
hours before he needs to buy more batteries?
5. Consider a box that contains 10 tickets, labelled
{1, 1, 1, 1, 2, 5, 5, 10, 10, 10}.
From this box, I propose to draw (with replacement) n = 40 tickets.
Let Y denote the sum of the values on the tickets that are drawn.
(a) To approximate P(170.5 < Y < 199.5), one Math 351 student
writes an R function box.model that simulates the proposed ex-
periment. Evaluating box.model is like observing a value, y, of
the random variable Y . Then she writes a loop that repeat-
edly evaluates box.model and computes the proportion of times
that box.model produces y ∈ (170.5, 199.5). She reasons that,
if she evaluates box.model a large number of times, then the
observed proportion of y ∈ (170.5, 199.5) should approximate
P(170.5 < Y < 199.5). Is her reasoning justified? Why or why
not?
(b) Another student suggests that P(170.5 < Y < 199.5) can be
approximated by performing the following R commands:
> se <- sqrt(585.6)
> pnorm(199.5,mean=184,sd=se)-
+ pnorm(170.5,mean=184,sd=se)
Do you agree? Why or why not?
(c) Which approach will produce the more accurate approximation
of P(170.5 < Y < 199.5)? Explain your reasoning.
6. A certain financial theory posits that daily fluctuations in stock prices
are independent random variables. Suppose that the daily price fluc-
tuations (in dollars) of a certain blue-chip stock are independent and
identically distributed random variables X1, X2, X3, . . ., with EXi =
0.01 and Var Xi = 0.01. (Thus, if today’s price of this stock is $50,
then tomorrow’s price is $50 + X1, etc.) Suppose that the daily price
fluctuations (in dollars) of a certain internet stock are independent and
identically distributed random variables Y1, Y2, Y3, . . ., with EYj = 0
and Var Yj = 0.25.
Now suppose that both stocks are currently selling for $50 per share
and you wish to invest $50 in one of these two stocks for a period of
400 market days. Assume that the costs of purchasing and selling a
share of either stock are zero.
(a) Approximate the probability that you will make a profit on your
investment if you purchase a share of the blue-chip stock.
(b) Approximate the probability that you will make a profit on your
investment if you purchase a share of the internet stock.
(c) Approximate the probability that you will make a profit of at
least $20 if you purchase a share of the blue-chip stock.
(d) Approximate the probability that you will make a profit of at
least $20 if you purchase a share of the internet stock.
(e) Assuming that the internet stock fluctuations and the blue-chip
stock fluctuations are independent, approximate the probability
that, after 400 days, the price of the internet stock will exceed
the price of the blue-chip stock.
Chapter 9
Inference
In Chapters 3–8 we developed methods for studying the behavior of ran-
dom variables. Given a specific probability distribution, we can calcu-
late the probabilities of various events. For example, knowing that Y ∼
Binomial(n = 100; p = 0.5), we can calculate P(40 ≤ Y ≤ 60). Roughly
speaking, statistics is concerned with the opposite sort of problem. For ex-
ample, knowing that Y ∼ Binomial(n = 100; p), where the value of p is
unknown, and having observed Y = y (say y = 32), what can we say about
p? The phrase statistical inference describes any procedure for extracting
information about a probability distribution from an observed sample.
The present chapter introduces the fundamental principles of statistical
inference. We will discuss three types of statistical inference—point esti-
mation, hypothesis testing, and set estimation—in the context of drawing
inferences about a single population mean. More precisely, we will consider
the following situation:
1. X1, . . . , Xn are independent and identically distributed random vari-
ables. We observe a sample, ~
x = {x1, . . . , xn}.
2. Both EXi = µ and Var Xi = σ2 exist and are finite. We are interested
in drawing inferences about the population mean µ, a quantity that is
fixed but unknown.
3. The sample size, n, is sufficiently large that we can use the normal
approximation provided by the Central Limit Theorem.
We begin, in Section 9.1, by examining a narrative that is sufficiently
nuanced to motivate each type of inferential technique. We then proceed to
discuss point estimation (Section 9.2), hypothesis testing (Sections 9.3 and
9.4), and set estimation (Section 9.5). Although we are concerned exclusively
with large-sample inferences about a single population mean, it should be
appreciated that this concern often arises in practice. More importantly,
the fundamental concepts that we introduce in this context are common to
virtually all problems that involve statistical inference.
9.1 A Motivating Example
We consider an artificial example that permits us to scrutinize the precise
nature of statistical reasoning. Two siblings, a magician (Arlen) and an at-
torney (Robin) agree to resolve their disputed ownership of an Erté painting
by tossing a penny. Arlen produces a penny and, just as Robin is about to
toss it in the air, Arlen smoothly suggests that spinning the penny on a table
might ensure better randomization. Robin assents and spins the penny. As
it spins, Arlen calls “Tails!” The penny comes to rest with Tails facing up
and Arlen takes possession of the Erté. Robin is left with the penny.
That evening, Robin wonders if she has been had. She decides to perform
an experiment. She spins the same penny on the same table 100 times and
observes 68 Tails. It occurs to Robin that perhaps spinning this penny was
not entirely fair, but she is reluctant to accuse her brother of impropriety
until she is convinced that the results of her experiment cannot be dismissed
as coincidence. How should she proceed?
It is easy to devise a mathematical model of Robin’s experiment: each
spin of the penny is a Bernoulli trial and the experiment is a sequence of
n = 100 trials. Let Xi denote the outcome of spin i, where Xi = 1 if Heads is
observed and Xi = 0 if Tails is observed. Then X1, . . . , X100 ∼ Bernoulli(p),
where p is the fixed but unknown (to Robin!) probability that a single
spin will result in Heads. The probability distribution Bernoulli(p) is our
mathematical abstraction of a population and the population parameter of
interest is µ = EXi = p, the population mean.
Let
$$Y = \sum_{i=1}^{100} X_i,$$
the total number of Heads obtained in n = 100 spins. Under the mathe-
matical model that we have proposed, Y ∼ Binomial(n = 100; p). In performing her
experiment, Robin observes a sample ~x = {x1, . . . , x100} and computes
$$y = \sum_{i=1}^{100} x_i,$$
the total number of Heads in her sample. In our narrative, y = 32.
We emphasize that p ∈ [0, 1] is fixed but unknown. Robin’s goal is to
draw inferences about this fixed but unknown quantity. We consider three
questions that she might ask:
1. What is the true value of p? More precisely, what is a reasonable guess
as to the true value of p?
2. Is p = 0.5? Specifically, is the evidence that p ≠ 0.5 so compelling that
Robin can comfortably accuse Arlen of impropriety?
3. What are plausible values of p? In particular, is there a subset of [0, 1]
that Robin can confidently claim contains the true value of p?
The first set of questions introduces a type of inference that statisticians
call point estimation. We have already encountered (in Chapter 7) a natural
approach to point estimation, the plug-in principle. In the present case, the
plug-in principle suggests estimating the theoretical probability of success,
p, by computing the observed proportion of successes,
$$\hat{p} = \frac{y}{n} = \frac{32}{100} = 0.32.$$
The second set of questions introduces a type of inference that statis-
ticians call hypothesis testing. Having calculated p̂ = 0.32 ≠ 0.5, Robin
is inclined to guess that p ≠ 0.5. But how compelling is the evidence that
p ≠ 0.5? Let us play devil’s advocate: perhaps p = 0.5, but chance produced
“only” y = 32 instead of a value nearer EY = np = 100 × 0.5 = 50. This
is a possibility that we can quantify. If Y ∼ Binomial(n = 100; p = 0.5),
then the probability that Y will deviate from its expected value by at least
|50 − 32| = 18 is
p = P (|Y − 50| ≥ 18)
= P(Y ≤ 32 or Y ≥ 68)
= P(Y ≤ 32) + P(Y ≥ 68)
= P(Y ≤ 32) + 1 − P(Y ≤ 67)
= pbinom(32,100,.5)+1-pbinom(67,100,.5)
= 0.0004087772.
This significance probability seems fairly small—perhaps small enough to
convince Robin that in fact p ≠ 0.5.
The third set of questions introduces a type of inference that statisticians
call set estimation. We have just tested the possibility that p = p0 in the
special case p0 = 0.5. Now, imagine testing the possibility that p = p0 for
each p0 ∈ [0, 1]. Those p0 that are not rejected as inconsistent with the
observed data, y = 32, will constitute a set of plausible values of p.
To implement this procedure, Robin will have to adopt a standard of
implausibility. Perhaps she decides to reject p0 as implausible when the
corresponding significance probability,
p = P (|Y − 100p0| ≥ |32 − 100p0|)
= P (Y − 100p0 ≥ |32 − 100p0|) + P (Y − 100p0 ≤ −|32 − 100p0|)
= P (Y ≥ 100p0 + |32 − 100p0|) + P (Y ≤ 100p0 − |32 − 100p0|) ,
satisfies p ≤ 0.1. Recalling that Y ∼ Binomial(100; p0) and using the R
function pbinom, some trial and error reveals that p > 0.1 if p0 lies in the
interval [0.245, 0.404]. (The endpoints of this interval are included.) Notice
that this interval does not contain p0 = 0.5, which we had already rejected
as implausible.
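The trial and error can be automated. In the sketch below, the helper function sig.prob is our own construction (not part of R); it computes the significance probability for a candidate value p0, and a fine grid of candidates is then screened, the retained values forming, approximately, the interval reported above.
> sig.prob <- function(p0) {
+ y <- 0:100
+ sum(dbinom(y, 100, p0)[abs(y - 100*p0) >= abs(32 - 100*p0)])
+ }
> grid <- seq(0.01, 0.99, by = 0.001)
> plausible <- grid[sapply(grid, sig.prob) > 0.1]
> range(plausible)   # approximately 0.245 to 0.404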
9.2 Point Estimation
The goal of point estimation is to make a reasonable guess of the unknown
value of a designated population quantity, e.g., the population mean. The
quantity that we hope to guess is called the estimand.
9.2.1 Estimating a Population Mean
Suppose that the estimand is µ, the population mean. The plug-in principle
suggests estimating µ by computing the mean of the empirical distribution.
This leads to the plug-in estimate of µ, µ̂ = x̄n. Thus, we estimate the mean
of the population by computing the mean of the sample, which is certainly
a natural thing to do.
We will distinguish between
$$\bar{x}_n = \frac{1}{n}\sum_{i=1}^{n} x_i,$$
a real number that is calculated from the sample ~x = {x1, . . . , xn}, and
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i,$$
a random variable that is a function of the random variables X1, . . . , Xn.
(Such a random variable is called a statistic.) The latter is our rule for
guessing, an estimation procedure or estimator. The former is the guess
itself, the result of applying our rule for guessing to the sample that we
observed, an estimate.
The quality of an individual estimate depends on the individual sample
from which it was computed and is therefore affected by chance variation.
Furthermore, it is rarely possible to assess how close to correct an individual
estimate may be. For these reasons, we study estimation procedures and
identify the statistical properties that these random variables possess. In
the present case, two properties are worth noting:
1. We know that EX̄n = µ. Thus, on the average, our procedure for
guessing the population mean produces the correct value. We express
this property by saying that X̄n is an unbiased estimator of µ.
The property of unbiasedness is intuitively appealing and sometimes
is quite useful. However, many excellent estimation procedures are
biased and some unbiased estimators are unattractive. For example,
EX1 = µ by definition, so X1 is also an unbiased estimator of µ; but
most researchers would find the prospect of estimating a population
mean with a single observation to be rather unappetizing. Indeed,
$$\mathrm{Var}\,\bar{X}_n = \frac{\sigma^2}{n} < \sigma^2 = \mathrm{Var}\,X_1,$$
so the unbiased estimator X̄n has smaller variance than the unbiased
estimator X1.
2. The Weak Law of Large Numbers states that $\bar{X}_n \xrightarrow{P} \mu$. Thus, as the
sample size increases, the estimator X̄n converges in probability to the
estimand µ. We express this property by saying that X̄n is a consistent
estimator of µ.
The property of consistency is essential—it is difficult to conceive a
circumstance in which one would be willing to use an estimation proce-
dure that might fail regardless of how much data one collected. Notice
that the unbiased estimator X1 is not consistent.
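Both properties are easy to glimpse by simulation. Below we repeatedly draw samples of size n = 25 from an (arbitrarily chosen) Normal(3, 1) population and estimate µ = 3 twice, once with X1 and once with X̄25; both collections of estimates are centered near 3, but the sample means are far less dispersed. (A sketch; the exact numbers vary from run to run.)
> est.X1   <- replicate(10000, rnorm(25, mean = 3, sd = 1)[1])
> est.Xbar <- replicate(10000, mean(rnorm(25, mean = 3, sd = 1)))
> c(mean(est.X1), mean(est.Xbar))   # both near 3 (unbiasedness)
> c(var(est.X1), var(est.Xbar))     # near 1 and 1/25 = 0.04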
9.2.2 Estimating a Population Variance
Now suppose that the estimand is σ2, the population variance. Although we
are concerned with drawing inferences about the population mean, we will
discover that hypothesis testing and set estimation may require knowing the
population variance. If the population variance is not known, then it must
be estimated from the sample.
The plug-in principle suggests estimating σ2 by computing the variance
of the empirical distribution. This leads to the plug-in estimate of σ2,
$$\widehat{\sigma^2} = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x}_n)^2.$$
The plug-in estimator of σ2 is biased; in fact,
$$E\left[\frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}_n\right)^2\right] = \frac{n-1}{n}\,\sigma^2 < \sigma^2.$$
This does not present any particular difficulties; however, if we desire an
unbiased estimator, then we simply multiply the plug-in estimator by the
factor n/(n − 1), obtaining
$$S_n^2 = \frac{n}{n-1}\left[\frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}_n\right)^2\right] = \frac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \bar{X}_n\right)^2. \qquad (9.1)$$
The statistic $S_n^2$ is the most popular estimator of σ2 and many books
refer to the estimate
$$s_n^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x}_n)^2$$
as the sample variance. (For example, the R command var computes $s_n^2$.) In
fact, both estimators are perfectly reasonable, consistent estimators of σ2.
We will prefer $S_n^2$ for the rather mundane reason that using it will simplify
some of the formulas that we will encounter.
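For a given sample, the two estimates differ only by the factor n/(n − 1), a relationship that is easy to verify directly (a sketch with an arbitrary sample of size 10):
> x <- rnorm(10)
> n <- length(x)
> plugin <- mean(x^2) - mean(x)^2     # plug-in estimate of the variance
> c((n/(n-1)) * plugin, var(x))       # the two entries agree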
9.3 Heuristics of Hypothesis Testing
Hypothesis testing is appropriate for situations in which one wants to guess
which of two possible statements about a population is correct. For example,
in Section 9.1 we considered the possibility that spinning a penny is fair
(p = 0.5) versus the possibility that spinning a penny is not fair (p ≠ 0.5).
The logic of hypothesis testing is of a familar sort:
If an alleged coincidence seems too implausible, then we tend to
believe that it wasn’t really a coincidence.
Man has engaged in this kind of reasoning for millenia. In Cicero’s De
Divinatione, Quintus exclaims:
“They are entirely fortuitous you say? Come! Come! Do you
really mean that? . . . When the four dice [astragali] produce the
venus-throw you may talk of accident: but suppose you made a
hundred casts and the venus-throw appeared a hundred times;
could you call that accidental?”1
The essence of hypothesis testing is captured by the familiar saying,
“Where there’s smoke, there’s fire.” In this section we formalize such rea-
soning, appealing to three prototypical examples:
1. Assessing circumstantial evidence in a criminal trial.
For simplicity, suppose that the defendant has been charged with a
single count of pre-meditated murder and that the jury has been in-
structed to either convict of murder in the first degree or acquit. The
defendant had motive, means, and opportunity. Furthermore, two
types of blood were found at the crime scene. One type was evidently
the victim’s. Laboratory tests demonstrated that the other type was
not the victim’s, but failed to demonstrate that it was not the defen-
dant’s. What should the jury do?
The evidence used by the prosecution to try to establish a connection
between the blood of the defendant and blood found at the crime scene
is probabilistic, i.e., circumstantial. It will likely be presented to the
jury in the language of mathematics, e.g., “Both blood samples have
characteristics x, y and z; yet only 0.5% of the population has such
blood.” The defense will argue that this is merely an unfortunate
coincidence. The jury must evaluate the evidence and decide whether
or not such a coincidence is too extraordinary to be believed, i.e.,
they must decide if their assent to the proposition that the defendant
committed the murder rises to a level of certainty sufficient to convict.
1
Cicero rejected the conclusion that a run of one hundred venus-throws is so improbable
that it must have been caused by divine intervention; however, Cicero was castigating the
practice of divination. Quintus was entirely correct in suggesting that a run of one hundred
venus-throws should not be rationalized as “entirely fortuitous.” A modern scientist might
conclude that an unusual set of astragali had been used to produce this remarkable result.
If the combined weight of the evidence against the defendant is a chance
of one in ten, then the jury is likely to acquit; if it is a chance of one
in a million, then the jury is likely to convict.
2. Assessing data from a scientific experiment.
A study2 of termite foraging behavior reached the controversial conclu-
sion that two species of termites compete for scarce food resources. In
this study, a site in the Sonoran desert was cleared of dead wood and
toilet paper rolls were set out as food sources. The rolls were examined
regularly over a period of many weeks and it was observed that only
very rarely was a roll infested with both species of termites. Was this
just a coincidence or were the two species competing for food?
The scientists constructed a mathematical model of termite foraging
behavior under the assumption that the two species forage indepen-
dently of each other. This model was then used to quantify the prob-
ability that infestation patterns such as the one observed arise due to
chance. This probability turned out to be just one in many billions—
a coincidence far too extraordinary to be dismissed as such—and the
researchers concluded that the two species were competing.
3. Assessing the results of Robin’s penny-spinning experiment.
In Section 9.1, we noted that Robin observed only y = 32 Heads when
she would expect EY = 50 Heads if indeed p = 0.5. This is a dis-
crepancy of |32 − 50| = 18, and we considered the possibility that
such a large discrepancy might have been produced by chance. More
precisely, we calculated p = P(|Y − EY | ≥ 18) under the assumption
that p = 0.5, obtaining p ≈ 0.0004. On this basis, we speculated that
Robin might be persuaded to accuse her brother of cheating.
In each of the preceding examples, a binary decision was based on a level
of assent to probabilistic evidence. At least conceptually, this level can be
quantified as a significance probability, which we loosely interpret to mean
the probability that chance would produce a coincidence at least as extraor-
dinary as the phenomenon observed. This begs an obvious question, which
we pose now for subsequent consideration: how small should a significance
probability be for one to conclude that a phenomenon is not a coincidence?
2
S.C. Jones and M.W. Trosset (1991). Interference competition in desert subterranean
termites. Entomologia Experimentalis et Applicata, 61:83–90.
We now proceed to explicate a formal model for statistical hypothesis
testing that was proposed by J. Neyman and E. S. Pearson in the late 1920s
and 1930s. Our presentation relies heavily on drawing simple analogies to
criminal law, which we suppose is a more familiar topic than statistics to
most students.
The States of Nature
The states of nature are the possible mechanisms that might have produced
the observed phenomenon. Mathematically, they are the possible probability
distributions under consideration. Thus, in the penny-spinning example, the
states of nature are the Bernoulli trials indexed by p ∈ [0, 1]. In hypothesis
testing, the states of nature are partitioned into two sets or hypotheses. In
the penny-spinning example, the hypotheses that we formulated were p = 0.5
(penny-spinning is fair) and p ≠ 0.5 (penny-spinning is not fair); in the legal
example, the hypotheses are that the defendant did commit the murder (the
defendant is factually guilty) and that the defendant did not commit the
murder (the defendant is factually innocent).
The goal of hypothesis testing is to decide which hypothesis is correct,
i.e., which hypothesis contains the true state of nature. In the penny-
spinning example, Robin wants to determine whether or not penny-spinning
is fair. In the termite example, Jones and Trosset wanted to determine
whether or not termites were foraging independently. More generally, scien-
tists usually partition the states of nature into a hypothesis that corresponds
to a theory that the experiment is designed to investigate and a hypothesis
that corresponds to a chance explanation; the goal of hypothesis testing is
to decide which explanation is correct. In a criminal trial, the jury would
like to determine whether the defendant is factually innocent or factually
guilty—in the words of the United States Supreme Court in Bullington v.
Missouri (1981):
Underlying the question of guilt or innocence is an objective
truth: the defendant did or did not commit the crime. From
the time an accused is first suspected to the time the decision on
guilt or innocence is made, our system is designed to enable the
trier of fact to discover that truth.
Formulating appropriate hypotheses can be a delicate business. In the
penny-spinning example, we formulated hypotheses p = 0.5 and p ≠ 0.5.
These hypotheses are appropriate if Robin wants to determine whether or
not penny-spinning is fair. However, one can easily imagine that Robin is
not interested in whether or not penny-spinning is fair, but rather in whether
or not her brother gained an advantage by using the procedure. If so, then
appropriate hypotheses would be p < 0.5 (penny-spinning favored Arlen)
and p ≥ 0.5 (penny-spinning did not favor Arlen).
The Actor
The states of nature having been partitioned into two hypotheses, it is neces-
sary for a decisionmaker (the actor) to choose between them. In the penny-
spinning example, the actor is Robin; in the termite example, the actor is
the team of researchers; in the legal example, the actor is the jury.
Statisticians often describe hypothesis testing as a game that they play
against Nature. To study this game in greater detail, it becomes necessary
to distinguish between the two hypotheses under consideration. In each
example, we declare one hypothesis to be the null hypothesis (H0) and the
other to be the alternative hypothesis (H1). Roughly speaking, the logic for
determining which hypothesis is H0 and which is H1 is the following: H0
should be the hypothesis to which one defaults if the evidence is equivocal
and H1 should be the hypothesis that one requires compelling evidence to
embrace.
We shall have a great deal more to say about distinguishing null and
alternative hypotheses, but for now suppose that we have declared the fol-
lowing: (1) H0: the defendant did not commit the murder, (2) H0: the
termites are foraging independently, and (3) H0: spinning the penny is fair.
Having done so, the game takes the following form:
                               State of Nature
                            H0                H1
     Actor’s     H0                       Type II error
     Choice      H1     Type I error
There are four possible outcomes to this game, two of which are favorable
and two of which are unfavorable. If the actor chooses H1 when in fact H0
is true, then we say that a Type I error has been committed. If the actor
chooses H0 when in fact H1 is true, then we say that a Type II error has been
committed. In a criminal trial, a Type I error occurs when a jury convicts a
factually innocent defendant and a Type II error occurs when a jury acquits
a factually guilty defendant.
Innocent Until Proven Guilty
Because we are concerned with probabilistic evidence, any decision proce-
dure that we devise will occasionally result in error. Obviously, we would
like to devise procedures that minimize the probabilities of committing er-
rors. Unfortunately, there is an inevitable tradeoff between Type I and Type
II error that precludes simultaneously minimizing the probabilities of both
types. To appreciate this, consider two juries. The first jury always acquits
and the second jury always convicts. Then the first jury never commits a
Type I error and the second jury never commits a Type II error. The only
way to simultaneously better both juries is to never commit an error of either
type, which is impossible with probabilistic evidence.
The distinguishing feature of hypothesis testing (and Anglo-American
criminal law) is the manner in which it addresses the tradeoff between Type
I and Type II error. The Neyman-Pearson formulation of hypothesis testing
accords the null hypothesis a privileged status: H0 will be maintained unless
there is compelling evidence against it. It is instructive to contrast the
asymmetry of this formulation with situations in which neither hypothesis is
privileged. In statistics, this is the problem of determining which hypothesis
better explains the data. This is discrimination, not hypothesis testing. In
law, this is the problem of determining whether the defendant or the plaintiff
has the stronger case. This is the criterion in civil suits, not in criminal trials.
In the penny-spinning example, Robin required compelling evidence
against the privileged null hypothesis that penny-spinning is fair to over-
come her scruples about accusing her brother of impropriety. In the termite
example, Jones and Trosset required compelling evidence against the privi-
leged null hypothesis that two termite species forage independently in order
to write a credible article claiming that two species were competing with each
other. In a criminal trial, the principle of according the null hypothesis a
privileged status has a familiar characterization: the defendant is “innocent
until proven guilty.”
According the null hypothesis a privileged status is equivalent to declar-
ing Type I errors to be more egregious than Type II errors. This connection
was eloquently articulated by Justice John Harlan in a 1970 Supreme Court
decision: “If, for example, the standard of proof for a criminal trial were a
preponderance of the evidence rather than proof beyond a reasonable doubt,
there would be a smaller risk of factual errors that result in freeing guilty
persons, but a far greater risk of factual errors that result in convicting the
innocent.”
A preference for Type II errors instead of Type I errors can often be
glimpsed in scientific applications. For example, because science is conserva-
tive, it is generally considered better to wrongly accept than to wrongly reject
the prevailing wisdom that termite species forage independently. Moreover,
just as this preference is the foundation of statistical hypothesis testing, so
is it a fundamental principle of criminal law. In his famous Commentaries,
William Blackstone opined that “it is better that ten guilty persons escape,
than that one innocent man suffer;” and in his influential Practical Treatise
on the Law of Evidence (1824), Thomas Starkie suggested that “The maxim
of the law. . . is that it is better that ninety-nine. . . offenders shall escape than
that one innocent man be condemned.” In Reasonable Doubts (1996), Alan
Dershowitz quotes both maxims and notes anecdotal evidence that jurors
actually do prefer committing Type II to Type I errors: on Prime Time
Live (October 4, 1995), O.J. Simpson juror Anise Aschenbach stated, “If
we made a mistake, I would rather it be a mistake on the side of a person’s
innocence than the other way.”
Beyond a Reasonable Doubt
To actualize its antipathy to Type I errors, the Neyman-Pearson formulation
imposes an upper bound on the maximal probability of Type I error that will
be tolerated. This bound is the significance level, conventionally denoted α.
The significance level is specified (prior to examining the data) and only
decision rules for which the probability of Type I error is no greater than α
are considered. Such tests are called level α tests.
To fix ideas, we consider the penny-spinning example and specify a signif-
icance level of α. Let p denote the significance probability that results from
performing the analysis in Section 9.1 and consider a rule that rejects the null
hypothesis H0 : p = 0.5 if and only if p ≤ α. Then a Type I error occurs if
and only if p = 0.5 and we observe y such that p = P(|Y −50| ≥ |y−50|) ≤ α.
We claim that the probability of observing such a y is just α, in which case
we have constructed a level α test.
To see why this is the case, let W = |Y −50| denote the test statistic. The
decision to accept or reject the null hypothesis H0 depends on the observed
value, w, of this random variable. Let
p(w) = PH0 (W ≥ w)
denote the significance probability associated with w. Notice that w is the
1 − p(w) quantile of the random variable W under H0. Let q denote the
1 − α quantile of W under H0, i.e.,
α = PH0 (W ≥ q) .
We reject H0 if and only if we observe
PH0 (W ≥ w) = p(w) ≤ α = PH0 (W ≥ q) ,
i.e., if and only if w ≥ q. If H0 is true, then the probability of committing a
Type I error is precisely
PH0 (W ≥ q) = α,
as claimed above. We conclude that α quantifies the level of assent that we
require to risk rejecting H0, i.e., the significance level specifies how small a
significance probability is required in order to conclude that a phenomenon
is not a coincidence.
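The equivalence between "reject H0 when p(w) ≤ α" and "reject H0 when w ≥ q," and the resulting Type I error probability of α, can be checked numerically. The following R sketch is purely illustrative: it assumes a continuous test statistic, taking W = |Z| with Z ∼ Normal(0, 1) under H0, so that p(w) = 2Φ(−w).

# Illustration with a continuous test statistic: W = |Z|, Z ~ Normal(0, 1) under H0,
# so that the significance probability is p(w) = P(W >= w) = 2 * pnorm(-w).
set.seed(1)
alpha <- 0.05
q <- qnorm(1 - alpha / 2)            # the 1 - alpha quantile of W under H0
w <- abs(rnorm(100000))              # many realizations of W under H0
p.w <- 2 * pnorm(-w)                 # significance probability p(w) for each realization
all((p.w <= alpha) == (w >= q))      # should be TRUE: p(w) <= alpha iff w >= q
mean(p.w <= alpha)                   # observed Type I error rate, close to alpha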
In statistics, the significance level α is a number in the interval [0, 1]. It
is not possible to quantitatively specify the level of assent required for a jury
to risk convicting an innocent defendant, but the legal principle is identical:
in a criminal trial, the operative significance level is beyond a reasonable
doubt. Starkie (1824) described the possible interpretations of this phrase in
language derived from British empirical philosopher John Locke:
Evidence which satisfied the minds of the jury of the truth of the
fact in dispute, to the entire exclusion of every reasonable doubt,
constitute full proof of the fact. . . . Even the most direct evidence
can produce nothing more than such a high degree of probability
as amounts to moral certainty. From the highest it may decline,
by an infinite number of gradations, until it produces in the mind
nothing more than a preponderance of assent in favour of the
particular fact.
The gradations that Starkie described are not intrinsically numeric, but it is
evident that the problem of defining reasonable doubt in criminal law is the
problem of specifying a significance level in statistical hypothesis testing.
In both criminal law and statistical hypothesis testing, actions typically
are described in language that acknowledges the privileged status of the null
hypothesis and emphasizes that the decision criterion is based on the prob-
ability of committing a Type I error. In describing the action of choosing
H0, many statisticians prefer the phrase “fail to reject the null hypothesis”
to the less awkward “accept the null hypothesis” because choosing H0 does
not imply an affirmation that H0 is correct, only that the level of evidence
against H0 is not sufficiently compelling to warrant its rejection at signifi-
cance level α. In precise analogy, juries render verdicts of “not guilty” rather
than “innocent” because acquittal does not imply an affirmation that the de-
fendant did not commit the crime, only that the level of evidence against
the defendant’s innocence was not beyond a reasonable doubt.3
And To a Moral Certainty
The Neyman-Pearson formulation of statistical hypothesis testing is a math-
ematical abstraction. Part of its generality derives from its ability to accom-
modate any specified significance level. As a practical matter, however, α
must be specified and we now ask how to do so.
In the penny-spinning example, Robin is making a personal decision and
is free to choose α as she pleases. In the termite example, the researchers
were guided by decades of scientific convention. In 1925, in his extremely
influential Statistical Methods for Research Workers, Ronald Fisher4 sug-
gested that α = 0.05 and α = 0.01 are often appropriate significance levels.
These suggestions were intended as practical guidelines, but they have be-
come enshrined (especially α = 0.05) in the minds of many scientists as a
sort of Delphic determination of whether or not a hypothesized theory is
true. While some degree of conformity is desirable (it inhibits a researcher
from choosing—after the fact—a significance level that will permit rejecting
the null hypothesis in favor of the alternative in which s/he may be invested),
many statisticians are disturbed by the scientific community’s slavish devo-
tion to a single standard and by its often uncritical interpretation of the
resulting conclusions.5
The imposition of an arbitrary standard like α = 0.05 is possible be-
cause of the precision with which mathematics allows hypothesis testing to
be formulated. Applying this precision to legal paradigms reveals the issues
3 In contrast, Scottish law permits a jury to return a verdict of “not proven,” thereby reserving a verdict of “not guilty” to affirm a defendant’s innocence.
4 Sir Ronald Fisher is properly regarded as the single most important figure in the history of statistics. It should be noted that he did not subscribe to all of the particulars of the Neyman-Pearson formulation of hypothesis testing. His fundamental objection to it, that it may not be possible to fully specify the alternative hypothesis, does not impact our development, since we are concerned with situations in which both hypotheses are fully specified.
5 See, for example, J. Cohen (1994). The world is round (p < .05). American Psychologist, 49:997–1003.
with great clarity, but is of little practical value when specifying a signifi-
cance level, i.e., when trying to define the meaning of “beyond a reasonable
doubt.” Nevertheless, legal scholars have endeavored for centuries to po-
sition “beyond a reasonable doubt” along the infinite gradations of assent
that correspond to the continuum [0, 1] from which α is selected. The phrase
“beyond a reasonable doubt” is still often connected to the archaic phrase
“to a moral certainty.” This connection survived because moral certainty
was actually a significance level, intended to invoke an enormous body of
scholarly writings and specify a level of assent:
Throughout this development two ideas to be conveyed to the
jury have been central. The first idea is that there are two realms
of human knowledge. In one it is possible to obtain the absolute
certainty of mathematical demonstration, as when we say that
the square of the hypotenuse is equal to the sum of the squares
of the other two sides of a right triangle. In the other, which is
the empirical realm of events, absolute certainty of this kind is
not possible. The second idea is that, in this realm of events, just
because absolute certainty is not possible, we ought not to treat
everything as merely a guess or a matter of opinion. Instead,
in this realm there are levels of certainty, and we reach higher
levels of certainty as the quantity and quality of the evidence
available to us increase. The highest level of certainty in this
empirical realm in which no absolute certainty is possible is what
traditionally was called “moral certainty,” a certainty which there
was no reason to doubt.6
Although it is rarely (if ever) possible to quantify a juror’s level of as-
sent, those comfortable with statistical hypothesis testing may be inclined
to wonder what values of α correspond to conventional interpretations of
reasonable doubt. If a juror believes that there is a 5 percent probability
that chance alone could have produced the circumstantial evidence presented
against a defendant accused of pre-meditated murder, is the juror’s level of
assent beyond a reasonable doubt and to a moral certainty? We hope not.
We may be willing to tolerate a 5 percent probability of a Type I error
when studying termite foraging behavior, but the analogous prospect of a 5
6 Barbara J. Shapiro (1991). “Beyond Reasonable Doubt” and “Probable Cause”: Historical Perspectives on the Anglo-American Law of Evidence, University of California Press, Berkeley, p. 41.
percent probability of wrongly convicting a factually innocent defendant is
abhorrent.7
In fact, little is known about how anyone in the legal system quantifies
reasonable doubt. Mary Gray cites a 1962 Swedish case in which a judge try-
ing an overtime parking case explicitly ruled that a significance probability
of 1/20736 was beyond reasonable doubt but that a significance probabil-
ity of 1/144 was not.8 In contrast, Alan Dershowitz relates a provocative
classroom exercise in which his students preferred to acquit in one scenario
with a significance probability of 10 percent and to convict in an analogous
scenario with a significance probability of 15 percent.9
9.4 Testing Hypotheses About a Population Mean
We now apply the heuristic reasoning described in Section 9.3 to the problem
of testing hypotheses about a population mean. Initially, we consider testing
H0 : µ = µ0 versus H1 : µ ≠ µ0.
The intuition that we are seeking to formalize is fairly straightforward. By
virtue of the Weak Law of Large Numbers, the observed sample mean ought
to be fairly close to the true population mean. Hence, if the null hypothesis
is true, then x̄n ought to be fairly close to the hypothesized mean, µ0. If we
observe X̄n = x̄n far from µ0, then we guess that µ ≠ µ0, i.e., we reject H0.
Given a significance level α, we want to calculate a significance probabil-
ity p. The significance level is a real number that is fixed by and known to
the researcher, e.g., α = 0.05. The significance probability is a real number
that is determined by the sample, e.g., p ≈ 0.0004 in Section 9.1. We will
reject H0 if and only if p ≤ α.
In Section 9.3, we interpreted the significance probability as the prob-
ability that chance would produce a coincidence at least as extraordinary
as the phenomenon observed. Our first challenge is to make this notion
mathematically precise; how we do so depends on the hypotheses that we
7 This discrepancy illustrates that the consequences of committing a Type I error influence the choice of a significance level. The consequences of Jones and Trosset wrongly concluding that termite species compete are not commensurate with the consequences of wrongly imprisoning a factually innocent citizen.
8 M.W. Gray (1983). Statistics and the law. Mathematics Magazine, 56:67–81. As a graduate of Rice University, I cannot resist quoting another of Gray’s examples of statistics-as-evidence: “In another case, that of millionaire W. M. Rice, the signature on his will was disputed, and the will was declared a forgery on the basis of probability evidence. As a result, the fortune of Rice went to found Rice Institute.”
9 A.M. Dershowitz (1996). Reasonable Doubts, Simon & Schuster, New York, p. 40.
want to test. In the present situation, we submit that a natural significance
probability is
p = Pµ0 ( |X̄n − µ0| ≥ |x̄n − µ0| ) .       (9.2)
To understand why this is the case, it is essential to appreciate the following
details:
1. The hypothesized mean, µ0, is a real number that is fixed by and
known to the researcher.
2. The estimated mean, x̄n, is a real number that is calculated from the
observed sample and known to the researcher; hence, the quantity
|x̄n − µ0| is a fixed real number.
3. The estimator, X̄n, is a random variable. Hence, the inequality
|X̄n − µ0| ≥ |x̄n − µ0|       (9.3)
defines an event that may or may not occur each time the experiment
is performed. Specifically, (9.3) is the event that the sample mean
assumes a value at least as far from the hypothesized mean as the
researcher observed.
4. The significance probability, p, is the probability that (9.3) occurs.
The notation Pµ0 reminds us that we are interested in the probability
that this event occurs under the assumption that the null hypothesis is
true, i.e., under the assumption that µ = µ0.
Having formulated an appropriate significance probability for testing H0 :
µ = µ0 versus H1 : µ ≠ µ0, our second challenge is to find a way to compute
p. We remind the reader that we have assumed that n is large.
Case 1: The population variance is known or specified by the null
hypothesis.
We define two new quantities, the random variable
Zn = (X̄n − µ0) / (σ/√n)
and the real number
z = (x̄n − µ0) / (σ/√n).
Under the null hypothesis H0 : µ = µ0, Zn is approximately Normal(0, 1) by the Central
Limit Theorem; hence,
p = Pµ0 ( |X̄n − µ0| ≥ |x̄n − µ0| )
  = 1 − Pµ0 ( −|x̄n − µ0| < X̄n − µ0 < |x̄n − µ0| )
  = 1 − Pµ0 ( −|x̄n − µ0|/(σ/√n) < (X̄n − µ0)/(σ/√n) < |x̄n − µ0|/(σ/√n) )
  = 1 − Pµ0 ( −|z| < Zn < |z| )
  ≈ 1 − [Φ(|z|) − Φ(−|z|)]
  = 2Φ(−|z|),
which can be computed by the R command
> 2*pnorm(-abs(z))
or by consulting a table. An illustration of the normal probability of interest
is sketched in Figure 9.1.
Figure 9.1: P(|Z| ≥ |z| = 1.5)
An important example of Case 1 occurs when Xi ∼ Bernoulli(µ). In this
case, σ2 = Var Xi = µ(1 − µ); hence, under the null hypothesis that µ = µ0,
σ2 = µ0(1 − µ0) and
z = (x̄n − µ0) / √( µ0(1 − µ0)/n ) .
Example 9.1 To test H0 : µ = 0.5 versus H1 : µ 6= 0.5 at significance
level α = 0.05, we perform n = 2500 trials and observe 1200 successes.
Should H0 be rejected?
The observed proportion of successes is x̄n = 1200/2500 = 0.48, so the
value of the test statistic is
z = (0.48 − 0.50) / √( 0.5(1 − 0.5)/2500 ) = −0.02 / (0.5/50) = −2
and the significance probability is
p ≈ 2Φ(−2) ≈ 0.0456 < 0.05 = α.
Because p ≤ α, we reject H0.
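One way to carry out the computation in Example 9.1 in R (it is just a transcription of the arithmetic above):

# Example 9.1: test H0: mu = 0.5 versus H1: mu != 0.5 with n = 2500 Bernoulli trials.
n <- 2500; mu0 <- 0.5
xbar <- 1200 / n                                  # observed proportion of successes, 0.48
z <- (xbar - mu0) / sqrt(mu0 * (1 - mu0) / n)     # test statistic, equals -2
p <- 2 * pnorm(-abs(z))                           # significance probability, about 0.046
p <= 0.05                                         # TRUE, so H0 is rejected at alpha = 0.05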
Case 2: The population variance is unknown.
Because σ2 is unknown, we must estimate it from the sample. We will use
the estimator introduced in Section 9.2,
S²n = (1/(n − 1)) Σ (Xi − X̄n)²,  where the sum extends over i = 1, . . . , n,
and define
Tn = (X̄n − µ0) / (Sn/√n).
Because S²n is a consistent estimator of σ², i.e., S²n → σ² in probability, it follows from
Theorem 8.3 that
lim n→∞ P(Tn ≤ z) = Φ(z).
Just as we could use a normal approximation to compute probabilities
involving Zn, so can we use a normal approximation to compute probabili-
ties involving Tn. The fact that we must estimate σ2 slightly degrades the
quality of the approximation; however, because n is large, we should observe
an accurate estimate of σ2 and the approximation should not suffer much.
Accordingly, we proceed as in Case 1, using
t = (x̄n − µ0) / (sn/√n)
instead of z.
Example 9.2 To test H0 : µ = 20 versus H1 : µ 6= 20 at significance
level α = 0.05, we collect n = 400 observations, observing x̄n = 21.82935
and sn = 24.70037. Should H0 be rejected?
The value of the test statistic is
t = (21.82935 − 20) / (24.70037/20) = 1.481234
and the significance probability is
p ≈ 2Φ(−1.481234) ≈ 0.1385441 > 0.05 = α.
Because p > α, we decline to reject H0.
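The analogous R computation for Example 9.2:

# Example 9.2: test H0: mu = 20 versus H1: mu != 20 with n = 400 observations.
n <- 400; mu0 <- 20
xbar <- 21.82935; s <- 24.70037
t.stat <- (xbar - mu0) / (s / sqrt(n))    # test statistic, about 1.4812
p <- 2 * pnorm(-abs(t.stat))              # significance probability, about 0.1385
p <= 0.05                                 # FALSE, so H0 is not rejected at alpha = 0.05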
9.4.1 One-Sided Hypotheses
In Section 9.3 we suggested that, if Robin is not interested in whether or
not penny-spinning is fair but rather in whether or not it favors her brother,
then appropriate hypotheses would be p < 0.5 (penny-spinning favors Arlen)
and p ≥ 0.5 (penny-spinning does not favor Arlen). These are examples of
one-sided (as opposed to two-sided) hypotheses.
More generally, we will consider two canonical cases:
H0 : µ ≤ µ0 versus H1 : µ > µ0
H0 : µ ≥ µ0 versus H1 : µ < µ0
Notice that the possibility of equality, µ = µ0, belongs to the null hypothesis
in both cases. This is a technical necessity that arises because we compute
significance probabilities using the µ in H0 that is nearest H1. For such a
µ to exist, the boundary between H0 and H1 must belong to H0. We will
return to this necessity later in this section.
Instead of memorizing different formulas for different situations, we will
endeavor to understand which values of our test statistic tend to undermine
the null hypothesis in question. Such reasoning can be used on a case-by-
case basis to determine the relevant significance probability. In so doing,
sketching crude pictures can be quite helpful!
Consider testing each of the following:
(a) H0 : µ = µ0 versus H1 : µ ≠ µ0
(b) H0 : µ ≤ µ0 versus H1 : µ > µ0
(c) H0 : µ ≥ µ0 versus H1 : µ < µ0
Qualitatively, we will be inclined to reject the null hypothesis if
(a) We observe x̄n ≪ µ0 or x̄n ≫ µ0, i.e., if we observe |x̄n − µ0| ≫ 0.
This is equivalent to observing |t| ≫ 0, so the significance probability
is
pa = Pµ0 (|Tn| ≥ |t|) .
(b) We observe x̄n ≫ µ0, i.e., if we observe x̄n − µ0 ≫ 0.
This is equivalent to observing t ≫ 0, so the significance probability is
pb = Pµ0 (Tn ≥ t) .
(c) We observe x̄n ≪ µ0, i.e., if we observe x̄n − µ0 ≪ 0.
This is equivalent to observing t ≪ 0, so the significance probability is
pc = Pµ0 (Tn ≤ t) .
Example 9.2 (continued) Applying the above reasoning, we obtain
the significance probabilities sketched in Figure 9.2. Notice that pb = pa/2
and that pb + pc = 1. The probability pb is fairly small, about 7%. This
makes sense: we observed x̄n ≈ 21.8 > 20 = µ0, so the sample does contain
some evidence that µ > 20. However, the statistical test reveals that the
strength of this evidence is not sufficiently compelling to reject H0 : µ ≤ 20.
In contrast, the probability pc is quite large, about 93%. This also
makes sense, because the sample contains no evidence that µ < 20. In such
instances, performing a statistical test only confirms that which is transpar-
ent from comparing the sample and hypothesized means.
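All three significance probabilities can be obtained from the observed test statistic; for instance, in R:

# One-sided and two-sided significance probabilities for Example 9.2.
t.stat <- 1.481234
pa <- 2 * pnorm(-abs(t.stat))             # (a) H1: mu != 20, about 0.139
pb <- pnorm(t.stat, lower.tail = FALSE)   # (b) H1: mu > 20, about 0.069
pc <- pnorm(t.stat)                       # (c) H1: mu < 20, about 0.931
c(pa, pb, pc)
c(pb - pa / 2, pb + pc)                   # pb = pa/2 and pb + pc = 1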
9.4.2 Formulating Suitable Hypotheses
Examples 9.1 and 9.2 illustrated the mechanics of hypothesis testing. Once
understood, the above techniques for calculating significance probabilities
are fairly straightforward and can be applied routinely to a wide variety of
problems. In contrast, determining suitable hypotheses to be tested requires
one to carefully consider each situation presented. These determinations
cannot be reduced to formulas. To make them requires good judgment,
which can only be acquired through practice.
We now consider some examples that illustrate some important issues
that arise when formulating hypotheses. In each case, there are certain
key questions that must be answered: Why was the experiment performed?
Who needs to be convinced of what? Is one type of error perceived as more
important than the other?
[Figure 9.2 comprises three panels, labeled (a), (b), and (c), one for each of the three pairs of hypotheses considered above.]
Figure 9.2: Significance probabilities for Example 9.2. Each significance
probability is the area of the corresponding shaded region.
Example 9.3 A group of concerned parents wants speed humps in-
stalled in front of a local elementary school, but the city traffic office is
reluctant to allocate funds for this purpose. Both parties agree that humps
should be installed if the average speed of all motorists who pass the school
while it is in session exceeds the posted speed limit of 15 miles per hour
(mph). Let µ denote the average speed of the motorists in question. A ran-
dom sample of n = 150 of these motorists was observed to have a sample
mean of x̄ = 15.3 mph with a sample standard deviation of s = 2.5 mph.
(a) State null and alternative hypotheses that are appropriate from the
parents’ perspective.
(b) State null and alternative hypotheses that are appropriate from the city
traffic office’s perspective.
(c) Compute the value of an appropriate test statistic.
(d) Adopting the parents’ perspective and assuming that they are willing to
risk a 1% chance of committing a Type I error, what action should be
taken? Why?
(e) Adopting the city traffic office’s perspective and assuming that they are
willing to risk a 10% chance of committing a Type I error, what action
should be taken? Why?
Solution
(a) The parents would prefer to err on the side of protecting their chil-
dren, so they would rather build unnecessary speed humps than forego
necessary speed humps. Hence, they would like to see the hypothe-
ses formulated so that foregoing necessary speed humps is a Type I
error. Since speed humps will be built if it is concluded that µ > 15
and will not be built if it is concluded that µ < 15, the parents would
prefer a null hypothesis of H0 : µ ≥ 15 and an alternative hypothesis
of H1 : µ < 15.
Equivalently, if we suppose that the purpose of the experiment is to
provide evidence to the parents, then it is clear that the parents need to
be persuaded that speed humps are unnecessary. The null hypothesis
to which they will default in the absence of compelling evidence is
H0 : µ ≥ 15. They will require compelling evidence to the contrary,
H1 : µ < 15.
(b) The city traffic office would prefer to err on the side of conserving their
budget for important public works, so they would rather forego neces-
sary speed humps than build unnecessary speed humps. Hence, they
would like to see the hypotheses formulated so that building unneces-
sary speed humps is a Type I error. Since speed humps will be built
if it is concluded that µ > 15 and will not be built if it is concluded
that µ < 15, the city traffic office would prefer a null hypothesis of
H0 : µ ≤ 15 and an alternative hypothesis of H1 : µ > 15.
Equivalently, if we suppose that the purpose of the experiment is to
provide evidence to the city traffic office, then it is clear that the office needs
to be persuaded that speed humps are necessary. The null hypothesis
to which it will default in the absence of compelling evidence is H0 :
µ ≤ 15. It will require compelling evidence to the contrary, H1 : µ >
15.
(c) Because the population variance is unknown, the appropriate test sta-
tistic is
t = (x̄ − µ0) / (s/√n) = (15.3 − 15) / (2.5/√150) ≈ 1.47.
(d) We would reject the null hypothesis in (a) if x̄ is sufficiently smaller
than µ0 = 15. Since x̄ = 15.3 > 15, there is no evidence against
H0 : µ ≥ 15. The null hypothesis is retained and speed humps are
installed.
(e) We would reject the null hypothesis in (b) if x̄ is sufficiently larger
than µ0 = 15, i.e., for sufficiently large positive values of t. Hence, the
significance probability is
p = P(Tn ≥ t) ≈ P(Z ≥ 1.47) = 1 − Φ(1.47) ≈ 0.071 < 0.10 = α.
Because p ≤ α, the traffic office should reject H0 : µ ≤ 15 and install
speed humps.
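Parts (c)–(e) can be reproduced in R along the following lines:

# Example 9.3: average speed of motorists passing the school.
n <- 150; mu0 <- 15
xbar <- 15.3; s <- 2.5
t.stat <- (xbar - mu0) / (s / sqrt(n))        # about 1.47
# (e) Traffic office: H0: mu <= 15 versus H1: mu > 15 at alpha = 0.10.
p <- pnorm(t.stat, lower.tail = FALSE)        # about 0.071
p <= 0.10                                     # TRUE: reject H0 and install speed humps
# (d) Parents: H0: mu >= 15 versus H1: mu < 15; xbar = 15.3 > 15 provides no
# evidence against H0, so H0 is retained and speed humps are installed.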
Example 9.4 Imagine a variant of the Lanarkshire milk experiment
described in Section 1.2. Suppose that it is known that 10-year-old Scottish
schoolchildren gain an average of 0.5 pounds per month. To study the effect
of daily milk supplements, a random sample of n = 1000 such children is
drawn. Each child receives a daily supplement of 3/4 cups pasteurized milk.
The study continues for four months and the weight gained by each student
during the study period is recorded. Formulate suitable null and alternative
hypotheses for testing the effect of daily milk supplements.
Solution Let X1, . . . , Xn denote the weight gains and let µ = EXi.
Then milk supplements are effective if µ > 2 and ineffective if µ < 2. One of
these possibilities will be declared the null hypothesis, the other will be de-
clared the alternative hypothesis. The possibility µ = 2 will be incorporated
into the null hypothesis.
The alternative hypothesis should be the one for which compelling ev-
idence is desired. Who needs to be convinced of what? The parents and
teachers already believe that daily milk supplements are beneficial and would
have to be convinced otherwise. But this is not the purpose of the study!
The study is performed for the purpose of obtaining objective scientific evi-
dence that supports prevailing popular wisdom. It is performed to convince
government bureaucrats that spending money on daily milk supplements for
schoolchildren will actually have a beneficial effect. The parents and teach-
ers hope that the study will provide compelling evidence of this effect. Thus,
the appropriate alternative hypothesis is H1 : µ > 2 and the appropriate null
hypothesis is H0 : µ ≤ 2.
9.4.3 Statistical Significance and Material Significance
The significance probability is the probability that a coincidence at least as
extraordinary as the phenomenon observed can be produced by chance. The
smaller the significance probability, the more confidently we reject the null
hypothesis. However, it is one thing to be convinced that the null hypothesis
is incorrect—it is something else to assert that the true state of nature is
very different from the state(s) specified by the null hypothesis.
Example 9.5 A government agency requires prospective advertisers to
provide statistical evidence that documents their claims. In order to claim
that a gasoline additive increases mileage, an advertiser must fund an inde-
pendent study in which n vehicles are tested to see how far they can drive,
first without and then with the additive. Let Xi denote the increase in miles
per gallon (mpg with the additive minus mpg without the additive) observed
for vehicle i and let µ = EXi. The null hypothesis H0 : µ ≤ 1 is tested
against the alternative hypothesis H1 : µ > 1 and advertising is authorized
if H0 is rejected at a significance level of α = 0.05.
Consider the experiences of two prospective advertisers:
1. A large corporation manufactures an additive that increases mileage
by an average of µ = 1.01 miles per gallon. The corporation funds
a large study of n = 900 vehicles in which x̄ = 1.01 and s = 0.1 are
observed. This results in a test statistic of
t = (x̄ − µ0) / (s/√n) = (1.01 − 1.00) / (0.1/√900) = 3
and a significance probability of
p = P(Tn ≥ t) ≈ P(Z ≥ 3) = 1 − Φ(3) ≈ 0.00135 < 0.05 = α.
The null hypothesis is decisively rejected and advertising is authorized.
2. An amateur automotive mechanic invents an additive that increases
mileage by an average of µ = 1.21 miles per gallon. The mechanic
funds a small study of n = 9 vehicles in which x̄ = 1.21 and s = 0.4
are observed. This results in a test statistic of
t = (x̄ − µ0) / (s/√n) = (1.21 − 1.00) / (0.4/√9) = 1.575
and (assuming that the normal approximation remains valid) a signif-
icance probability of
p = P(Tn ≥ t) ≈ P(Z ≥ 1.575) = 1 − Φ(1.575) ≈ 0.05763 > 0.05 = α.
The null hypothesis is not rejected and advertising is not authorized.
These experiences are highly illuminating. Although the corporation’s
mean increase of µ = 1.01 mpg is much closer to the null hypothesis than the
mechanic’s mean increase of µ = 1.21 mpg, the corporation’s study resulted
in a much smaller significance probability. This occurred because of the
smaller standard deviation and larger sample size in the corporation’s study.
As a result, the government could be more confident that the corporation’s
product had a mean increase of more than 1.0 mpg than they could be that
the mechanic’s product had a mean increase of more than 1.0 mpg.
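Both studies can be checked in R; as in the text, the normal approximation is used even for the mechanic's small sample:

# Example 9.5: both mileage studies test H0: mu <= 1 versus H1: mu > 1 at alpha = 0.05.
mu0 <- 1
t1 <- (1.01 - mu0) / (0.1 / sqrt(900))    # large corporation: t = 3
p1 <- pnorm(t1, lower.tail = FALSE)       # about 0.00135, so advertising is authorized
t2 <- (1.21 - mu0) / (0.4 / sqrt(9))      # amateur mechanic: t = 1.575
p2 <- pnorm(t2, lower.tail = FALSE)       # about 0.0576, so advertising is not authorized
c(p1, p2)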
The preceding example illustrates that a small significance probability
does not imply a large physical effect and that a large physical effect does
not imply a small significance probability. To avoid confusing these two con-
cepts, statisticians distinguish between statistical significance and material
significance (importance). To properly interpret the results of hypothesis
testing, it is essential that one remember:
Statistical significance is not the same as material significance.
9.5 Set Estimation
Hypothesis testing is concerned with situations that demand a binary deci-
sion, e.g., whether or not to install speed humps in front of an elementary
school. The relevance of hypothesis testing in situations that do not demand
a binary decision is somewhat less clear. For example, many statisticians
feel that the scientific community overuses hypothesis testing and that other
types of statistical inference are often more appropriate. As we have dis-
cussed, a typical application of hypothesis testing in science partitions the
states of nature into two sets, one that corresponds to a theory and one
that corresponds to chance. Usually the theory encompasses a great many
possible states of nature and the mere conclusion that the theory is true
only begs the question of which states of nature are actually plausible. Fur-
thermore, it is a rather fanciful conceit to imagine that a single scientific
article should attempt to decide whether a theory is or is not true. A more
sensible enterprise for the authors to undertake is simply to set forth the
evidence that they have discovered and allow evidence to accumulate until
the scientific community reaches a consensus. One way to accomplish this is
for each article to identify what its authors consider a set of plausible values
for the population quantity in question.
To construct a set of plausible values of µ, we imagine testing H0 : µ = µ0
versus H1 : µ ≠ µ0 for every µ0 ∈ (−∞, ∞) and eliminating those µ0 for
which H0 : µ = µ0 is rejected. To see where this leads, let us examine our
decision criterion in the case that σ is known: we reject H0 : µ = µ0 if and
only if
p = Pµ0 ( |X̄n − µ0| ≥ |x̄n − µ0| ) ≈ 2Φ(−|z|) ≤ α,       (9.4)
where z = (x̄n − µ0)/(σ/√n). Using the symmetry of the normal distribution,
we can rewrite condition (9.4) as
α/2 ≥ Φ (− |z|) = P (Z < − |z|) = P (Z > |z|) ,
which in turn is equivalent to the condition
Φ (|z|) = P (Z < |z|) = 1 − P (Z > |z|) ≥ 1 − α/2, (9.5)
where Z ∼ Normal(0, 1).
Now let q denote the 1 − α/2 quantile of Normal(0, 1), so that
Φ(q) = 1 − α/2.
Then condition (9.5) obtains if and only if |z| ≥ q. We express this by
saying that q is the critical value of the test statistic |Zn|, where Zn = (X̄n − µ0)/(σ/√n). For example, suppose that α = 0.05, so that 1 − α/2 = 0.975.
Then the critical value is computed in R as follows:
> qnorm(.975)
[1] 1.959964
Given a significance level α and the corresponding q, we have determined
that q is the critical value of |Zn| for testing H0 : µ = µ0 versus H1 : µ ≠ µ0
at significance level α. Thus, we reject H0 : µ = µ0 if and only if (iff)
| (x̄n − µ0)/(σ/√n) | = |z| ≥ q
iff |x̄n − µ0| ≥ qσ/√n
iff µ0 ∉ ( x̄n − qσ/√n, x̄n + qσ/√n ) .
Thus, the desired set of plausible values is the interval
( x̄n − q·σ/√n , x̄n + q·σ/√n ) .       (9.6)
If σ is unknown, then the argument is identical except that we estimate σ2
as
s²n = (1/(n − 1)) Σ (xi − x̄n)²,  where the sum extends over i = 1, . . . , n,
obtaining as the set of plausible values the interval
( x̄n − q·sn/√n , x̄n + q·sn/√n ) .       (9.7)
Example 9.2 (continued) A random sample of n = 400 observations
is drawn from a population with unknown mean µ and unknown variance σ2,
resulting in x̄n = 21.82935 and sn = 24.70037. Using a significance level of
α = 0.05, determine a set of plausible values of µ.
First, because α = 0.05 is the significance level, q = 1.959964 is the
critical value. From (9.7), an interval of plausible values is
21.82935 ± 1.959964 · 24.70037/√400 = (19.40876, 24.24994).
Notice that 20 ∈ (19.40876, 24.24994), meaning that (as we discovered in
Section 9.4) we would accept H0 : µ = 20 at significance level α = 0.05.
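In R, the interval can be computed directly:

# Example 9.2 (continued): 0.95-level confidence interval for mu.
n <- 400
xbar <- 21.82935; s <- 24.70037
q <- qnorm(0.975)                      # critical value, 1.959964
xbar + c(-1, 1) * q * s / sqrt(n)      # approximately (19.40876, 24.24994)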
Now consider the random interval I, defined in Case 1 (population vari-
ance known) by
I = ( X̄n − q·σ/√n , X̄n + q·σ/√n )
and in Case 2 (population variance unknown) by
I = ( X̄n − q·Sn/√n , X̄n + q·Sn/√n ) .
The probability that this random interval covers the real number µ0 is
Pµ (I ∋ µ0) = 1 − Pµ (µ0 ∉ I) = 1 − Pµ (reject H0 : µ = µ0) .
If µ = µ0, then the probability of coverage is
1 − Pµ0 (reject H0 : µ = µ0) = 1 − Pµ0 (Type I error) ≥ 1 − α.
Thus, the probability that I covers the true value of the population mean is
at least 1−α, which we express by saying that I is a (1−α)-level confidence
interval for µ. The level of confidence, 1 − α, is also called the confidence
coefficient.
We emphasize that the confidence interval I is random and the popu-
lation mean µ is fixed, albeit unknown. Each time that the experiment in
question is performed, a random sample is observed and an interval is con-
structed from it. As the sample varies, so does the interval. Any one such
interval, constructed from a single sample, either does or does not contain
the population mean. However, if this procedure is repeated a great many
times, then the proportion of such intervals that contain µ will be at least
1 − α. Actually observing one sample and constructing one interval from
it amounts to randomly selecting one of the many intervals that might or
might not contain µ. Because most (at least 1−α) of the intervals do, we can
be “confident” that the interval that was actually constructed does contain
the unknown population mean.
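The repeated-sampling interpretation can be illustrated by simulation. The sketch below assumes, purely for illustration, a Normal population with µ = 20 and σ = 24.7, and repeatedly constructs the interval (9.7) from samples of size n = 400; the observed proportion of intervals that cover µ should be close to 0.95.

# Simulated coverage of the interval (9.7) at confidence level 1 - alpha = 0.95.
# Illustrative assumption: a Normal population with mu = 20 and sigma = 24.7.
set.seed(1)
mu <- 20; sigma <- 24.7
n <- 400
q <- qnorm(0.975)
covers <- replicate(10000, {
  x <- rnorm(n, mean = mu, sd = sigma)
  half <- q * sd(x) / sqrt(n)                      # half-length of the interval
  (mean(x) - half < mu) && (mu < mean(x) + half)   # does this interval cover mu?
})
mean(covers)                                       # proportion of coverage, close to 0.95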
9.5.1 Sample Size
Confidence intervals are often used to determine sample sizes for future ex-
periments. Typically, the researcher specifies a desired confidence level, 1−α,
and a desired interval length, L. After determining the appropriate critical
value, q, one equates L with 2qσ/√n and solves for n, obtaining
n = (2qσ/L)² .       (9.8)
Of course, this formula presupposes knowledge of the population variance.
In practice, it is usually necessary to replace σ with an estimate—which may
be easier said than done if the experiment has not yet been performed. This
is one reason to perform a pilot study: to obtain a preliminary estimate of
the population variance and use it to design a better study.
Several useful relations can be deduced from equation (9.8):
1. Higher levels of confidence (1 − α) correspond to larger critical values
(q), which result in larger sample sizes (n).
2. Smaller interval lengths (L) result in larger sample sizes (n).
3. Larger variances (σ2) result in larger sample sizes (n).
In summary, if a researcher desires high confidence that the true mean of a
highly variable population is covered by a small interval, then s/he should
plan on collecting a great deal of data!
Example 9.5 (continued) A rival corporation purchases the rights to
the amateur mechanic’s additive. How large a study is required to determine
this additive’s mean increase in mileage to within 0.05 mpg with a confidence
coefficient of 1 − α = 0.99?
The desired interval length is L = 2 · 0.05 = 0.1 and the critical value
that corresponds to α = 0.01 is computed in R as follows:
> qnorm(1-.01/2)
[1] 2.575829
From the mechanic’s small pilot study, we estimate σ to be s = 0.4. Then
n = (2 · 2.575829 · 0.4/0.1)² ≈ 424.6,
so the desired study will require n = 425 vehicles.
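The same calculation is easily packaged as a small R function. This is a sketch based on equation (9.8); the function name and arguments are ours:

# Sample size required for a (1-alpha)-level confidence interval of
# length L, given a (preliminary) estimate sigma of the population
# standard deviation.
sample.size <- function(L, sigma, alpha = 0.05) {
  q <- qnorm(1 - alpha/2)
  ceiling((2 * q * sigma / L)^2)
}
sample.size(L = 0.1, sigma = 0.4, alpha = 0.01)  # 425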
9.5.2 One-Sided Confidence Intervals
The set of µ0 for which we would accept the null hypothesis H0 : µ = µ0
when tested against the two-sided alternative hypothesis H1 : µ ≠ µ0 is a tra-
ditional, 2-sided confidence interval. In situations where 1-sided alternatives
are appropriate, we can construct corresponding 1-sided confidence intervals
by determining the set of µ0 for which the appropriate null hypothesis would
be accepted.
Example 9.5 (continued) The government test has a significance
level of α = 0.05. It rejects the null hypothesis H0 : µ ≤ µ0 if and only if
(iff)
p = P(Z ≥ t) ≤ 0.05
iff P(Z < t) ≥ 0.95
iff t ≥ qnorm(0.95) ≈ 1.645.
Equivalently, the null hypothesis H0 : µ ≤ µ0 is accepted if and only if
t = (x̄ − µ0)/(s/√n) < 1.645
iff x̄ < µ0 + 1.645 · s/√n
iff µ0 > x̄ − 1.645 · s/√n.
1. In the case of the large corporation, the null hypothesis H0 : µ ≤ µ0 is accepted if and only if
µ0 > 1.01 − 1.645 · 0.1/√900 ≈ 1.0045,
so the 1-sided confidence interval with confidence coefficient 1 − α = 0.95 is (1.0045, ∞).
2. In the case of the amateur mechanic, the null hypothesis H0 : µ ≤ µ0 is accepted if and only if
µ0 > 1.21 − 1.645 · 0.4/√9 ≈ 0.9907,
so the 1-sided confidence interval with confidence coefficient 1 − α = 0.95 is (0.9907, ∞).
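In R, a lower confidence bound of this form is just x̄ − qnorm(0.95) · s/√n. A sketch, with the summary statistics from the two cases above passed as arguments (the function name is ours):

lower.bound <- function(xbar, s, n, alpha = 0.05) {
  xbar - qnorm(1 - alpha) * s / sqrt(n)   # 1-sided lower confidence bound
}
lower.bound(1.01, 0.1, 900)  # large corporation
lower.bound(1.21, 0.4, 9)    # amateur mechanic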
9.6 Exercises
1. According to The Justice Project, “John Spirko was sentenced to death
on the testimony of a witness who was ‘70 percent certain’ of his iden-
tification.” Formulate this case as a problem in hypothesis testing.
What can be deduced about the significance level used to convict
Spirko? Does this choice of significance level strike you as suitable
for a capital murder trial?
2. Blaise Pascal, the French theologian and mathematician, argued that we cannot know whether or not God exists, but that we must nevertheless decide how to behave. He submitted that the consequences of wrongly be-
having as though God does not exist are greater than the consequences
of wrongly behaving as though God does exist, concluding that it is
better to err on the side of caution and act as though God exists.
This argument is known as Pascal’s Wager. Formulate Pascal’s Wager
as a hypothesis testing problem. What are the Type I and Type II
errors? On whom did Pascal place the burden of proof, believers or
nonbelievers?
3. Dorothy owns a lovely glass dreidl. Curious as to whether or not it
is fairly balanced, she spins her dreidl ten times, observing five gimels
and five hehs. Surprised by these results, Dorothy decides to compute
the probability that a fair dreidl would produce such aberrant results.
Which of the probabilities specified in Exercise 3.7.5 is the most appro-
priate choice of a significance probability for this investigation? Why?
4. It is thought that human influenza viruses originate in birds. It is
quite possible that, several years ago, a human influenza pandemic was
averted by slaughtering 1.5 million chickens brought to market in Hong
Kong. Because it is impossible to test each chicken individually, such
decisions are based on samples. Suppose that a boy has already died of
a bird flu virus apparently contracted from a chicken. Several diseased
chickens have already been identified. The health officials would prefer
to err on the side of caution and destroy all chickens that might be
infected; the farmers do not want this to happen unless it is absolutely
necessary. Suppose that both the farmers and the health officials agree
that all chickens should be destroyed if more than 2 percent of them are
diseased. A random sample of n = 1000 chickens reveals 40 diseased
chickens.
(a) Let Xi = 1 if chicken i is diseased and Xi = 0 if it is not. Assume
that X1, . . . , Xn ∼ P. To what family of probability distributions
does P belong? What population parameter indexes this family?
Use this parameter to state formulas for µ = EXi and σ2 =
Var Xi.
(b) State appropriate null and alternative hypotheses from the per-
spective of the health officials.
(c) State appropriate null and alternative hypotheses from the per-
spective of the farmers.
(d) Use the value of µ0 in the above hypotheses to compute the value
of σ2 under H0. Then compute the value of the test statistic z.
(e) Adopting the health officials’ perspective, and assuming that they
are willing to risk a 0.1% chance of committing a Type I error,
what action should be taken? Why?
(f) Adopting the farmers’ perspective, and assuming that they are
willing to risk a 10% chance of committing a Type I error, what
action should be taken? Why?
5. A company that manufactures light bulbs has advertised that its 75-
watt bulbs burn an average of 800 hours before failing. In reaction
to the company’s advertising campaign, several dissatisfied custom-
ers have complained to a consumer watchdog organization that they
believe the company’s claim to be exaggerated. The consumer orga-
nization must decide whether or not to allocate some of its financial
resources to countering the company’s advertising campaign. So that
it can make an informed decision, it begins by purchasing and testing
100 of the disputed light bulbs. In this experiment, the 100 light bulbs
burned an average of x̄ = 745.1 hours before failing, with a sample
standard deviation of s = 238.0 hours. Formulate null and alterna-
tive hypotheses that are appropriate for this situation. Calculate a
significance probability. Do these results warrant rejecting the null
hypothesis at a significance level of α = 0.05?
6. To study the effects of Alzheimer’s disease (AD) on cognition, a scien-
tist administers two batteries of neuropsychological tasks to 60 mildly
demented AD patients. One battery is administered in the morning,
the other in the afternoon. Each battery includes a task in which
discourse is elicited by showing the patient a picture and asking the
patient to describe it. The quality of the discourse is measured by
counting the number of “information units” conveyed by the patient.
The scientist wonders if asking a patient to describe Picture A in the
morning is equivalent to asking the same patient to describe Picture B
in the afternoon, after having described Picture A several hours ear-
lier. To investigate, she computes the number of information units for
Picture A minus the number of information units for Picture B for
each patient. She finds an average difference of x̄ = −0.1833, with a
sample standard deviation of s = 5.18633. Formulate null and alter-
native hypotheses that are appropriate for this situation. Calculate
a significance probability. Do these results warrant rejecting the null
hypothesis at a significance level of α = 0.05?
7. Each student in a large statistics class of 600 students is asked to
toss a fair coin 100 times, count the resulting number of Heads, and
construct a 0.95-level confidence interval for the probability of Heads.
Assume that each student uses a fair coin and constructs the confidence
interval correctly. True or False: We would expect approximately 570
of the confidence intervals to contain the number 0.5.
8. The USGS decides to use a laser altimeter to measure the height µ
of Mt. Wrightson, the highest point in Pima County, Arizona. It is
known that measurements made by the laser altimeter have an ex-
pected value equal to µ and a standard deviation of 1 meter. How
many measurements should be made if the USGS wants to construct a
0.90-level confidence interval for µ that has a length of 20 centimeters?
9. Professor Johnson is interested in the probability that a certain type
of randomly generated matrix has a positive determinant. His student
attempts to calculate the probability exactly, but runs into difficulty
because the problem requires her to evaluate an integral in 9 dimen-
sions. Professor Johnson therefore decides to obtain an approximate
probability by simulation, i.e., by randomly generating some matrices
and observing the proportion that have positive determinants. His pre-
liminary investigation reveals that the probability is roughly 0.05. At
this point, Professor Park decides to undertake a more comprehensive
simulation experiment that will, with 0.95-level confidence, correctly
determine the probability of interest to within ±0.00001. How many
random matrices should he generate to achieve the desired accuracy?
10. Consider a box that contains 10 tickets, labelled
{1, 1, 1, 1, 2, 5, 5, 10, 10, 10}.
From this box, I propose to draw (with replacement) n = 40 tickets.
Let Y denote the sum of the values on the tickets that are drawn.
To approximate p = P(170.5 < Y < 199.5), a Math 351 student
writes an R function box.model that simulates the proposed experi-
ment. Evaluating box.model is like observing a value, y, of the ran-
dom variable Y . Then she writes a loop that repeatedly evaluates
box.model and computes p̂, the proportion of times that box.model
produces y ∈ (170.5, 199.5). The student intends to construct a 0.95-
level confidence interval for p. If she desires an interval of length L,
then how many times should she plan to evaluate box.model?
Hint: How else might the student estimate p?
11. In September 2003, Lena spun a penny 89 times and observed 2 Heads.
Let p denote the true probability that one spin of her penny will result
in Heads.
(a) The significance probability for testing H0 : p ≥ 0.3 versus H1 :
p < 0.3 is p = P(Y ≤ 2), where Y ∼ Binomial(89; 0.3).
i. Compute p as in Section 9.1, using the binomial distribution
and pbinom.
ii. Approximate p as in Section 9.4, using the normal distribu-
tion and pnorm. How good is this approximation?
(b) Construct a 1-sided confidence interval for p by determining for
which values of p0 the null hypothesis H0 : p ≥ p0 would be
accepted at a significance level of (approximately) α = 0.05.
Chapter 10
1-Sample Location Problems
The basic ideas associated with statistical inference were introduced in Chap-
ter 9. We developed these ideas in the context of drawing inferences about a
single population mean, and we assumed that the sample was large enough
to justify appeals to the Central Limit Theorem for normal approximations.
The population mean is a natural measure of centrality, but it is not the
only one. Furthermore, even if we are interested in the population mean,
our sample may be too small to justify the use of a large-sample normal
approximation. The purpose of the next several chapters is to explore more
thoroughly how statisticians draw inferences about measures of centrality.
Measures of centrality are sometimes called location parameters. The
title of this chapter indicates an interest in a location parameter of a single
population. More specifically, we assume that X1, . . . , Xn ∼ P are inde-
pendently and identically distributed, we observe a random sample ~x = {x1, . . . , xn}, and we attempt to draw an inference about a location param-
eter of P. Because it is not always easy to identify the relevant population
in a particular experiment, we begin with some examples. Our analysis of
these examples is clarified by posing the following four questions:
1. What are the experimental units, i.e., what are the objects that are
being measured?
2. From what population (or populations) were the experimental units
drawn?
3. What measurements were taken on each experimental unit?
4. What random variables are relevant to the specified inference?
For the sake of specificity, we assume that the location parameter of interest
in the following examples is the population median, q2(P).
Example 10.1 A machine is supposed to produce ball bearings that
are 1 millimeter in diameter. To determine if the machine was correctly
calibrated, a sample of ball bearings is drawn and the diameter of each ball
bearing is measured. For this experiment:
1. An experimental unit is a ball bearing. Notice that we are distin-
guishing between experimental units, the objects being measured (ball
bearings), and units of measurement (e.g., millimeters).
2. There is one population, viz., all ball bearings that might be produced
by the designated machine.
3. One measurement (diameter) is taken on each experimental unit.
4. Let Xi denote the diameter of ball bearing i. Then X1, . . . , Xn ∼ P
and we are interested in drawing inferences about q2(P), the population
median diameter. For example, we might test H0 : q2(P) = 1 against
H1 : q2(P) ≠ 1.
Example 10.2 A drug is supposed to lower blood pressure. To deter-
mine if it does, a sample of hypertensive patients are administered the drug
for two months. Each person’s blood pressure is measured before and after
the two month period. For this experiment:
1. An experimental unit is a patient.
2. There is one population of hypertensive patients. (It may be difficult
to discern the precise population that was actually sampled. All hy-
pertensive patients? All Hispanic male hypertensive patients who live
in Houston, TX? All Hispanic male hypertensive patients who live in
Houston, TX, and who are sufficiently well-disposed to the medical
establishment to participate in the study? In published journal arti-
cles, scientists are often rather vague about just what population was
actually sampled.)
3. Two measurements (blood pressure before and after treatment) are
taken on each experimental unit. Let Bi and Ai denote the blood
pressures of patient i before and after treatment.
4. Let Xi = Bi − Ai, the decrease in blood pressure for patient i. Then
X1, . . . , Xn ∼ P and we are interested in drawing inferences about
q2(P), the population median decrease. For example, we might test
H0 : q2(P) ≤ 0 against H1 : q2(P) > 0.
Example 10.3 A graduate student investigated the effect of Parkin-
son’s disease (PD) on speech breathing. She recruited 16 PD patients to
participate in her study. She also recruited 16 normal control (NC) subjects.
Each NC subject was carefully matched to one PD patient with respect to
sex, age, height, and weight. The lung volume of each study participant was
measured. For this experiment:
1. An experimental unit was a matched PD-NC pair.
2. The population comprises all possible PD-NC pairs that satisfy the
study criteria.
3. Two measurements (PD and NC lung volume) were taken on each
experimental unit. Let Di and Ci denote the PD and NC lung volumes
of pair i.
4. Let Xi = log(Di/Ci) = log Di − log Ci, the logarithm of the PD pro-
portion of NC lung volume. (This is not the only way of comparing Di
and Ci, but it worked well in this investigation. Ratios can be difficult
to analyze and logarithms convert ratios to differences. Furthermore,
lung volume data tend to be skewed to the right. As in Exercise 2
of Section 7.6, logarithmic transformations of such data often have a
symmetrizing effect.) Then X1, . . . , Xn ∼ P and we are interested
in drawing inferences about q2(P). For example, to test the theory
that PD restricts lung volume, we might test H0 : q2(P) ≥ 0 against
H1 : q2(P) < 0.
This chapter is divided into sections according to distributional assump-
tions about the Xi:
10.1 If the data are assumed to be normally distributed, then we will be
interested in inferences about the population’s center of symmetry,
which we will identify as the population mean.
10.3 If the data are only assumed to be symmetrically distributed, then we
will also be interested in inferences about the population’s center of
symmetry, but we will identify it as the population median.
10.2 If the data are only assumed to be continuously distributed, then we
will be interested in inferences about the population median.
Each section is subdivided into subsections, according to the type of inference
(point estimation, hypothesis testing, set estimation) at issue.
10.1 The Normal 1-Sample Location Problem
In this section we assume that P = Normal(µ, σ2). As necessary, we will
distinguish between cases in which σ is known and cases in which σ is un-
known.
10.1.1 Point Estimation
Because normal distributions are symmetric, the location parameter µ is the
center of symmetry and therefore both the population mean and the popu-
lation median. Hence, there are (at least) two natural estimators of µ, the
sample mean X̄n and the sample median q2(P̂n). Both are consistent, unbi-
ased estimators of µ. We will compare them by considering their asymptotic
relative efficiency (ARE). A rigorous definition of ARE is beyond the scope
of this book, but the concept is easily interpreted.
If the true distribution is P = N(µ, σ2), then the ARE of the sample
median to the sample mean for estimating µ is
e(P) =
2
π
.
= 0.64.
This statement has the following interpretation: for large samples, using the
sample median to estimate a normal population mean is equivalent to ran-
domly discarding approximately 36% of the observations and calculating the
sample mean of the remaining 64%. Thus, the sample mean is substantially
more efficient than is the sample median at extracting location information
from a normal sample.
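A small simulation illustrates this interpretation. The sketch below (sample size and number of replications are arbitrary) compares the sampling variability of the two estimators for normal data; the ratio of variances should be roughly π/2 ≈ 1.57, the reciprocal of the ARE:

n <- 100
means   <- replicate(5000, mean(rnorm(n)))    # sample means of normal samples
medians <- replicate(5000, median(rnorm(n)))  # sample medians of normal samples
var(medians) / var(means)                     # roughly pi/2 = 1.5708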
In fact, if P = Normal(µ, σ2), then the ARE of any estimator of µ to
the sample mean is ≤ 1. This is sometimes expressed by saying that the
sample mean is asymptotically efficient for estimating a normal mean. The
sample mean also enjoys a number of other optimal properties in this case.
The sample mean is unquestionably the preferred estimator for the normal
1-sample location problem.
10.1.2 Hypothesis Testing
If σ is known, then the possible distributions of Xi are
{ Normal(µ, σ²) : −∞ < µ < ∞ }.
If σ is unknown, then the possible distributions of Xi are
{ Normal(µ, σ²) : −∞ < µ < ∞, σ > 0 }.
We partition the possible distributions into two subsets, the null and
alternative hypotheses. For example, if σ is known then we might specify
H0 = { Normal(0, σ²) } and H1 = { Normal(µ, σ²) : µ ≠ 0 },
which we would typically abbreviate as H0 : µ = 0 and H1 : µ ≠ 0. Analogously, if σ is unknown then we might specify
H0 = { Normal(0, σ²) : σ > 0 } and H1 = { Normal(µ, σ²) : µ ≠ 0, σ > 0 },
which we would also abbreviate as H0 : µ = 0 and H1 : µ ≠ 0.
More generally, for any real number µ0 we might specify
H0 = { Normal(µ0, σ²) } and H1 = { Normal(µ, σ²) : µ ≠ µ0 }
if σ is known, or
H0 = { Normal(µ0, σ²) : σ > 0 } and H1 = { Normal(µ, σ²) : µ ≠ µ0, σ > 0 }
if σ is unknown. In both cases, we would typically abbreviate these hypotheses as H0 : µ = µ0 and H1 : µ ≠ µ0.
The preceding examples involve two-sided alternative hypotheses. Of
course, as in Section 9.4, we might also specify one-sided hypotheses. How-
ever, the material in the present section is so similar to the material in
Section 9.4 that we will only discuss two-sided hypotheses.
The intuition that underlies testing H0 : µ = µ0 versus H1 : µ ≠ µ0 was
discussed in Section 9.4:
• If H0 is true, then we would expect the sample mean to be close to the
population mean µ0.
• Hence, if X̄n = x̄n is observed far from µ0, then we are inclined to
reject H0.
To make this reasoning precise, we reject H0 if and only if the significance probability
p = Pµ0( |X̄n − µ0| ≥ |x̄n − µ0| ) ≤ α. (10.1)
The first equation in (10.1) is a formula for a significance probability.
Notice that this formula is identical to equation (9.2). The one difference
between the material in Section 9.4 and the present material lies in how one
computes p. For emphasis, we recall the following:
1. The hypothesized mean µ0 is a fixed number specified by the null
hypothesis.
2. The estimated mean, x̄n, is a fixed number computed from the sample.
Therefore, so is |x̄n − µ0|, the difference between the estimated mean
and the hypothesized mean.
3. The estimator, X̄n, is a random variable.
4. The subscript in Pµ0 reminds us to compute the probability under
H0 : µ = µ0.
5. The significance level α is a fixed number specified by the researcher,
preferably before the experiment was performed.
To apply (10.1), we must compute p. In Section 9.4, we overcame that
technical difficulty by appealing to the Central Limit Theorem. This allowed
us to approximate p even when we did not know the distribution of the
Xi, but only for reasonably large sample sizes. However, if we know that
X1, . . . , Xn are normally distributed, then it turns out that we can calculate
p exactly, even when n is small.
Case 1: The Population Variance is Known
Under the null hypothesis that µ = µ0, X1, . . . , Xn ∼ Normal(µ0, σ²) and
X̄n ∼ Normal(µ0, σ²/n).
This is the exact distribution of X̄n, not an asymptotic approximation. We convert X̄n to standard units, obtaining
Z = (X̄n − µ0)/(σ/√n) ∼ Normal(0, 1). (10.2)
The observed value of Z is
z = (x̄n − µ0)/(σ/√n).
The significance probability is
p = Pµ0( |X̄n − µ0| ≥ |x̄n − µ0| )
  = Pµ0( |(X̄n − µ0)/(σ/√n)| ≥ |(x̄n − µ0)/(σ/√n)| )
  = P(|Z| ≥ |z|)
  = 2P(Z ≥ |z|).
In this case, the test that rejects H0 if and only if p ≤ α is sometimes called
the 1-sample z-test. The random variable Z is the test statistic.
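In R, the observed value of Z and the corresponding significance probability are computed with pnorm. A sketch, using illustrative values of the summary statistics (the values are ours):

xbar <- 1; mu0 <- 0; sigma <- 3; n <- 25  # illustrative values
z <- (xbar - mu0) / (sigma / sqrt(n))     # observed test statistic
2 * pnorm(-abs(z))                        # 2-sided significance probability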
Before considering the case of an unknown population variance, we re-
mark that it is possible to derive point estimators from hypothesis tests. For
testing H0 : µ = µ0 versus H1 : µ ≠ µ0, the test statistics are
Z(µ0) = (X̄n − µ0)/(σ/√n).
If we observe X̄n = x̄n, then what value of µ0 minimizes |z(µ0)|? Clearly,
the answer is µ0 = x̄n. Thus, our preferred point estimate of µ is the µ0 for
which it is most difficult to reject H0 : µ = µ0. This type of reasoning will
be extremely useful for analyzing situations in which we know how to test
but don’t know how to estimate.
Case 2: The Population Variance is Unknown
Statement (10.2) remains true if σ is unknown, but it is no longer possible
to compute z. Therefore, we require a different test statistic for this case.
A natural approach is to modify Z by replacing the unknown σ with an
estimator of it. Toward that end, we introduce the test statistic
Tn = (X̄n − µ0)/(Sn/√n),
where Sn² is the unbiased estimator of the population variance defined by
equation (9.1). Because Tn and Z are different random variables, they have
different probability distributions and our first order of business is to deter-
mine the distribution of Tn.
We begin by stating a useful fact:
Theorem 10.1 If X1, . . . , Xn ∼ Normal(µ, σ²), then
(n − 1)Sn²/σ² = Σᵢ₌₁ⁿ (Xi − X̄n)²/σ² ∼ χ²(n − 1).
The χ2 (chi-squared) distribution was described in Section 5.5 and Theorem
10.1 is closely related to Theorem 5.3.
Next we write
Tn = (X̄n − µ0)/(Sn/√n) = [(X̄n − µ0)/(σ/√n)] · [(σ/√n)/(Sn/√n)] = Z · (σ/Sn)
   = Z/√(Sn²/σ²) = Z/√{ [(n − 1)Sn²/σ²]/(n − 1) }.
Using Theorem 10.1, we see that Tn can be written in the form
Tn = Z/√(Y/ν),
where Z ∼ Normal(0, 1) and Y ∼ χ²(ν). If Z and Y are independent random variables, then it follows from Definition 5.7 that Tn ∼ t(n − 1).
Both Z and Y = (n − 1)Sn²/σ² depend on X1, . . . , Xn, so one would be inclined to think that Z and Y are dependent. This is usually the case, but it turns out that they are independent if X1, . . . , Xn ∼ Normal(µ, σ²). This is another remarkable property of normal distributions, usually stated as follows:
Theorem 10.2 If X1, . . . , Xn ∼ Normal(µ, σ²), then X̄n and Sn² are independent random variables.
The result that interests us can then be summarized as follows:
Corollary 10.1 If X1, . . . , Xn ∼ Normal(µ0, σ²), then
Tn = (X̄n − µ0)/(Sn/√n) ∼ t(n − 1).
Now let
tn = (x̄n − µ0)/(sn/√n),
the observed value of the test statistic Tn. The significance probability is
p = Pµ0(|Tn| ≥ |tn|) = 2Pµ0(Tn ≥ |tn|).
In this case, the test that rejects H0 if and only if p ≤ α is called Student’s
1-sample t-test. Because it is rarely the case that the population variance is
known when the population mean is not, Student’s 1-sample t-test is used
much more frequently than the 1-sample z-test. We will use the R function
pt to compute significance probabilities for Student’s 1-sample t-test, as
illustrated in the following examples.
Example 10.4 Suppose that, to test H0 : µ = 0 versus H1 : µ ≠ 0 (a 2-sided alternative), we draw a sample of size n = 25 and observe x̄ = 1 and s = 3. Then t = (1 − 0)/(3/√25) = 5/3 and the 2-tailed significance probability is computed using both tails of the t(24) distribution, i.e., p = 2 ∗ pt(−5/3, df = 24) ≈ 0.1086.
Example 10.5 Suppose that, to test H0 : µ ≤ 0 versus H1 : µ > 0 (a 1-sided alternative), we draw a sample of size n = 25 and observe x̄ = 2 and s = 5. Then t = (2 − 0)/(5/√25) = 2 and the 1-tailed significance probability is computed using one tail of the t(24) distribution, i.e., p = 1 − pt(2, df = 24) ≈ 0.0285.
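When the raw data are available, these computations are performed conveniently by R's t.test function, which reports t, the degrees of freedom, the significance probability, and a confidence interval. A sketch with simulated data (the data are arbitrary):

x <- rnorm(25, mean = 1, sd = 3)            # a sample of size n = 25
t.test(x, mu = 0)                           # 2-sided test of H0: mu = 0
t.test(x, mu = 0, alternative = "greater")  # 1-sided test of H0: mu <= 0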
10.1.3 Interval Estimation
As in Section 9.5, we will derive confidence intervals from tests. We imagine
testing H0 : µ = µ0 versus H1 : µ ≠ µ0 for every µ0 ∈ (−∞, ∞). The µ0 for
which H0 : µ = µ0 is rejected are implausible values of µ; the µ0 for which
H0 : µ = µ0 is accepted constitute the confidence interval. To accomplish
this, we will have to derive the critical values of our tests. A significance
level of α will result in a confidence coefficient of 1 − α.
Case 1: The Population Variance is Known
If σ is known, then we reject H0 : µ = µ0 if and only if
p = Pµ0( |X̄n − µ0| ≥ |x̄n − µ0| ) = 2Φ(−|zn|) ≤ α,
where zn = (x̄n − µ0)/(σ/√n). By the symmetry of the normal distribution, this condition is equivalent to the condition
1 − Φ(−|zn|) = P(Z > −|zn|) = P(Z < |zn|) = Φ(|zn|) ≥ 1 − α/2,
where Z ∼ Normal(0, 1), and therefore to the condition |zn| ≥ qz, where qz denotes the 1 − α/2 quantile of Normal(0, 1). The quantile qz is the critical value of the two-sided 1-sample z-test. Thus, given a significance level α and a corresponding critical value qz, we reject H0 : µ = µ0 if and only if (iff)
|(x̄n − µ0)/(σ/√n)| = |zn| ≥ qz
iff |x̄n − µ0| ≥ qz σ/√n
iff µ0 ∉ ( x̄n − qz σ/√n, x̄n + qz σ/√n )
and we conclude that the desired set of plausible values is the interval
( x̄n − qz σ/√n, x̄n + qz σ/√n ).
Notice that both the preceding derivation and the resulting confidence
interval are identical to the derivation and confidence interval in Section 9.5.
The only difference is that, because we are now assuming that X1, . . . , Xn ∼
Normal(µ, σ2) instead of relying on the Central Limit Theorem, no approx-
imation is required.
Example 10.6 Suppose that we desire 90% confidence about µ and σ = 3 is known. Then α = 0.10 and qz ≈ 1.645. Suppose that we draw n = 25 observations and observe x̄n = 1. Then
1 ± 1.645 · 3/√25 = 1 ± 0.987 = (0.013, 1.987)
is a 0.90-level confidence interval for µ.
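A sketch of the same computation in R:

xbar <- 1; sigma <- 3; n <- 25
qz <- qnorm(1 - 0.10/2)                 # 1.644854
xbar + c(-1, 1) * qz * sigma / sqrt(n)  # (0.013, 1.987)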
Case 2: The Population Variance is Unknown
If σ is unknown, then it must be estimated from the sample. The reasoning
in this case is the same, except that we rely on Student’s 1-sample t-test.
As before, we use Sn² to estimate σ². The critical value of the 2-sided 1-sample t-test is qt, the 1 − α/2 quantile of a t distribution with n − 1 degrees of freedom, and the confidence interval is
( x̄n − qt sn/√n, x̄n + qt sn/√n ).
Example 10.7 Suppose that we desire 90% confidence about µ and σ is unknown. Suppose that we draw n = 25 observations and observe x̄n = 1 and s = 3. Then qt = qt(.95, df = 24) ≈ 1.711 and
1 ± 1.711 × 3/√25 = 1 ± 1.027 = (−0.027, 2.027)
is a 90% confidence interval for µ. Notice that the confidence interval is larger when we use s = 3 instead of σ = 3.
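A sketch of the same computation in R; when the raw data are available, t.test(x, conf.level = 0.90) reports the same interval:

xbar <- 1; s <- 3; n <- 25
q <- qt(0.95, df = n - 1)          # 1.710882
xbar + c(-1, 1) * q * s / sqrt(n)  # (-0.027, 2.027)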
10.2 The General 1-Sample Location Problem
In Section 10.1 we assumed that X1, . . . , Xn ∼ P and P = Normal(µ, σ2).
In this section, we again assume that X1, . . . , Xn ∼ P, but now we assume
only that the Xi are continuous random variables.
Because P is not assumed to be symmetric, we must decide which lo-
cation parameter to study. The population median, q2(P), enjoys several
advantages. Unlike the population mean, the population median always ex-
ists and is not sensitive to the influence of outliers. Furthermore, it turns out
that one can develop fairly elementary ways to study medians, even when
little is known about the probability distribution P. For simplicity, we will
denote the population median by θ.
10.2.1 Hypothesis Testing
It is convenient to begin our study of the general 1-sample location problem
with a discussion of hypothesis testing. As in Section 10.1, we initially consider testing a 2-sided alternative, H0 : θ = θ0 versus H1 : θ ≠ θ0. We will
explicate a procedure known as the sign test.
The intuition that underlies the sign test is elementary. If the population
median is θ = θ0, then when we sample P we should observe roughly half
the xi above θ0 and half the xi below θ0. Hence, if we observe proportions of
xi above/below θ0 that are very different from one half, then we are inclined
to reject the possibility that θ = θ0.
More formally, let p+ = PH0 (Xi > θ0) and p− = PH0 (Xi < θ0). Because
the Xi are continuous, PH0 (Xi = θ0) = 0 and therefore p+ = p− = 0.5.
Hence, under H0, observing whether Xi > θ0 or Xi < θ0 is equivalent to
tossing a fair coin, i.e., to observing a Bernoulli trial with success probability
p = 0.5. The sign test is the following procedure:
1. Let ~x = {x1, . . . , xn} denote the observed sample. If the Xi are continuous random variables, then P(Xi = θ0) = 0 and it should be that each xi ≠ θ0. In practice, of course, it may happen that we do observe one or more xi = θ0. For the moment, we assume that ~x contains no such values.
2. Let
Y = #{Xi > θ0} = #{Xi − θ0 > 0}
be the test statistic. Under H0 : θ = θ0, Y ∼ Binomial(n; p = 0.5).
The observed value of the test statistic is
y = #{xi > θ0} = #{xi − θ0 > 0}.
3. Notice that EY = n/2. The significance probability is
p = Pθ0( |Y − n/2| ≥ |y − n/2| ).
The sign test rejects H0 : θ = θ0 if and only if p ≤ α.
4. To compute p, we first note that the event
|Y − n/2| ≥ |y − n/2|
is equivalent to the event
(a) {Y ≤ y or Y ≥ n − y} if y ≤ n/2;
(b) {Y ≥ y or Y ≤ n − y} if y ≥ n/2.
To accommodate both cases, let c = min(y, n − y). Then
p = Pθ0(Y ≤ c) + Pθ0(Y ≥ n − c) = 2Pθ0(Y ≤ c) = 2*pbinom(c,n,.5).
Example 10.8(a) Suppose that we want to test H0 : θ = 100 versus H1 : θ ≠ 100 at significance level α = 0.05, having observed the sample
~x = {98.73, 97.17, 100.17, 101.26, 94.47, 96.39, 99.67, 97.77, 97.46, 97.41}.
Here n = 10, y = #{xi > 100} = 2, and c = min(2, 10 − 2) = 2, so
p = 2*pbinom(2,10,.5) = 0.109375 > 0.05
and we decline to reject H0.
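A sketch of the sign test computation in R, using the sample from this example:

x <- c(98.73, 97.17, 100.17, 101.26, 94.47, 96.39, 99.67, 97.77, 97.46, 97.41)
theta0 <- 100
n <- length(x)
y <- sum(x > theta0)       # number of observations above theta0
c.obs <- min(y, n - y)
2 * pbinom(c.obs, n, 0.5)  # 2-sided significance probability, 0.109375

The same significance probability is returned by binom.test(y, n, p = 0.5).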
Example 10.8(b) Now suppose that we want to test H0 : θ ≤ 97
versus H1 : θ > 97 at significance level α = 0.05, using the same data. Here
n = 10, y = #{xi > 97} = 8, and c = min(8, 10 − 8) = 2. Because large
values of Y are evidence against H0 : θ ≤ 97,
p = Pθ0 (Y ≥ y) = Pθ0 (Y ≥ 8) = 1 − Pθ0 (Y ≤ 7)
= 1-pbinom(7,10,.5) = 0.0546875 > 0.05
and we decline to reject H0.
Thus far we have assumed that the sample contains no values for which
xi = θ0. In practice, we may well observe such values. For example, if the
measurements in Example 10.8(a) were made less precisely, then we might
have observed the following sample:
~x = {99, 97, 100, 101, 94, 96, 100, 98, 97, 97}. (10.3)
If we want to test H0 : θ = 100 versus H1 : θ ≠ 100, then we have two values
that equal θ0 and the sign test requires modification.
We assume that #{xi = θ0} is fairly small; otherwise, the assumption
that the Xi are continuous is questionable. We consider two possible ways
to proceed:
1. Perhaps the most satisfying solution is to compute all of the signifi-
cance probabilities that correspond to different ways of counting the
xi = θ0 as larger or smaller than θ0. If there are k observations xi = θ0,
then this will produce 2ᵏ significance probabilities, which we might av-
erage to obtain a single p.
2. Alternatively, let p0 denote the significance probability obtained by
counting in the way that is most favorable to H0 (least favorable to
H1). This is the largest of the possible significance probabilities, so
if p0 ≤ α then we reject H0. Similarly, let p1 denote the significance
probability obtained by counting in the way that is least favorable
to H0 (most favorable to H1). This is the smallest of the possible
significance probabilities, so if p1 > α then we decline to reject H0. If
p0 > α ≥ p1, then we simply declare the results to be equivocal.
Example 10.8(c) Suppose that we want to test H0 : θ = 100 versus H1 : θ ≠ 100 at significance level α = 0.05, having observed the sample (10.3). Here n = 10 and y = #{xi > 100} depends on how we count the observations x3 = x7 = 100. There are 2² = 4 possibilities:
possibility            y = #{xi > 100}   c = min(y, 10 − y)       p
x3 < 100, x7 < 100            1                  1            0.021484
x3 < 100, x7 > 100            2                  2            0.109375
x3 > 100, x7 < 100            2                  2            0.109375
x3 > 100, x7 > 100            3                  3            0.343750
Noting that p0 ≈ 0.344 > 0.05 > 0.021 ≈ p1, we might declare the results to be equivocal. However, noting that 3 of the 4 possibilities lead us to accept H0 (and that the average p ≈ 0.146), we might conclude—somewhat more decisively—that there is insufficient evidence to reject H0. The distinction between these two interpretations is largely rhetorical, as the fundamental logic of hypothesis testing requires that we decline to reject H0 unless there is compelling evidence against it.
10.2.2 Point Estimation
Next we consider the problem of estimating the population median. A nat-
ural estimate is the plug-in estimate, the sample median. Another approach
begins by posing the following question: For what value of θ0 is the sign test
least inclined to reject H0 : θ = θ0 in favor of H1 : θ ≠ θ0? The answer to
this question is also a natural estimate of the population median.
In fact, the plug-in and sign-test approaches lead to the same estimation
procedure. To understand why, we focus on the case that n is even, in which
case n/2 is a possible value of Y = #{Xi > θ0}. If |y − n/2| = 0, then
p = P( |Y − n/2| ≥ 0 ) = 1.
We see that the sign test produces the maximal significance probability of
p = 1 when y = n/2, i.e., when θ0 is chosen so that precisely half the
observations exceed θ0. This means that the sign test is least likely to reject
H0 : θ = θ0 when θ0 is the sample median. (A similar argument leads to the
same conclusion when n is odd.)
Thus, using the sign test to test hypotheses about population medians
corresponds to using the sample median to estimate population medians,
just as using Student’s t-test to test hypotheses about population means
corresponds to using the sample mean to estimate population means. One
consequence of this remark is that, when the population mean and median
are identical, the “Pitman efficiency” of the sign test to Student’s t-test
equals the asymptotic relative efficiency of the sample median to the sample
mean. For example, using the sign test on normal data is asymptotically
equivalent to randomly discarding 36% of the observations, then using Stu-
dent’s t-test on the remaining 64%.
10.2.3 Interval Estimation
Finally, we consider the problem of constructing a (1 − α)-level confidence
interval for the population median. Again we rely on the sign test, deter-
mining for which θ0 the level-α sign test of H0 : θ = θ0 versus H1 : θ ≠ θ0
will accept H0.
The sign test will reject H0 : θ = θ0 if and only if
y (θ0) = # {xi > θ0}
is either too large or too small. Equivalently, H0 will be accepted if θ0 is
such that the numbers of observations above and below θ0 are roughly equal.
To determine the critical value for the desired sign test, we suppose that
Y ∼ Binomial(n; 0.5). We would like to find k such that α = 2P(Y ≤ k), or
α/2 = pbinom(k, n, 0.5). In practice, we won’t be able to solve this equation
exactly. We will use the qbinom function plus trial-and-error to solve it
approximately, then modify our choice of α accordingly.
Having determined an acceptable (α, k), the sign test rejects H0 : θ = θ0
at level α if and only if either y(θ0) ≤ k or y(θ0) ≥ n − k. We need to
translate these inequalities into an interval of plausible values of θ0. To do
so, it is helpful to sort the values observed in the sample.
Definition 10.1 The order statistics of ~x = {x1, . . . , xn} are any permutation of the xi such that
x(1) ≤ x(2) ≤ · · · ≤ x(n−1) ≤ x(n).
If ~x contains n distinct values, then there is a unique set of order statistics and the above inequalities are strict; otherwise, we say that ~x contains ties.
Thus, x(1) is the smallest value in ~x and x(n) is the largest. If n = 2m + 1 (n is odd), then the sample median is x(m+1); if n = 2m (n is even), then the sample median is [x(m) + x(m+1)]/2.
For simplicity we assume that ~x contains no ties. If θ0 < x(k+1), then at least n − k observations exceed θ0 and the sign test rejects H0 : θ = θ0. Similarly, if θ0 > x(n−k), then no more than k observations exceed θ0 and the sign test rejects H0 : θ = θ0. We conclude that the sign test accepts H0 : θ = θ0 if and only if θ0 lies in the (1 − α)-level confidence interval
( x(k+1), x(n−k) ).
Example 10.8(d) Using the n = 10 observations from Example 10.8(a),
we endeavor to construct a 0.90-level confidence interval for the population
median. We begin by determining a suitable choice of (α, k). If 1−α = 0.90,
then α/2 = 0.05. R returns qbinom(.05,10,.5) = 2. Next we experiment:
k pbinom(k, 10, 0.5)
2 0.0546875
1 0.01074219
We choose k = 2, resulting in a confidence level of
1 − α = 1 − 2 · 0.0546875 = 0.890625 ≈ 0.89,
nearly equal to the requested level of 0.90. Now, upon sorting the data (the sort function in R may be useful), we quickly discern that the desired confidence interval is
( x(3), x(8) ) = (97.17, 99.67).
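A sketch of this calculation in R, using the same data:

x <- c(98.73, 97.17, 100.17, 101.26, 94.47, 96.39, 99.67, 97.77, 97.46, 97.41)
n <- length(x)
k <- 2                     # chosen so that 2*pbinom(k, n, 0.5) is near alpha = 0.10
sort(x)[c(k + 1, n - k)]   # (x_(3), x_(8)) = (97.17, 99.67)
1 - 2 * pbinom(k, n, 0.5)  # attained confidence level, 0.890625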
10.3 The Symmetric 1-Sample Location Problem
10.4 A Case Study from Neuropsychology
10.5 Exercises
Problem Set A
1. Assume that a large number, n = 400, of observations are indepen-
dently drawn from a normal distribution with unknown population
mean µ and unknown population variance σ2. The resulting sample,
~x, is used to test H0 : µ ≤ 0 versus H1 : µ > 0 at significance level
α = 0.05.
(a) What test should be used in this situation? If we observe ~x that
results in x̄ = 3.194887 and s2 = 104.0118, then what is the value
of the test statistic?
(b) If we observe ~x that results in a test statistic value of 1.253067,
then which of the following R expressions best approximates the
significance probability?
i. 2*pnorm(1.253067)
ii. 2*pnorm(-1.253067)
iii. 1-pnorm(1.253067)
iv. 1-pt(1.253067,df=399)
v. pt(1.253067,df=399)
(c) True or False: if we observe ~x that results in a significance probability of p = 0.03044555, then we should reject the null hypothesis.
2. A device counts the number of ions that arrive in a given time interval,
unless too many arrive. An experiment that relies on this device pro-
duces the following counts, where Big means that the count exceeded
255.
251 238 249 Big 243 248 229 Big 235 244
254 251 252 244 230 222 224 246 Big 239
Use these data to construct a confidence interval for the population
median number of ions with a confidence coefficient of approximately
0.95.
Problem Set B The following data are from Darwin (1876), The Effect
of Cross- and Self-Fertilization in the Vegetable Kingdom, Second Edition,
London: John Murray. They appear as Data Set 3 in A Handbook of Small
Data Sets, accompanied by the following description:
“Pairs of seedlings of the same age, one produced by cross-fertilization
and the other by self-fertilization, were grown together so that
the members of each pair were reared under nearly identical con-
ditions. The aim was to demonstrate the greater vigour of the
cross-fertilized plants. The data are the final heights [in inches]
of each plant after a fixed period of time. Darwin consulted
[Francis] Galton about the analysis of these data, and they were
discussed further in [Ronald] Fisher’s Design of Experiments.”
Pair   Cross-fertilized   Self-fertilized
1 23.5 17.4
2 12.0 20.4
3 21.0 20.0
4 22.0 20.0
5 19.1 18.4
6 21.5 18.6
7 22.1 18.6
8 20.4 15.3
9 18.3 16.5
10 21.6 18.0
11 23.3 16.3
12 21.0 18.0
13 22.1 12.8
14 23.0 15.5
15 12.0 18.0
1. Show that this problem can be formulated as a 1-sample location prob-
lem. To do so, you should:
(a) Identify the experimental units and the measurement(s) taken on
each unit.
(b) Define appropriate random variables X1, . . . , Xn ∼ P. Remem-
ber that the statistical procedures that we will employ assume
that these random variables are independent and identically dis-
tributed.
(c) Let θ denote the location parameter (measure of centrality) of
interest. Depending on which statistical procedure we decide to
use, either θ = EXi = µ or θ = q2(Xi). State appropriate null
and alternative hypotheses about θ.
2. Does it seem reasonable to assume that the sample ~x = (x1, . . . , xn),
the observed values of X1, . . . , Xn, were drawn from:
(a) a normal distribution? Why or why not?
(b) a symmetric distribution? Why or why not?
3. Assume that X1, . . . , Xn are normally distributed and let θ = EXi = µ.
(a) Test the null hypothesis derived above using Student’s 1-sample
t-test. What is the significance probability? If we adopt a signif-
icance level of α = 0.05, should we reject the null hypothesis?
(b) Construct a (2-sided) confidence interval for θ with a confidence
coefficient of approximately 0.90.
4. Now we drop the assumption of normality. Assume that X1, . . . , Xn are
symmetric (but not necessarily normal), continuous random variables
and let θ = q2(Xi).
(a) Test the null hypothesis derived above using the Wilcoxon signed
rank test. What is the significance probability? If we adopt a
significance level of α = 0.05, should we reject the null hypothesis?
(b) Estimate θ by computing the median of the Walsh averages.
(c) Construct a (2-sided) confidence interval for θ with a confidence
coefficient of approximately 0.90.
5. Finally we drop the assumption of symmetry, assuming only that
X1, . . . , Xn are continuous random variables, and let θ = q2(Xi).
(a) Test the null hypothesis derived above using the sign test. What
is the significance probability? If we adopt a significance level of
α = 0.05, should we reject the null hypothesis?
(b) Estimate θ by computing the sample median.
(c) Construct a (2-sided) confidence interval for θ with a confidence
coefficient of approximately 0.90.
Problem Set C The ancient Greeks greatly admired rectangles with a height-to-width ratio of
1 : (1 + √5)/2 ≈ 0.618034.
They called this number the “golden ratio” and used it repeatedly in their
art and architecture, e.g., in building the Parthenon. Furthermore, golden
rectangles are often found in the art of later western cultures.
A cultural anthropologist wondered if the Shoshoni, a native American
civilization, also used golden rectangles. The following measurements, which
appear as Data Set 150 in A Handbook of Small Data Sets, are height-to-
width ratios of beaded rectangles used by the Shoshoni in decorating various
leather goods:
0.693 0.662 0.690 0.606 0.570
0.749 0.672 0.628 0.609 0.844
0.654 0.615 0.668 0.601 0.576
0.670 0.606 0.611 0.553 0.933
We will analyze the Shoshoni rectangles as a 1-sample location problem.
1. There are two natural scales that we might use in analyzing these
data. One possibility is to analyze the ratios themselves; the other is
to analyze the (natural) logarithms of the ratios. For which of these
possibilities would an assumption of normality seem more plausible?
Please justify your answer.
2. Choose the possibility (ratios or logarithms of ratios) for which an as-
sumption of normality seems more plausible. Formulate suitable null
and alternative hypotheses for testing the possibility that the Shoshoni
were using golden rectangles. Using Student’s 1-sample t-test, compute
a significance probability for testing these hypotheses. Would you re-
ject or accept the null hypothesis using a significance level of 0.05?
3. Suppose that we are unwilling to assume that either the ratios or the
log-ratios were drawn from a normal distribution. Use the sign test to
construct a 0.90-level confidence interval for the population median of
the ratios.
Problem Set D Researchers studied the effect of the drug captopril on
essential hypertension, reporting their findings in the British Medical Jour-
nal. They measured the supine systolic and diastolic blood pressures of 15
patients with moderate essential hypertension, immediately before and two
hours after administering captopril. The following measurements are Data Set 72 in A Handbook of Small Data Sets:
Patient   Systolic before   Systolic after   Diastolic before   Diastolic after
1 210 201 130 125
2 169 165 122 121
3 187 166 124 121
4 160 157 104 106
5 167 147 112 101
6 176 145 101 85
7 185 168 121 98
8 206 180 124 105
9 173 147 115 103
10 146 136 102 98
11 174 151 98 90
12 201 168 119 98
13 198 179 106 110
14 148 129 107 103
15 154 131 100 82
We will consider the question of whether or not captopril affects systolic and
diastolic blood pressure differently.
1. Let SB and SA denote before and after systolic blood pressure; let
DB and DA denote before and after diastolic blood pressure. There
are several random variables that might be of interest:
Xi = (SBi − SAi) − (DBi − DAi)  (10.4)
Xi = (SBi − SAi)/SBi − (DBi − DAi)/DBi  (10.5)
Xi = [(SBi − SAi)/SBi] ÷ [(DBi − DAi)/DBi]  (10.6)
Xi = log{ [(SBi − SAi)/SBi] ÷ [(DBi − DAi)/DBi] }  (10.7)
Suggest rationales for considering each of these possibilities.
2. Which (if any) of the above random variables appear to be normally
distributed? Which appear to be symmetrically distributed?
3. Does captopril affect systolic and diastolic blood pressure differently?
Write a brief report that summarizes your investigation and presents
your conclusion(s).
Chapter 11
2-Sample Location Problems
Thus far, in Chapters 9 and 10, we have studied inferences about a single
population. In contrast, the present chapter is concerned with comparing
two populations with respect to some measure of centrality, typically the
population mean or the population median. Specifically, we assume the
following:
1. X1, . . . , Xn1 ∼ P1 and Y1, . . . , Yn2 ∼ P2 are continuous random vari-
ables. The Xi and the Yj are mutually independent. In particular,
there is no natural pairing of X1 with Y1, X2 with Y2, etc.
2. P1 has location parameter θ1 and P2 has location parameter θ2. We
assume that comparisons of θ1 and θ2 are meaningful. For example, we
might compare population means, θ1 = µ1 = EXi and θ2 = µ2 = EYj,
or population medians, θ1 = q2(Xi) and θ2 = q2(Yj), but we would
not compare the mean of one population and the median of another
population. The shift parameter, ∆ = θ1 − θ2, measures the difference
in population location.
3. We observe random samples ~x = {x1, . . . , xn1} and ~y = {y1, . . . , yn2},
from which we attempt to draw inferences about ∆. Notice that we
do not assume that n1 = n2.
The same four questions that we posed at the beginning of Chapter 10
can be asked here. What distinguishes 2-sample problems from 1-sample
problems is the number of populations from which the experimental units
were drawn. The prototypical case of a 2-sample problem is the case of a
treatment population and a control population. We begin by considering
some examples.
Example 11.1 A researcher investigated the effect of Alzheimer’s dis-
ease (AD) on ability to perform a confrontation naming task. She recruited
60 mildly demented AD patients and 60 normal elderly control subjects.
The control subjects resembled the AD patients in that the two groups had
comparable mean ages, years of education, and (estimated) IQ scores; how-
ever, the control subjects were not individually matched to the AD patients.
Each person was administered the Boston Naming Test (BNT), on which
higher scores represent better performance. For this experiment:
1. An experimental unit is a person.
2. The experimental units belong to one of two populations: AD patients
or normal elderly persons.
3. One measurement (score on BNT) is taken on each experimental unit.
4. Let Xi denote the BNT score for AD patient i. Let Yj denote the BNT
score for control subject j. Then X1, . . . , Xn1 ∼ P1, Y1, . . . , Yn2 ∼ P2,
and we are interested in drawing inferences about ∆ = θ1 − θ2. Notice
that ∆ < 0 if and only if θ1 < θ2. Thus, to document that AD
compromises confrontation naming ability, we might test H0 : ∆ ≥ 0
against H1 : ∆ < 0.
Example 11.2 A drug is supposed to lower blood pressure. To deter-
mine if it does, n1 + n2 hypertensive patients are recruited to participate
in a double-blind study. The patients are randomly assigned to a treatment
group of n1 patients and a control group of n2 patients. Each patient in
the treatment group receives the drug for two months; each patient in the
control group receives a placebo for the same period. Each patient’s blood
pressure is measured before and after the two month period, and neither the
patient nor the technician know to which group the patient was assigned.
For this experiment:
1. An experimental unit is a patient.
2. The experimental units belong to one of two populations: hypertensive
patients who receive the drug and hypertensive patients who receive
the placebo. Notice that there are two populations despite the fact that
all n1 + n2 patients were initially recruited from a single population.
Different treatment protocols create different populations.
3. Two measurements (blood pressure before and after treatment) are
taken on each experimental unit.
4. Let B1i and A1i denote the before and after blood pressures of patient
i in the treatment group. Similarly, let B2j and A2j denote the before
and after blood pressures of patient j in the control group. Let Xi =
B1i −A1i, the decrease in blood pressure for patient i in the treatment
group, and let Yj = B2j−A2j, the decrease in blood pressure for patient
j in the control group. Then X1, . . . , Xn1 ∼ P1, Y1, . . . , Yn2 ∼ P2, and
we are interested in drawing inferences about ∆ = θ1 −θ2. Notice that
∆ > 0 if and only if θ1 > θ2, i.e., if the decrease in blood pressure is
greater for the treatment group than for the control group. Thus, a
drug company required to produce compelling evidence of the drug’s
efficacy might test H0 : ∆ ≤ 0 against H1 : ∆ > 0.
This chapter is divided into three sections:
11.1 If the data are assumed to be normally distributed, then we will be
interested in inferences about the difference in population means. We
will distinguish three cases, corresponding to what is known about the
population variances.
11.2 If the data are only assumed to be continuously distributed, then we
will be interested in inferences about the difference in population me-
dians. We will assume a shift model, i.e., we will assume that P1 and
P2 only differ with respect to location.
11.3 If the data are also assumed to be symmetrically distributed, then
we will be interested in inferences about the difference in population
centers of symmetry. If we assume symmetry, then we need not assume
a shift model.
11.1 The Normal 2-Sample Location Problem
In this section we assume that
P1 = Normal(µ1, σ1²) and P2 = Normal(µ2, σ2²).
In describing inferential methods for ∆ = µ1 − µ2, we emphasize connections with material in Chapter 9 and Section 10.1. For example, the natural estimator of a single normal population mean µ is the plug-in estimator µ̂, the sample mean, an unbiased, consistent, asymptotically efficient estimator of µ. In precise analogy, the natural estimator of ∆ = µ1 − µ2, the difference in population means, is ∆̂ = µ̂1 − µ̂2 = X̄ − Ȳ, the difference in sample means. Because
E∆̂ = EX̄ − EȲ = µ1 − µ2 = ∆,
∆̂ is an unbiased estimator of ∆. It is also consistent and asymptotically efficient.
In Chapter 9 and Section 10.1, hypothesis testing and set estimation
for a single population mean were based on knowing the distribution of the
standardized natural estimator, a random variable of the form
(sample mean − hypothesized mean) / (standard deviation of sample mean).
The denominator of this random variable, often called the standard error, was either known or estimated, depending on our knowledge of the population variance σ². For σ² known, we learned that
Z = (X̄ − µ0)/√(σ²/n) ∼ Normal(0, 1) if X1, . . . , Xn ∼ Normal(µ0, σ²), and Z is approximately Normal(0, 1) if n is large.
For σ² unknown and estimated by S², we learned that
T = (X̄ − µ0)/√(S²/n) ∼ t(n − 1) if X1, . . . , Xn ∼ Normal(µ0, σ²), and T is approximately Normal(0, 1) if n is large.
These facts allowed us to construct confidence intervals for and test hypotheses about the population mean. The confidence intervals were of the form
(sample mean) ± q · (standard error),
where the critical value q is the appropriate quantile of the distribution of Z or T. The tests also were based on Z or T, and the significance probabilities were computed using the corresponding distribution.
The logic for drawing inferences about two population means is identical to the logic for drawing inferences about one population mean: we simply replace “mean” with “difference in means” and base inferences about ∆ on the distribution of
(sample difference − hypothesized difference) / (standard deviation of sample difference) = (∆̂ − ∆0) / (standard error).
Because Xi ∼ Normal(µ1, σ1²) and Yj ∼ Normal(µ2, σ2²),
X̄ ∼ Normal(µ1, σ1²/n1) and Ȳ ∼ Normal(µ2, σ2²/n2).
Because X̄ and Ȳ are independent, it follows from Theorem 5.2 that
∆̂ = X̄ − Ȳ ∼ Normal(∆ = µ1 − µ2, σ1²/n1 + σ2²/n2).
We now distinguish three cases:
1. Both σi are known (and possibly unequal). The inferential theory for
this case is easy; unfortunately, population variances are rarely known.
2. The σi are unknown, but necessarily equal (σ1 = σ2 = σ). This case
should strike the student as somewhat implausible. If the population
variances are not known, then under what circumstances might we
reasonably assume that they are equal? Although such circumstances
do exist, the primary importance of this case is that the correspond-
ing theory is elementary. Nevertheless, it is important to study this
case because the methods derived from the assumption of an unknown
common variance are widely used—and abused.
3. The σi are unknown and possibly unequal. This is clearly the case of
greatest practical importance, but the corresponding theory is some-
what unsatisfying. The problem of drawing inferences when the pop-
ulation variances are unknown and possibly unequal is sufficiently no-
torious that it has a name: the Behrens-Fisher problem.
11.1.1 Known Variances
If ∆ = ∆0, then
Z = (∆̂ − ∆0) / √(σ1²/n1 + σ2²/n2) ∼ Normal(0, 1).
Given α ∈ (0, 1), let qz denote the 1 − α/2 quantile of Normal(0, 1). We construct a (1 − α)-level confidence interval for ∆ by writing
1 − α = P(|Z| < qz)
      = P( |∆̂ − ∆| < qz √(σ1²/n1 + σ2²/n2) )
      = P( ∆̂ − qz √(σ1²/n1 + σ2²/n2) < ∆ < ∆̂ + qz √(σ1²/n1 + σ2²/n2) ).
The desired confidence interval is
∆̂ ± qz √(σ1²/n1 + σ2²/n2).
Example 11.3 For the first population, suppose that we know that
the population standard deviation is σ1 = 5 and that we observe a sample of
size n1 = 60 with sample mean x̄ = 7.6. For the second population, suppose
that we know that the population standard deviation is σ2 = 2.5 and that
we observe a sample of size n2 = 15 with sample mean ȳ = 5.2. To construct
a 0.95-level confidence interval for ∆, we first compute
qz = qnorm(.975) = 1.959964 ≈ 1.96,
then
(7.6 − 5.2) ± 1.96 √(5²/60 + 2.5²/15) ≈ 2.4 ± 1.79 = (0.61, 4.19).
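A sketch of this computation in R:

xbar <- 7.6; ybar <- 5.2
sigma1 <- 5; sigma2 <- 2.5
n1 <- 60; n2 <- 15
se <- sqrt(sigma1^2/n1 + sigma2^2/n2)         # standard error of the difference
(xbar - ybar) + c(-1, 1) * qnorm(0.975) * se  # (0.61, 4.19)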
Example 11.4 For the first population, suppose that we know that the population variance is σ1² = 8 and that we observe a sample of size n1 = 10 with sample mean x̄ = 9.7. For the second population, suppose that we know that the population variance is σ2² = 96 and that we observe a sample of size n2 = 5 with sample mean ȳ = 2.6. To construct a 0.95-level confidence interval for ∆, we first compute
qz = qnorm(.975) = 1.959964 ≈ 1.96,
then
(9.7 − 2.6) ± 1.96 √(8/10 + 96/5) ≈ 7.1 ± 8.765 = (−1.665, 15.865).
To test H0 : ∆ = ∆0 versus H1 : ∆ ≠ ∆0, we exploit the fact that
Z ∼ Normal(0, 1) under H0. Let z denote the observed value of Z. Then a
natural level-α test is the test that rejects H0 if and only if
p = P∆0 (|Z| ≥ |z|) ≤ α,
which is equivalent to rejecting H0 if and only if |z| ≥ qz. This test is
sometimes called the 2-sample z-test.
Example 11.3 (continued) To test H0 : ∆ = 0 versus H1 : ∆ ≠ 0,
we compute
z = [(7.6 − 5.2) − 0] / √(5²/60 + 2.5²/15) ≈ 2.629.

Because |2.629| > 1.96, we reject H0 at significance level α = 0.05. The
significance probability is

p = P∆0 (|Z| ≥ |2.629|) = 2 ∗ pnorm(−2.629) ≈ 0.008562.
Example 11.4 (continued) To test H0 : ∆ = 0 versus H1 : ∆ ≠ 0,
we compute
z = [(9.7 − 2.6) − 0] / √(8/10 + 96/5) ≈ 1.5876.

Because |1.5876| < 1.96, we decline to reject H0 at significance level α = 0.05.
The significance probability is

p = P∆0 (|Z| ≥ |1.5876|) = 2 ∗ pnorm(−1.5876) ≈ 0.1124.
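The 2-sample z-test itself takes only a few lines of R. The following sketch wraps the computation in a small function of our own devising and reproduces the statistics and significance probabilities in Examples 11.3 and 11.4.

> # 2-sample z-test of H0: Delta = Delta0 when both population variances are known
> z.2sample <- function(xbar, ybar, sigma1, sigma2, n1, n2, Delta0 = 0) {
+   z <- (xbar - ybar - Delta0) / sqrt(sigma1^2/n1 + sigma2^2/n2)
+   p <- 2 * pnorm(-abs(z))   # 2-sided significance probability
+   c(z = z, p = p)
+ }
> z.2sample(7.6, 5.2, 5, 2.5, 60, 15)            # Example 11.3: z = 2.629, p = 0.0086
> z.2sample(9.7, 2.6, sqrt(8), sqrt(96), 10, 5)  # Example 11.4: z = 1.588, p = 0.112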
11.1.2 Unknown Common Variance
Now we assume that σ1 = σ2 = σ, but that the common variance σ2 is
unknown. Because σ2 is unknown, we must estimate it. Let
S1² = (1/(n1 − 1)) Σ_{i=1}^{n1} (Xi − X̄)²

denote the sample variance for the Xi and let

S2² = (1/(n2 − 1)) Σ_{j=1}^{n2} (Yj − Ȳ)²

denote the sample variance for the Yj. If we only sampled the first population,
then we would use S1² to estimate the first population variance, σ1².
Likewise, if we only sampled the second population, then we would use S2²
to estimate the second population variance, σ2². Neither is appropriate in
the present situation, as S1² does not use the second sample and S2² does not
use the first sample. Therefore, we create a weighted average of the separate
sample variances,

S_P² = [(n1 − 1)S1² + (n2 − 1)S2²] / [(n1 − 1) + (n2 − 1)]
     = (1/(n1 + n2 − 2)) [ Σ_{i=1}^{n1} (Xi − X̄)² + Σ_{j=1}^{n2} (Yj − Ȳ)² ],
the pooled sample variance. Then
E S_P² = [(n1 − 1) E S1² + (n2 − 1) E S2²] / [(n1 − 1) + (n2 − 1)]
       = [(n1 − 1)σ² + (n2 − 1)σ²] / [(n1 − 1) + (n2 − 1)]
       = σ²,
so the pooled sample variance is an unbiased estimator of a common popula-
tion variance. It is also consistent and asymptotically efficient for estimating
a common normal variance.
Instead of
Z = (∆̂ − ∆0) / √(σ1²/n1 + σ2²/n2) = (∆̂ − ∆0) / √[(1/n1 + 1/n2) σ²],

we now rely on

T = (∆̂ − ∆0) / √[(1/n1 + 1/n2) S_P²].
The following result allows us to construct confidence intervals and test
hypotheses about the shift parameter ∆ = µ1 − µ2.
Theorem 11.1 If ∆ = ∆0, then T ∼ t(n1 + n2 − 2).
Given α ∈ (0, 1), let qt denote the 1 − α/2 quantile of t(n1 + n2 − 2).
Exploiting Theorem 11.1, a (1 − α)-level confidence interval for ∆ is
∆̂ ± qt √[(1/n1 + 1/n2) S_P²].
Example 11.3 (continued) Now suppose that, instead of knowing
population standard deviations σ1 = 5 and σ2 = 2.5, we observe sample
standard deviations s1 = 5 and s2 = 2.5. The ratio of sample variances,
s1²/s2² = 4 ≠ 1, strongly suggests that the population variances are unequal.
We proceed under the assumption that σ1 = σ2 for the purpose of illustra-
tion. The pooled sample variance is
s_P² = (59 · 5² + 14 · 2.5²) / (59 + 14) = 21.40411.
To construct a 0.95-level confidence interval for ∆, we first compute
qt = qt(.975, 73) = 1.992997 ≈ 1.993,

then

(7.6 − 5.2) ± 1.993 √[(1/60 + 1/15) · 21.40411] ≈ 2.4 ± 2.66 = (−0.26, 5.06).
Example 11.4 (continued) Now suppose that, instead of knowing
population variances σ1² = 8 and σ2² = 96, we observe sample variances
s1² = 8 and s2² = 96. Again, the ratio of sample variances, s2²/s1² = 12 ≠ 1,
strongly suggests that the population variances are unequal. We proceed
under the assumption that σ1 = σ2 for the purpose of illustration. The
pooled sample variance is
pooled sample variance is
S2
P =
s
9 · 8 + 4 · 96
9 + 4
= 35.07692.
To construct a 0.95-level confidence interval for ∆, we first compute
qt = qt(.975, 13) = 2.160369
.
= 2.16,
then
(9.7 − 2.6) ± 2.16
sµ
1
10
+
1
5
¶
· 35.07692
.
= 7.1 ± 7.01 = (0.09, 14.11).
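The pooled-variance calculations can also be scripted from the summary statistics alone. Here is a minimal sketch for Example 11.4; the variable names are ours.

> # Student's 0.95-level confidence interval for Delta from summary statistics
> xbar <- 9.7; s1sq <- 8;  n1 <- 10
> ybar <- 2.6; s2sq <- 96; n2 <- 5
> sPsq <- ((n1-1)*s1sq + (n2-1)*s2sq) / (n1 + n2 - 2)   # pooled sample variance
> se <- sqrt((1/n1 + 1/n2) * sPsq)
> q <- qt(.975, df = n1 + n2 - 2)
> (xbar - ybar) + c(-1, 1)*q*se                         # approximately (0.09, 14.11)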
To test H0 : ∆ = ∆0 versus H1 : ∆ ≠ ∆0, we exploit the fact that
T ∼ t(n1 + n2 − 2) under H0. Let t denote the observed value of T. Then a
natural level-α test is the test that rejects H0 if and only if
p = P∆0 (|T| ≥ |t|) ≤ α,
which is equivalent to rejecting H0 if and only if |t| ≥ qt. This test is called
Student’s 2-sample t-test.
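When the raw samples, rather than summary statistics, are available, this test is built into R: t.test with var.equal=TRUE performs Student's 2-sample t-test. A minimal sketch, assuming the observations are stored in vectors x and y:

> # Student's 2-sample t-test (pooled variance) applied to raw samples x and y
> t.test(x, y, var.equal = TRUE, conf.level = 0.95)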
Example 11.3 (continued) To test H0 : ∆ = 0 versus H1 : ∆ ≠ 0,
we compute
t = [(7.6 − 5.2) − 0] / √[(1/60 + 1/15) · 21.40411] ≈ 1.797.

Because |1.797| < 1.993, we decline to reject H0 at significance level α = .05.
The significance probability is

p = P∆0 (|T| ≥ |1.797|) = 2 ∗ pt(−1.797, 73) ≈ 0.0764684.
Example 11.4 (continued) To test H0 : ∆ = 0 versus H1 : ∆ ≠ 0,
we compute
t = [(9.7 − 2.6) − 0] / √[(1/10 + 1/5) · 35.07692] ≈ 2.19.

Because |2.19| > 2.16, we reject H0 at significance level α = .05. The
significance probability is

p = P∆0 (|T| ≥ |2.19|) = 2 ∗ pt(−2.19, 13) ≈ 0.04747.
11.1.3 Unknown Variances
Now we drop the assumption that σ1 = σ2. We must then estimate each
population variance separately, σ1² with S1² and σ2² with S2². Instead of
Z = (∆̂ − ∆0) / √(σ1²/n1 + σ2²/n2)

we now rely on

TW = (∆̂ − ∆0) / √(S1²/n1 + S2²/n2).
Unfortunately, there is no analogue of Theorem 11.1—the exact distribution
of TW is not known.
The exact distribution of TW appears to be intractable, but Welch (1937,
1947) argued that TW is approximately distributed as t(ν), with

ν = (σ1²/n1 + σ2²/n2)² / [ (σ1²/n1)²/(n1 − 1) + (σ2²/n2)²/(n2 − 1) ].
Because σ1² and σ2² are unknown, we estimate ν by

ν̂ = (S1²/n1 + S2²/n2)² / [ (S1²/n1)²/(n1 − 1) + (S2²/n2)²/(n2 − 1) ].
Simulation studies have revealed that the approximation TW ∼ t(ν̂) works
well in practice.
Given α ∈ (0, 1), let qt denote the 1−α/2 quantile of t(ν̂). Using Welch’s
approximation, an approximate (1 − α)-level confidence interval for ∆ is
∆̂ ± qt √(S1²/n1 + S2²/n2).
Example 11.3 (continued) Now we estimate the unknown population
variances separately, σ1² by s1² = 5² and σ2² by s2² = 2.5². Welch's
approximation involves
ν̂ = (5²/60 + 2.5²/15)² / [ (5²/60)²/(60 − 1) + (2.5²/15)²/(15 − 1) ] = 45.26027 ≈ 45.26
degrees of freedom. To construct a 0.95-level confidence interval for ∆, we
first compute
qt = qt(.975, 45.26) ≈ 2.014,

then

(7.6 − 5.2) ± 2.014 √(5²/60 + 2.5²/15) ≈ 2.4 ± 1.84 = (0.56, 4.24).
Example 11.4 (continued) Now we estimate the unknown population
variances separately, σ1² by s1² = 8 and σ2² by s2² = 96. Welch's
approximation involves
ν̂ = (8/10 + 96/5)² / [ (8/10)²/(10 − 1) + (96/5)²/(5 − 1) ] = 4.336931 ≈ 4.337
degrees of freedom. To construct a 0.95-level confidence interval for ∆, we
first compute
qt = qt(.975, 4.337) ≈ 2.6934,

then

(9.7 − 2.6) ± 2.6934 √(8/10 + 96/5) ≈ 7.1 ± 13.413 = (−6.313, 20.513).
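Welch's degrees of freedom and the corresponding confidence interval are also easily scripted from the summary statistics. Here is a minimal sketch for Example 11.3; the variable names are ours.

> # Welch's approximate confidence interval for Delta from summary statistics
> xbar <- 7.6; s1sq <- 5^2;   n1 <- 60
> ybar <- 5.2; s2sq <- 2.5^2; n2 <- 15
> v1 <- s1sq/n1; v2 <- s2sq/n2
> nu.hat <- (v1 + v2)^2 / (v1^2/(n1-1) + v2^2/(n2-1))   # estimated df, about 45.26
> q <- qt(.975, df = nu.hat)
> (xbar - ybar) + c(-1, 1)*q*sqrt(v1 + v2)              # approximately (0.56, 4.24)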
To test H0 : ∆ = ∆0 versus H1 : ∆ ≠ ∆0, we exploit the approximation
TW ∼ t(ν̂) under H0. Let tW denote the observed value of TW . Then a
natural approximate level-α test is the test that rejects H0 if and only if
p = P∆0 (|TW | ≥ |tW |) ≤ α,
which is equivalent to rejecting H0 if and only if |tW | ≥ qt. This test is
sometimes called Welch’s approximate t-test.
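With raw samples in hand, Welch's approximate t-test is the default behavior of R's t.test function (var.equal defaults to FALSE). A minimal sketch, again assuming the observations are stored in vectors x and y:

> # Welch's approximate t-test applied to raw samples x and y (the t.test default)
> t.test(x, y)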
Example 11.3 (continued) To test H0 : ∆ = 0 versus H1 : ∆ ≠ 0,
we compute
tW = [(7.6 − 5.2) − 0] / √(5²/60 + 2.5²/15) ≈ 2.629.

Because |2.629| > 2.014, we reject H0 at significance level α = 0.05. The
significance probability is

p = P∆0 (|TW | ≥ |2.629|) = 2 ∗ pt(−2.629, 45.26) ≈ 0.011655.
Example 11.4 (continued) To test H0 : ∆ = 0 versus H1 : ∆ ≠ 0,
we compute
tW = [(9.7 − 2.6) − 0] / √(8/10 + 96/5) ≈ 1.4257.

Because |1.4257| < 2.6934, we decline to reject H0 at significance level α = 0.05.
The significance probability is

p = P∆0 (|TW | ≥ |1.4257|) = 2 ∗ pt(−1.4257, 4.337) ≈ 0.2218.
Examples 11.3 and 11.4 were carefully constructed to reveal the sensi-
tivity of Student’s 2-sample t-test to the assumption of equal population
variances. Welch’s approximation is good enough that we can use it to
benchmark Student’s test when variances are unequal. In Example 11.3,
Welch's approximate t-test produced a significance probability of p ≈ 0.012,
leading us to reject the null hypothesis at α = 0.05. Student's 2-sample
t-test produced a misleading significance probability of p ≈ 0.076, leading
us to commit a Type II error. In Example 11.4, Welch's approximate t-test
produced a significance probability of p ≈ 0.222, leading us to accept the
null hypothesis at α = 0.05. Student's 2-sample t-test produced a misleading
significance probability of p ≈ 0.047, leading us to commit a Type I error.
Evidently, Student’s 2-sample t-test (and the corresponding procedure for
constructing confidence intervals) should not be used unless one is convinced
that the population variances are identical. The consequences of using Stu-
dent’s test when the population variances are unequal may be exacerbated
when the sample sizes are unequal. In general:
• If n1 = n2, then t = tW .
• If the population variances are (approximately) equal, then t and tW
tend to be (approximately) equal.
• If the larger sample is drawn from the population with the larger vari-
ance, then t will tend to be less than tW . All else equal, this means
that Student’s test will tend to produce significance probabilities that
are too large.
• If the larger sample is drawn from the population with the smaller
variance, then t will tend to be greater than tW . All else equal, this
means that Student’s test will tend to produce significance probabilities
that are too small.
• If the population variances are (approximately) equal, then ν̂ will be
(approximately) n1 + n2 − 2.
• It will always be the case that ν̂ ≤ n1+n2−2. All else equal, this means
that Student's test will tend to produce significance probabilities that
are too small.
From these observations we draw the following conclusions:
1. If the population variances are unequal, then Student’s 2-sample t-test
may produce misleading significance probabilities.
2. If the population variances are equal, then Welch’s approximate t-
test is approximately equivalent to Student’s 2-sample t-test. Thus,
if one uses Welch’s test in the situation for which Student’s test is
appropriate, one is not likely to be led astray.
141 148 132 138 154 142 150 146 155 158 150 140
147 148 144 150 149 145 149 158 143 141 144 144
126 140 144 142 141 140 145 135 147 146 141 136
140 146 142 137 148 154 137 139 143 140 131 143
141 149 148 135 148 152 143 144 141 143 147 146
150 132 142 142 143 153 149 146 149 138 142 149
142 137 134 144 146 147 140 142 140 137 152 145
133 138 130 138 134 127 128 138 136 131 126 120
124 132 132 125 139 127 133 136 121 131 125 130
129 125 136 131 132 127 129 132 116 134 125 128
139 132 130 132 128 139 135 133 128 130 130 143
144 137 140 136 135 126 139 131 133 138 133 137
140 130 137 134 130 148 135 138 135 138
Table 11.1: Maximum breadth (in millimeters) of 84 skulls of Etruscan males
(top) and 70 skulls of modern Italian males.
3. Don’t use Student’s 2-sample t-test! I remember how shocked I was
when I first heard this advice as a first-year graduate student in a
course devoted to the theory of hypothesis testing. The instructor,
Erich Lehmann, one of the great statisticians of the 20th century and
the author of a famous book on hypothesis testing, told us: “If you
get just one thing out of this course, I’d like it to be that you should
never use Student’s 2-sample t-test.”
11.2 The Case of a General Shift Family
11.3 The Symmetric Behrens-Fisher Problem
11.4 Case Study: Etruscan versus Italian Head Breadth
In a collection of essays on the origin of the Etruscan empire, N.A. Barnicott
and D.R. Brothwell compared measurements on ancient and modern bones.1
¹N.A. Barnicott and D.R. Brothwell (1959). The evaluation of metrical data in the
comparison of ancient and modern bones. In Medical Biology and Etruscan Origins, edited
by G.E.W. Wolstenholme and C.M. O'Connor, Little, Brown & Company, p. 136.
Figure 11.1: Normal probability plots of two samples of maximum skull
breadth (sample quantiles versus theoretical quantiles for the Etruscan and
Italian samples).
Measurements of the maximum breadth of 84 Etruscan skulls and 70 modern
Italian skulls were subsequently reproduced as Data Set 155 in A Handbook
of Small Data Sets and are displayed in Table 11.1. We use these data to
explore the difference (if any) between Etruscan and modern Italian males
with respect to head breadth. In the discussion that follows, x will denote
Etruscans and y will denote modern Italians.
We begin by asking if it is reasonable to assume that maximum skull
breadth is normally distributed. Normal probability plots of our two sam-
ples are displayed in Figure 11.1. The linearity of these plots conveys the
distinct impression of normality. Kernel density estimates constructed from
the two samples are superimposed in Figure 11.2, created by the following R
commands:
> plot(density(x),type="l",xlim=c(100,180),
+ xlab="Maximum Skull Breadth",
+ main="Kernel Density Estimates")
> lines(density(y),type="l")
Not only do the kernel density estimates reinforce our impression of nor-
Figure 11.2: Kernel density estimates constructed from two samples of max-
imum skull breadth. The sample mean for the Etruscan skulls is x̄ ≈ 143.8;
the sample mean for the modern Italian skulls is ȳ ≈ 132.4.
mality, they also suggest that the two populations have comparable vari-
ances. (The ratio of sample variances is s1²/s2² = 1.07819.) The difference
in maximum breadth between Etruscan and modern Italian skulls is nicely
summarized by a shift parameter.
Now we construct a probability model. This is a 2-sample location prob-
lem in which an experimental unit is a skull. The skulls were drawn from
two populations, Etruscan males and modern Italian males, and one mea-
surement (maximum breadth) was made on each experimental unit. Let
Xi denote the maximum breadth of Etruscan skull i and let Yj denote the
maximum breadth of Italian skull j. We assume that the Xi and Yj are
independent, with Xi ∼ Normal(µ1, σ1²) and Yj ∼ Normal(µ2, σ2²). Notice
that, although the sample variances are nearly equal, we do not assume
that the population variances are identical. Instead, we will use Welch’s ap-
proximation to construct an approximate 0.95-level confidence interval for
∆ = µ1 − µ2.
Because the confidence coefficient 1 − α = 0.95, α = 0.05. The desired
confidence interval is of the form
∆̂ ± q √(s1²/n1 + s2²/n2),

where q is the 1 − α/2 = 0.975 quantile of a t distribution with ν̂ degrees of
freedom. We can easily compute these quantities in R. To compute ∆̂, the
estimated shift parameter:
> Delta <- mean(x)-mean(y)
To compute the standard error:
> n1 <- length(x)
> n2 <- length(y)
> v1 <- var(x)/n1
> v2 <- var(y)/n2
> se <- sqrt(v1+v2)
To compute ν̂, the estimated degrees of freedom:
> nu <- (v1+v2)^2/(v1^2/(n1-1)+v2^2/(n2-1))
To compute q, the desired quantile:
> q <- qt(.975,df=nu)
Finally, to compute the lower and upper endpoints of the desired confidence
interval:
> lower <- Delta-q*se
> upper <- Delta+q*se
These calculations result in a 0.95-level confidence interval for ∆ = µ1 − µ2
of (9.459782, 13.20212), so that we can be fairly confident that the maximum
breadth of Etruscan male skulls is, on average, roughly a centimeter greater
than the maximum breadth of modern Italian male skulls.
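As a check on the hand calculation, the same interval can be obtained in a single call, because Welch's approximation is what t.test computes by default:

> # Welch interval for the skull data in one step; x and y are as defined above
> t.test(x, y, conf.level = 0.95)$conf.int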
11.5 Exercises
Problem Set A
1. We have been using various mathematical symbols in our study of 1-
and 2-sample location problems. Each of the symbols listed below is
used to represent a real number. State which of the following state-
ments applies to each symbol:
i. The real number represented by this symbol is an unknown pop-
ulation parameter.
ii. The real number represented by this symbol is calculated from
the observed data.
iii. The real number represented by this symbol is specified by the
experimenter.
Here are the symbols:
µ µ0 x̄ s² t α ∆ ∆0 p ν̂
2. Assume that X1, . . . , X10 ∼ Normal(µ1, σ1²) and that Y1, . . . , Y20 ∼
Normal(µ2, σ2²). None of the population parameters are known. Let
∆ = µ1 − µ2. To test H0 : ∆ ≥ 0 versus H1 : ∆ < 0 at significance
level α = 0.05, we observe samples ~x and ~y.
(a) What test should be used in this situation? If we observe ~x and
~y that result in x̄ = −0.82, s1 = 4.09, ȳ = 1.39, and s2 = 1.22,
then what is the value of the test statistic?
(b) If we observe ~x and ~y that result in s1 = 4.09, s2 = 1.22, and a
test statistic value of 1.76, then which of the following R expres-
sions best approximates the significance probability?
i. 2*pnorm(-1.76)
ii. pt(-1.76,df=28)
iii. pt(1.76,df=10)
iv. pt(-1.76,df=10)
v. 2*pt(1.76,df=28)
(c) True or False: if we observe ~x and ~y that result in a significance
probability of p = 0.96, then we should reject the null hypothesis.
Problem Set B Each of the following scenarios can be modelled as a 1-
or 2-sample location problem. For 1-sample problems, let Xi denote the
random variables of interest and let µ = EXi. For 2-sample problems, let
Xi and Yj denote the random variables of interest; let µ1 = EXi, µ2 = EYj,
and ∆ = µ1 − µ2. For each scenario, you should answer/do the following:
(a) What is the experimental unit?
(b) From how many populations were the experimental units
drawn? Identify the population(s). How many units were
drawn from each population? Is this a 1- or a 2-sample
problem?
(c) How many measurements were taken on each experimental
unit? Identify them.
(d) Define the parameter(s) of interest for this problem. For 1-
sample problems, this should be µ; for 2-sample problems,
this should be ∆.
(e) State appropriate null and alternative hypotheses.
Here are the scenarios:
1. A mathematics/education concentrator theorizes that learning math-
ematics and statistics is sometimes impeded by the widespread use of
odd symbols like α, χ, and ω. She reasons that, if her theory is cor-
rect, then students who belong to sororities and fraternities—who she
presumes are more familiar with Greek letters—should have an easier
time learning the mathematical subjects that use such symbols. To
investigate, she obtains a list of all William & Mary students who are
enrolled in Math 111 (calculus) and a list of all William & Mary stu-
dents who belong to a sorority or fraternity. She uses this information
to choose (at random) 20 calculus students who do belong to a soror-
ity or fraternity and 20 calculus students who do not. She persuades
each of these students to take a calculus quiz, specially designed to use
lots of Greek letters. How might she use the resulting data to test her
theory? (Respond to (a)–(e) above.)
2. Umberto theorizes that living with a dog diminishes depression in the
elderly, here defined as more than 70 years of age. To investigate his
theory, he recruits 15 single elderly men who own dogs and 15 single
elderly men who do not own any pets. The Hamilton instrument for
measuring depressive tendency is administered to each subject. High
scores indicate depression. How might Umberto use the resulting data
to test his theory? (Respond to (a)–(e) above.)
3. The William & Mary women’s tennis team uses championship balls in
their matches and less expensive practice balls in their team practices.
The players have formed a strong impression that the practice balls
do not wear as well as the championship balls, i.e., that the practice
balls lose their bounce more quickly than the championship balls. To
investigate this perception, Nina and Delphine conceive the following
experiment. Before one practice, the team opens new cans of cham-
pionship balls and practice balls, which they then use for that day’s
practice. After practice, Nina and Delphine randomly select 10 of the
used championship balls and 10 of the used practice balls. They drop
each ball from a height of 1 meter and measure the height of its first
bounce. How might Nina and Delphine test the team’s impression that
practice balls do not wear as well as championship balls? (Respond to
(a)–(e) above.)
4. A political scientist theorizes that women tend to be more opposed
to military intervention than do men. To investigate this theory, he
devises an instrument on which a subject responds to several recent
U.S. military interventions on a 5-point Likert scale (1=“strongly sup-
port,”. . . ,5=“strongly oppose”). A subject’s score on this instrument
is the sum of his/her individual responses. The scientist randomly se-
lects 50 married couples in which neither spouse has a registered party
affiliation and administers the instrument to each of the 100 individu-
als so selected. How might he use his results to determine if his theory
is correct? (Respond to (a)–(e) above.)
5. A shoe company claims that wearing its racing flats will typically im-
prove one’s time in a 10K road race by more than 30 seconds. A
running magazine sponsors an event to test this claim. It arranges for
120 runners to enter two road races, held two weeks apart on the same
course. For the second race, each of these runners is supplied with the
new racing flat. How might the race results be used to determine the
validity of the shoe company’s claim? (Respond to (a)–(e) above.)
6. Susan theorizes that impregnating wood with an IGR (insect growth
regulator) will reduce wood consumption by termites. To investigate
this theory, she impregnates 60 wood blocks with a solvent contain-
ing the IGR and 60 wood blocks with just the solvent. Each block is
weighed, then placed in a separate container with 100 ravenous ter-
mites. After two weeks, she removes the blocks and weighs them again
to determine how much wood has been consumed. How might Su-
san use her results to determine if her theory is correct? (Respond to
(a)–(e) above.)
7. To investigate the effect of swing dancing on cardiovascular fitness, an
exercise physiologist recruits 20 couples enrolled in introductory swing
dance classes. Each class meets once a week for ten weeks. Participants
are encouraged to go out dancing on at least two additional occasions
each week. In general, lower resting pulses are associated with greater
cardiovascular fitness. Accordingly, each participant’s resting pulse
is measured at the beginning and at the end of the ten-week class.
How might the resulting data be used to determine if swing dancing
improves cardiovascular fitness? (Respond to (a)–(e) above.)
8. It is thought that Alzheimer’s disease (AD) impairs short-term memory
more than it impairs long-term memory. To test this theory, a psychol-
ogist studied 60 mildly demented AD patients and 60 normal elderly
control subjects. Each subject was administered a short-term and a
long-term memory task. On each task, high scores are better than low
scores. How might the psychologist use the resulting task scores to
determine if the theory is correct? (Respond to (a)–(e) above.)
9. According to an article in Newsweek (May 10, 2004, page 89), recent
“studies have shown consistently that women are better than men at
reading and responding to subtle cues about mood and temperament.”
Some psychologists believe that such differences can be explained in
part by biological differences between male and female brains. One
such psychologist conducts a study in which day-old babies are shown
three human faces and three mechanical objects. The time that the
baby stares at each face/object is recorded. Of interest is how much
time the baby spends staring at faces versus how much time the baby
spends staring at objects. The psychologist’s theory predicts that this
comparison will differ by sex, with female babies preferring faces to
objects to a greater extent than do male babies. How might the psy-
chologist use his results to determine if his theory is correct? (Respond
to (a)–(e) above.)
Problem Set C In the early 1960s, the Western Collaborative Group
Study investigated the relation between behavior and risk of coronary heart
disease in middle-aged men. Type A behavior is characterized by urgency,
aggression and ambition; Type B behavior is noncompetitive, more relaxed
and less hurried. The following data, which appear in Table 2.1 of Selvin
(1991) and Data Set 47 in A Handbook of Small Data Sets, are the cholesterol
measurements of 20 heavy men of each behavior type. (In fact, these 40
men were the heaviest in the study. Each weighed at least 225 pounds.) We
consider whether or not they provide evidence that heavy Type A men have
higher cholesterol levels than heavy Type B men.
Cholesterol Levels for Heavy Type A Men
233 291 312 250 246 197 268 224 239 239
254 276 234 181 248 252 202 218 212 325
Cholesterol Levels for Heavy Type B Men
344 185 263 246 224 212 188 250 148 169
226 175 242 252 153 183 137 202 194 213
1. Respond to (a)–(e) in Problem Set B.
2. Does it seem reasonable to assume that the samples ~x and ~y, the ob-
served values of X1, . . . , Xn1 and Y1, . . . , Yn2, were drawn from normal
distributions? Why or why not?
3. Assume that the Xi and the Yj are normally distributed.
(a) Test the null hypothesis derived above using Welch’s approximate
t-test. What is the significance probability? If we adopt a signif-
icance level of α = 0.05, should we reject the null hypothesis?
(b) Construct a (2-sided) confidence interval for ∆ with a confidence
coefficient of approximately 0.90.
Problem Set D Researchers measured urinary β-thromboglobulin excre-
tion in 12 diabetic patients and 12 normal control subjects, reporting their
findings in Thrombosis and Haemostasis. The following measurements are
Data Set 313 in A Handbook of Small Data Sets:
Normal 4.1 6.3 7.8 8.5 8.9 10.4
11.5 12.0 13.8 17.6 24.3 37.2
Diabetic 11.5 12.1 16.1 17.8 24.0 28.8
33.9 40.7 51.3 56.2 61.7 69.2
1. Do these measurements appear to be samples from symmetric distri-
butions? Why or why not?
2. Both samples of positive real numbers appear to be drawn from dis-
tributions that are skewed to the right, i.e., the upper tail of the dis-
tribution is longer than the lower tail of the distribution. Often, such
distributions can be symmetrized by applying a suitable data transfor-
mation. Two popular candidates are:
(a) The natural logarithm: ui = log(xi) and vj = log(yj).
(b) The square root: ui = √xi and vj = √yj.
Investigate the effect of each of these transformations on the above
measurements. Do the transformed measurements appear to be sam-
ples from symmetric distributions? Which transformation do you pre-
fer?
3. Do the transformed measurements appear to be samples from normal
distributions? Why or why not?
4. The researchers claimed that diabetic patients have increased urinary
β-thromboglobulin excretion. Assuming that the transformed mea-
surements are samples from normal distributions, how convincing do
you find the evidence for their claim?
Problem Set E
1. Chemistry lab partners Arlen and Stuart collaborated on an experi-
ment in which they measured the melting points of 20 specimens of
two types of sealing wax. Twelve of the specimens were of one type
(A); eight were of the other type (B). Each student then used Welch’s
approximate t-test to test the null hypothesis of no difference in mean
melting point between the two methods:
• Arlen applied Welch’s approximate t-test to the original melting
points, which were measured in degrees Fahrenheit.
• Stuart first converted each melting point to degrees Celsius (by
subtracting 32, then multiplying by 5/9), then applied Welch’s
approximate t-test to the converted melting points.
Comment on the potential differences between these two analyses. In
particular, is it True or False that (ignoring round-off error) Arlen and
Stuart will obtain identical significance probabilities? Please justify
your comments.
2. A graduate student in ornithology would like to determine if created
marshes differ from natural marshes in their appeal to avian commu-
nities. He plans to observe n1 = 9 natural marshes and n2 = 9 created
marshes, counting the number of red-winged blackbirds per acre that
inhabit each marsh. His thesis committee wants to know how much he
thinks he will be able to learn from this experiment.
Let Xi denote the number of blackbirds per acre in natural marsh i and
let Yj denote the number of blackbirds per acre in created marsh j. In
order to respond to his committee, the student makes the simplifying
assumptions that Xi ∼ Normal(µ1, σ2) and Yj ∼ Normal(µ2, σ2). He
estimates that iqr(Xi) = iqr(Yj) = 10. Calculate L, the length of the
0.90-level confidence interval for ∆ = µ1 − µ2 that he can expect to
construct.
3. A film buff has formed the vague impression that movies tend to be
longer than they used to be. Are they really longer? Or do they
just seem longer? To investigate, he randomly samples U.S. feature
films made in 1956 and U.S. feature films made in 1996, obtaining the
following results:
Year Title Minutes
1956 Accused of Murder 74
Away All Boats 114
Baby Doll 114
The Bold and the Brave 87
Come Next Spring 92
The Flaming Teen-Age 55
Gun Girls 67
Helen of Troy 118
The Houston Story 79
Patterns 83
The Price of Fear 79
The Revolt of Mamie Stover 92
Written on the Wind 99
The Young Guns 87
1996 $40,000 70
Barb Wire 98
Breathing Room 90
Daddy’s Girl 95
Ed’s Next Move 88
From Dusk to Dawn 108
Galgameth 110
The Glass Cage 96
Kissing a Dream 91
Love & Sex etc. 88
Love is All There Is 120
Making the Rules 96
Spirit Lost 90
Work 90
Do these data provide convincing evidence that 1996 movies are longer
than 1956 movies? Compute a significance probability that may be
used to encourage or discourage the film buff’s impression. Explain
how this number should be interpreted. Identify and defend any as-
sumptions that you made in your calculations.
Chapter 12
k-Sample Location Problems
Now we generalize our study of location problems from two to k ≥ 3 popula-
tions. Again we are concerned with comparing the populations with respect
to some measure of centrality, typically the population mean or the popu-
lation median. We designate the populations by P1, . . . , Pk and the corre-
sponding sample sizes by n1, . . . , nk. Our bookkeeping will be facilitated by
the use of double subscripts, e.g.,
X11, . . . , X1n1 ∼ P1,
X21, . . . , X2n2 ∼ P2,
. . .
Xk1, . . . , Xknk ∼ Pk.

These expressions can be summarized succinctly by writing Xij ∼ Pi.
We assume the following:
1. The Xij are mutually independent continuous random variables.
2. Pi has location parameter θi, e.g., θi = µi = EXij or θi = q2(Xij).
3. We observe random samples ~xi = {xi1, . . . , xini}, from which we at-
tempt to draw inferences about (θ1, . . . , θk). In general, we do not
assume that n1 = · · · = nk. However, certain procedures do require
equal sample sizes. Furthermore, certain procedures that can be used
with unequal sample sizes are greatly simplified when the sample sizes
are equal.
The same four questions that we posed at the beginning of Chapter 10
and asked in Chapters 10–11 can be asked here. What distinguishes k-sample
problems from 1-sample and 2-sample problems is the number of populations
from which the experimental units were drawn. The prototypical case of a
k-sample problem is the case of several treatment populations.
One may wonder why we distinguish between k = 2 and k ≥ 3 pop-
ulations. In fact, many methods for k-sample problems can be applied to
2-sample problems, in which case they often simplify to methods studied
in Chapter 11. However, many issues arise with k ≥ 3 populations that
do not arise with two populations, so the problem of comparing more than
two location parameters is considerably more complicated than the problem
of comparing only two. For this reason, our study of k-sample location
problems will be less comprehensive than our previous studies of 1-sample
and 2-sample location problems.
12.1 The Case of a Normal Shift Family
In this section we assume that Pi = Normal(µi, σ²). This is sometimes called
the fixed effects model for the oneway analysis of variance (ANOVA). Notice
that we are assuming that each normal population has the same variance.
Recall that we criticized the assumption of equal variances for the normal
2-sample problem. In that setting, however, Welch’s approximate t-test
provides a viable alternative that is available in many popular statistical
software packages. In the more complicated setting of k normal popula-
tions, the assumption of equal variances (sometimes called the assumption
of homoscedasticity) is fairly standard, if only because it is less clear how to
proceed when the variances are unequal. The problem of unequal variances
is discussed in Section 12.3.
12.1.1 The Fundamental Null Hypothesis
The fundamental problem of the analysis of variance is the problem of testing
the null hypothesis that all of the population means are the same, i.e.,
H0 : µ1 = · · · = µk, (12.1)
against the alternative hypothesis that they are not all the same. Notice
that the statement that the population means are not identical does not
imply that each population mean is distinct. For example, if µ1 = µ2 = 1.5
and µ3 = 2.2, then H0 is false. We stress that the analysis of variance is
concerned with inferences about means, not variances.
To motivate our test of H0, we formulate another null hypothesis that is
equivalent to H0. First, let
N = Σ_{i=1}^{k} ni

denote the sum of the sample sizes and let

µ̄· = Σ_{i=1}^{k} (ni/N) µi

denote the population grand mean. The population grand mean is a weighted
average of the individual population means, each population weighted in
proportion to how many of the observations were drawn from it. If H0 is
true, then µ1 = · · · = µk have a common value, say µ, and the population
grand mean equals that common value:

µ̄· = Σ_{i=1}^{k} (ni/N) µ = (µ/N) Σ_{i=1}^{k} ni = µ.
Next we introduce a quantity that measures how nearly the individual
population means equal the population grand mean. Let
γ = Σ_{i=1}^{k} ni (µi − µ̄·)².    (12.2)
Notice that γ ≥ 0 and that γ = 0 if and only if each µi = µ̄·. But each µi = µ̄·
if and only if each individual mean assumes a common value, which occurs
if and only if the individual means are identical. Thus, H0 is equivalent to
the null hypothesis
H0′ : γ = 0,

which is to be tested against the alternative hypothesis

H1′ : γ > 0.
12.1.2 Testing the Fundamental Null Hypothesis
The idea that underlies our test is to estimate γ and reject H0′ when the
estimate is sufficiently larger than zero. To estimate γ, we need only estimate
the population means that appear in (12.2). The individual sample means,
X̄i· = (1/ni) Σ_{j=1}^{ni} Xij,

are unbiased estimators of the individual population means, and the sample
grand mean,

X̄·· = Σ_{i=1}^{k} (ni/N) X̄i· = Σ_{i=1}^{k} (ni/N) [ (1/ni) Σ_{j=1}^{ni} Xij ] = (1/N) Σ_{i=1}^{k} Σ_{j=1}^{ni} Xij
is an unbiased estimator of the population grand mean. Hence, a natural
estimator of γ is the between-groups or treatment sum of squares,
SSB = Σ_{i=1}^{k} ni (X̄i· − X̄··)²,
the variation of the individual sample means about the sample grand mean.
A useful formula for computing the observed value of SSB from the observed
values of the individual sample means is
ssB = Σ_{i=1}^{k} ni x̄i·² − (1/N) (Σ_{i=1}^{k} ni x̄i·)².
What remains is to determine when SSB is “sufficiently larger than zero.”
We consider two cases, depending on whether or not the common population
variance σ2 is known.
Known Population Variance
Situations in which σ2 is known are rarely encountered, but it is useful to
consider how to proceed in this case. Here is the key fact that we require:
Theorem 12.1 Under the fundamental null hypothesis (12.1), the random
variable

SSB/σ² ∼ χ²(k − 1),

where χ²(ν) denotes the chi-squared distribution with ν degrees of freedom,
introduced in Section 5.5. The quantity k − 1 is the between-groups degrees
of freedom.
Theorem 12.1 suggests a way to determine whether or not SSB is “suf-
ficiently larger than zero.” Under H0,
P (SSB ≥ q) = P (SSB/σ² ≥ q/σ²) = P (Y ≥ q/σ²),

where Y ∼ χ²(k − 1); hence, we can use the chi-squared distribution to
compute significance probabilities and/or critical values.
Example 12.1 Suppose that we draw samples of n1 = 20, n2 = 25,
and n3 = 30 observations from normal populations with unknown means and
common variance σ2 = 9, obtaining sample means of x̄1 = 1.489, x̄2 = 1.712,
and x̄3 = 3.082. To test the fundamental null hypothesis that the individual
population means are identical, we first compute N = 20+25+30 = 75 and
evaluate SSB, obtaining
ssB = (20 · 1.489² + 25 · 1.712² + 30 · 3.082²)
      − (20 · 1.489 + 25 · 1.712 + 30 · 3.082)²/75 ≈ 39.402.
Now we use the R function pchisq to compute a significance probability p:
> 1-pchisq(39.402/9,df=2)
[1] 0.1120287
For conventional levels of significance, p > 0.10 is too large to warrant
rejecting the null hypothesis.
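The arithmetic in Example 12.1 is conveniently scripted in R. The following sketch, with variable names of our choosing, computes ssB and the significance probability.

> # ssB and its chi-squared significance probability for Example 12.1
> n    <- c(20, 25, 30)
> xbar <- c(1.489, 1.712, 3.082)
> N <- sum(n)
> ssB <- sum(n*xbar^2) - sum(n*xbar)^2/N    # about 39.402
> 1 - pchisq(ssB/9, df = length(n) - 1)     # sigma^2 = 9 is assumed known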
Unknown Population Variance
Now we consider the more realistic case of an unknown population variance.
Our development will mimic the case of a known population variance, but
it is complicated by the need to estimate σ2. Recall that, in Section 11.1.2,
we estimated the unknown common population variance of k = 2 normal
populations with the pooled sample variance,
S_P² = [(n1 − 1)S1² + (n2 − 1)S2²] / [(n1 − 1) + (n2 − 1)],

where Si² is the sample variance for sample i. This procedure is easily ex-
tended to the present case of k ≥ 3 by defining the pooled sample variance
as
S_P² = [(n1 − 1)S1² + · · · + (nk − 1)Sk²] / [(n1 − 1) + · · · + (nk − 1)]
     = (1/(n1 + · · · + nk − k)) Σ_{i=1}^{k} (ni − 1) Si²
     = (1/(N − k)) Σ_{i=1}^{k} Σ_{j=1}^{ni} (Xij − X̄i·)².
As in the case of k = 2,
E S_P² = [(n1 − 1) E S1² + · · · + (nk − 1) E Sk²] / [(n1 − 1) + · · · + (nk − 1)]
       = [(n1 − 1)σ² + · · · + (nk − 1)σ²] / [(n1 − 1) + · · · + (nk − 1)]
       = σ²,
so the pooled sample variance is an unbiased estimator of a common popula-
tion variance. It is also consistent and asymptotically efficient for estimating
a common normal variance.
In the previous case of a known population variance, our statistic for
testing the fundamental null hypothesis was SSB/σ2. In the present case of
an unknown population variance, we estimate σ² with S_P². Our test statistic
will turn out to be SSB/S_P² multiplied by a constant.
In order to simplify the formulas that follow, we multiply S_P² by N − k,
obtaining the within-groups or error sum of squares

SSW = (N − k) S_P² = Σ_{i=1}^{k} (ni − 1) Si² = Σ_{i=1}^{k} Σ_{j=1}^{ni} (Xij − X̄i·)².
In contrast to SSB, which measures the variation of the individual sample
means about the sample grand mean, SSW measures the variations of the
individual observations about the corresponding sample means. For com-
pleteness, we also define the total sum of squares,
SST = Σ_{i=1}^{k} Σ_{j=1}^{ni} (Xij − X̄··)²,
which measures the variation of the individual observations about the sample
grand mean.
There is a beautiful relationship between SSB, SSW , and SST , viz.,
Theorem 12.2 SSB + SSW = SST
This formula turns out to be a corollary of the Pythagorean Theorem in
N-dimensional Euclidean space! (In Section 14.2, we will explore a sim-
ilar formula in greater detail.) The reason that our method for testing
the fundamental null hypothesis is called the analysis of variance is that
the method relies on decomposing total squared error into squared error be-
tween groups and squared error within groups. This elegant—and extremely
useful—decomposition is only possible when we use squared error.
The quantities SSB, SSW , and SST are random variables. The following
facts, which subsume Theorem 12.1, summarize the statistical behavior of
these random variables.
Theorem 12.3 The random variable

SST /σ² ∼ χ²(N − 1).

The quantity N − 1 is the total degrees of freedom.

Under the fundamental null hypothesis (12.1), SSB and SSW are inde-
pendent random variables and

SSB/σ² ∼ χ²(k − 1)  and  SSW /σ² ∼ χ²(N − k).

The quantity k − 1 is the between-groups degrees of freedom and the quantity
N − k is the within-groups degrees of freedom.
We have already remarked that the random variable
SSB/S_P² = SSB / [SSW /(N − k)]

would seem to be a natural statistic for testing the fundamental null hy-
pothesis. Although sound in theory, this approach fails in practice because
the distribution of SSB/S_P² is not tractable. Fortunately, this approach can
be salvaged by a trivial modification. Applying the definition of Fisher's F
distribution in Section 5.5 to the independent χ² random variables SSB/σ²
and SSW /σ², we discover
Corollary 12.1 Under the fundamental null hypothesis (12.1),

F = [SSB/σ² / (k − 1)] / [SSW /σ² / (N − k)] = [SSB/(k − 1)] / [SSW /(N − k)] ∼ F(k − 1, N − k),

where F(ν1, ν2) denotes Fisher's F distribution with ν1 and ν2 degrees of
freedom.
The random variable F is the desired test statistic; notice that

F = [SSB/(k − 1)] / [SSW /(N − k)] = (1/(k − 1)) · SSB/S_P².
Appealing to Corollary 12.1, we see that the ANOVA F-test of the fun-
damental null hypothesis of equal population means is to reject H0 at sig-
nificance level α if and only if the significance probability
p = P(Y ≥ f) ≤ α,
where f denotes the observed value of F and Y ∼ F(k−1, N −k). Of course,
we can also formulate the test using critical values instead of significance
probabilities, in which case we reject H0 at significance level α if and only if
f ≥ q, where q is the 1 − α quantile of the F(k − 1, N − k) distribution.
Example 12.2 Suppose that we draw samples of n1 = 25, n2 = 20,
and n3 = 20 observations from normal populations with unknown means
and unknown common variance, obtaining the following sample quantities:
         i = 1       i = 2       i = 3
ni       25          20          20
x̄i·      9.783685    10.908170   15.002820
si²      29.89214    18.75800    51.41654
To test the null hypothesis of equal population means at significance level
α = 0.05, we begin by computing the observed values of SSB and SSW ,
obtaining ssB ≈ 322.4366 and

ssW = (25 − 1) · 29.89214 + (20 − 1) · 18.75800 + (20 − 1) · 51.41654 ≈ 2050.7280.
It follows that the observed value of the test statistic is
f = [ssB/(k − 1)] / [ssW /(N − k)] ≈ (322.4366/2) / (2050.7280/62) ≈ 4.874141.
Now we use the R function pf to compute a significance probability p:
> 1-pf(4.874141,df1=2,df2=62)
[1] 0.01081398
Because p < α, we reject the null hypothesis. Equivalently, we might use
the R function qf to compute a critical value q:
> qf(1-.05,df1=2,df2=62)
[1] 3.145258
Because f > q, we reject the null hypothesis.
The information related to an ANOVA F-test is usually collected in an
ANOVA table:
Source of    Sum of    Degrees of    Mean          Test        Significance
Variation    Squares   Freedom       Squares       Statistic   Probability
Between      SSB       k − 1         MSB           F           p
Within       SSW       N − k         MSW = S_P²
Total        SST       N − 1
Note that we have introduced new notation for the mean squares, MSB =
SSB/(k−1) and MSW = SSW /(N−k), allowing us to write F = MSB/MSW .
It is also helpful to examine R2 = SSB/SST , the proportion of total varia-
tion “explained” by differences in the sample means.
Example 12.2 (continued) For the ANOVA performed in Example
12.2, the ANOVA table is
Source SS df MS F p
Between 322.4366 2 161.21830 4.874141 0.01081398
Within 2050.7280 62 33.07625
Total 2373.1640 64
The proportion of total variation explained by differences in the sample
means is 322.4366/2373.1640 ≈ 0.1358678. Thus, although there is sufficient
variation between the sample means for us to infer that the population means
are not identical, this variation accounts for a fairly small proportion of the
total variation in the data.
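The entire ANOVA table in Example 12.2 can be reproduced in R from the group sizes, means, and variances. A minimal sketch, using our own variable names:

> # Oneway ANOVA F-test computed from summary statistics (Example 12.2)
> n    <- c(25, 20, 20)
> xbar <- c(9.783685, 10.908170, 15.002820)
> s2   <- c(29.89214, 18.75800, 51.41654)
> N <- sum(n); k <- length(n)
> ssB <- sum(n*xbar^2) - sum(n*xbar)^2/N
> ssW <- sum((n-1)*s2)
> f <- (ssB/(k-1)) / (ssW/(N-k))          # about 4.874
> 1 - pf(f, df1 = k-1, df2 = N-k)         # about 0.0108
> ssB/(ssB + ssW)                         # R-squared, about 0.136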
12.1.3 Planned Comparisons
Rejecting the fundamental null hypothesis of equal population means leaves
numerous alternatives. Typically, a scientist would like to say more than
simply “H0 : µ1 = · · · = µk is false.” Concluding that the population
means are not identical naturally invites investigation of how they differ.
Sections 12.1.3 and 12.1.4 describe several useful inferential procedures for
performing more elaborate comparisons of population means. Section 12.1.3
describes two procedures that are appropriate when the scientist has deter-
mined specific comparisons of interest in advance of the experiment. For
reasons that will become apparent, this is the preferred case. However, it is
often the case that a specific comparison occurs to a scientist after examin-
ing the results of the experiment. Although statistical inference in such cases
is rather tricky, a variety of procedures for a posteriori inference have been
developed. Two such procedures are described in Section 12.1.4.
Inspired by K.A. Brownlee’s classic statistics text,1 we motivate the con-
cept of a planned comparison by considering a famous physics experiment.
Example 12.3 Heyl (1930) attempted to determine the gravitational
constant using k = 3 different materials—gold, platinum, and glass. It seems
natural to ask not just if the three materials lead to identical determinations
of the gravitational constant, by testing H0 : µ1 = µ2 = µ3, but also to ask:
1. If glass differs from the two heavy metals, by testing
H0 : (µ1 + µ2)/2 = µ3 vs. H1 : (µ1 + µ2)/2 ≠ µ3,

or, equivalently,

H0 : µ1 + µ2 = 2µ3 vs. H1 : µ1 + µ2 ≠ 2µ3,

or, equivalently,

H0 : µ1 + µ2 − 2µ3 = 0 vs. H1 : µ1 + µ2 − 2µ3 ≠ 0,

or, equivalently,

H0 : θ1 = 0 vs. H1 : θ1 ≠ 0,

where θ1 = µ1 + µ2 − 2µ3.
2. If the two heavy metals differ from each other, by testing
H0 : µ1 = µ2 vs. H1 : µ1 ≠ µ2,
¹K.A. Brownlee, Statistical Theory and Methodology in Science and Engineering, Second
Edition, John Wiley & Sons, 1965.
or, equivalently,

H0 : µ1 − µ2 = 0 vs. H1 : µ1 − µ2 ≠ 0,

or, equivalently,

H0 : θ2 = 0 vs. H1 : θ2 ≠ 0,

where θ2 = µ1 − µ2.
Notice that both of the planned comparisons proposed in Example 12.3
have been massaged into testing a null hypothesis of the form θ = 0. For
this construction to make sense, θ must have a special structure, which
statisticians identify as a contrast.
Definition 12.1 A contrast is a linear combination (weighted sum) of the
k population means,
θ = Σ_{i=1}^{k} ci µi,

for which Σ_{i=1}^{k} ci = 0.
Example 12.3 (continued) In the contrasts suggested previously,
1. θ1 = 1 · µ1 + 1 · µ2 + (−2) · µ3 and 1 + 1 − 2 = 0; and
2. θ2 = 1 · µ1 + (−1) · µ2 + 0 · µ3 and 1 − 1 + 0 = 0.
We usually identify different contrasts by their coefficients, e.g., c = (1, 1, −2)
or c = (1, −1, 0).
The methods of Section 12.1.2 are easily extended to the problem of
testing a single contrast, H0 : θ = 0 versus H1 : θ ≠ 0. In Definition 12.1,
each population mean µi can be estimated by the unbiased estimator X̄i·;
hence, an unbiased estimator of θ is

θ̂ = Σ_{i=1}^{k} ci X̄i·.

We will reject H0 if θ̂ is observed sufficiently far from zero.
Once again, we rely on a squared error criterion and ask if the observed
quantity (θ̂)2 is sufficiently far from zero. However, the quantity (θ̂)2 is not
a satisfactory measure of departure from H0 : θ = 0 because its magnitude
depends on the magnitude of the coefficients in the contrast. To remove this
dependency, we form a ratio that does not depend on how the coefficients
were scaled. The sum of squares associated with the contrast θ is the random
variable
SSθ = (Σ_{i=1}^{k} ci X̄i·)² / (Σ_{i=1}^{k} ci²/ni).
The following facts about the distribution of SSθ lead to a test of H0 :
θ = 0 versus H1 : θ ≠ 0.
Theorem 12.4 Under the fundamental null hypothesis H0 : µ1 = · · · = µk,
SSθ is independent of SSW , SSθ/σ² ∼ χ²(1), and

F(θ) = [SSθ/σ² / 1] / [SSW /σ² / (N − k)] = SSθ / [SSW /(N − k)] ∼ F(1, N − k).
The F-test of H0 : θ = 0 is to reject H0 if and only if
p = PH0 (F(θ) ≥ f(θ)) ≤ α,
i.e., if and only if
f(θ) ≥ q = qf(1-α,df1=1,df2=N-k),
where f(θ) denotes the observed value of F(θ).
Example 12.3 (continued) Heyl (1930) collected the following data:
Gold 83 81 76 78 79 72
Platinum 61 61 67 67 64
Glass 78 71 75 72 74
Applying the methods of Section 12.1.2, we obtain the following ANOVA
table:
Source SS df MS F p
Between 565.1 2 282.6 26.1 0.000028
Within 140.8 13 10.8
Total 705.9 15
To test H0 : θ1 = 0 versus H1 : θ1 ≠ 0, we first compute

ssθ1 = [1 · x̄1· + 1 · x̄2· + (−2) · x̄3·]² / [1²/6 + 1²/5 + (−2)²/5] ≈ 29.16667,

then

f (θ1) = ssθ1 / [ssW /(N − k)] ≈ 29.16667 / [140.8333/(16 − 3)] ≈ 2.692308.
Finally, we use the R function pf to compute a significance probability p:
> 1-pf(2.692308,df1=1,df2=13)
[1] 0.1247929
Because p > 0.05, we decline to reject the null hypothesis at significance
level α = 0.05. Equivalently, we might use the R function qf to compute a
critical value q:
> qf(1-.05,df1=1,df2=13)
[1] 4.667193
Because f < q, we decline to reject the null hypothesis.
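These computations can also be scripted. Here is a minimal R sketch of the contrast F-test applied to Heyl's data; the variable names, and the arrangement of the samples in a list, are our own.

> # F-test of the contrast theta1 with coefficients c = (1, 1, -2), Heyl's data
> gold     <- c(83, 81, 76, 78, 79, 72)
> platinum <- c(61, 61, 67, 67, 64)
> glass    <- c(78, 71, 75, 72, 74)
> samples <- list(gold, platinum, glass)
> n <- sapply(samples, length); xbar <- sapply(samples, mean)
> N <- sum(n); k <- length(samples)
> ssW <- sum(sapply(samples, function(s) sum((s - mean(s))^2)))
> cc <- c(1, 1, -2)                             # contrast coefficients
> ss.theta <- sum(cc*xbar)^2 / sum(cc^2/n)      # about 29.17
> f.theta <- ss.theta / (ssW/(N-k))             # about 2.69
> 1 - pf(f.theta, df1 = 1, df2 = N-k)           # about 0.125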
In practice, one rarely tests a single contrast. However, testing multiple
contrasts involves more than testing each contrast as though it was the
only contrast. Entire books have been devoted to the problem of multiple
comparisons; the remainder of Section 12.1 describes four popular procedures
for testing multiple contrasts.
Orthogonal Contrasts
When it can be used, the method of orthogonal contrasts is generally pre-
ferred. It is quite elegant, but has certain limitations. We begin by explain-
ing what it means for contrasts to be orthogonal.
Definition 12.2 Two contrasts with coefficient vectors (c1, . . . , ck) and
(d1, . . . , dk) are orthogonal if and only if
Σ_{i=1}^{k} ci di / ni = 0.
A collection of contrasts is mutually orthogonal if and only if each pair of
contrasts in the collection is orthogonal.
Notice that, if n1 = · · · = nk, then the orthogonality condition simplifies
to
Σ_{i=1}^{k} ci di = 0.
Students who know some linear algebra should recognize that this condition
states that the dot product between the vectors c and d vanishes, i.e., that
the vectors c and d are orthogonal (perpendicular) to each other.
Example 12.3 (continued) Whether or not two contrasts are orthog-
onal depends not only on their coefficient vectors, but also on the size of the
samples drawn from each population.
• Suppose that Heyl (1930) had collected samples of equal size for each
of the three materials that he used. If n1 = n2 = n3, then θ1 and θ2
are orthogonal because
1 · 1 + 1 · (−1) + (−2) · 0 = 0.
• In fact, Heyl (1930) collected samples with sizes n1 = 6 and n2 = n3 =
5. In this case, θ1 and θ2 are not orthogonal because
(1 · 1)/6 + (1 · (−1))/5 + ((−2) · 0)/5 = 1/6 − 1/5 ≠ 0.
However, θ1 is orthogonal to θ3 = 18µ1 − 17µ2 − µ3 because
(1 · 18)/6 + (1 · (−17))/5 + ((−2) · (−1))/5 = 3 − 3.4 + 0.4 = 0.
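In R, checking orthogonality is a one-line computation. A small sketch using Heyl's sample sizes and the contrasts above:

> # Orthogonality check: the sum of c_i * d_i / n_i should vanish
> n <- c(6, 5, 5)
> sum(c(1, 1, -2) * c(1, -1, 0) / n)     # theta1 vs. theta2: 1/6 - 1/5, not orthogonal
> sum(c(1, 1, -2) * c(18, -17, -1) / n)  # theta1 vs. theta3: 0, orthogonal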
It turns out that the number of mutually orthogonal contrasts cannot
exceed k − 1. Obviously, this fact limits the practical utility of the method;
however, families of mutually orthogonal contrasts have two wonderful prop-
erties that commend their use.
First, any family of k − 1 mutually orthogonal contrasts partitions SSB
into k − 1 separate components,
SSB = SSθ1 + · · · + SSθk−1,
each with one degree of freedom. This information is usually incorporated
into an expanded ANOVA table, as in. . .
Example 12.3 (continued) In the case of Heyl’s (1930) data, the
orthogonal contrasts θ1 and θ3 partition the between-groups sum-of-squares:
Source SS df MS F p
Between 565.1 2 282.6 26.1 0.000028
θ1 29.2 1 29.2 2.7 0.124793
θ3 535.9 1 535.9 49.5 0.000009
Within 140.8 13 10.8
Total 705.9 15
Testing the fundamental null hypothesis, H0 : µ1 = µ2 = µ3, results in
a tiny significance probability, leading us to conclude that the population
means are not identical. The decomposition of the variation between groups
into contrasts θ1 and θ3 provides insight into the differences between the
population means. Testing the null hypothesis, H0 : θ1 = 0, results in a
large significance probability, leading us to conclude that the heavy metals
do not, in tandem, differ from glass. However, testing the null hypothesis,
H0 : θ3 = 0, results in a tiny significance probability, leading us to conclude
that the heavy metals do differ from each other. This is only possible if
the glass mean lies between the gold and platinum means. For this simple
example, our conclusions are easily checked by examining the raw data.
The second wonderful property of mutually orthogonal contrasts is that
tests of mutually orthogonal contrasts are mutually independent. As we
shall demonstrate, this property provides us with a powerful way to address
a crucial difficulty that arises whenever we test multiple hypotheses. The
difficulty is as follows. When testing a single null hypothesis that is true,
there is a small chance (α) that we will falsely reject the null hypothesis
and commit a Type I error. When testing multiple null hypotheses, each
of which are true, there is a much larger chance that we will falsely reject
at least one of them. We desire control of this family-wide error rate, often
abbreviated FWER.
Definition 12.3 The family-wide error rate (FWER) of a family of con-
trasts is the probability under the fundamental null hypothesis H0 : µ1 =
· · · = µk of falsely rejecting at least one null hypothesis.
The fact that tests of mutually orthogonal contrasts are mutually in-
dependent allows us to deduce a precise relation between the significance
level(s) of the individual tests and the FWER.
1. Let Er denote the event that H0 : θr = 0 is falsely rejected. Then
P(Er) = α is the rate of Type I error for an individual test.
2. Let E denote the event that at least one Type I error is committed,
i.e.,
E = ∪_{r=1}^{k−1} Er.

The family-wide rate of Type I error is FWER = P(E).

3. The event that no Type I errors are committed is

E^c = ∩_{r=1}^{k−1} Er^c,

and the probability of this event is P(E^c) = 1 − FWER.

4. By independence,

1 − FWER = P (E^c) = P (E1^c) × · · · × P (Ek−1^c) = (1 − α)^(k−1);

hence,

FWER = 1 − (1 − α)^(k−1).
Notice that FWER > α, i.e., the family rate of Type I error is greater
than the error rate for an individual test. For example, if k = 3 and α = 0.05,
then
FWER = 1 − (1 − .05)² = 0.0975.
This phenomenon is sometimes called “alpha slippage.” To protect against
alpha slippage, we usually prefer to specify the family rate of Type I error
that will be tolerated, then compute a significance level that will ensure the
specified family rate. For example, if k = 3 and we desire FWER = 0.05,
then we solve
0.05 = 1 − (1 − α)²
to obtain a significance level of
α = 1 − √0.95 ≈ 0.0253.
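Solving for the per-test significance level is a one-line computation in R. A small sketch, under the assumption of k − 1 mutually orthogonal (hence independent) tests:

> # Per-test alpha that yields a specified FWER for k-1 independent tests
> fwer <- 0.05; k <- 3
> 1 - (1 - fwer)^(1/(k - 1))    # about 0.0253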
Bonferroni t-Tests
It is often the case that one desires to test contrasts that are not mutually
orthogonal. This can happen with a small family of contrasts. For example,
suppose that we want to compare a control mean µ1 to each of two treatment
means, µ2 and µ3, in which case the natural contrasts have coefficient vectors
c = (1, −1, 0) and d = (1, 0, −1). In this case, the orthogonality condition
simplifies to 1/n1 = 0, which is impossible. Furthermore, as we have noted,
families of more than k − 1 contrasts cannot be mutually orthogonal.
Statisticians have devised a plethora of procedures for testing multiple
contrasts that are not mutually orthogonal. Many of these procedures ad-
dress the case of multiple pairwise contrasts, i.e., contrasts for which each
coefficient vector has exactly two nonzero components. We describe one such
procedure that relies on Bonferroni’s inequality.
Suppose that we plan m pairwise comparisons. These comparisons are
defined by contrasts θ1, . . . , θm, each of the form µi − µj, not necessarily
mutually orthogonal. Notice that each H0 : θr = 0 versus H1 : θr 6= 0 is a
normal 2-sample location problem with equal variances. From this observa-
tion, the following facts can be deduced.
Theorem 12.5 Under the fundamental null hypothesis H0 : µ1 = · · · = µk,
Z = (X̄i· − X̄j·) / √[(1/ni + 1/nj) σ²] ∼ N(0, 1)

and

T (θr) = (X̄i· − X̄j·) / √[(1/ni + 1/nj) MSW ] ∼ t(N − k).
From Theorem 12.5, the t-test of H0 : θr = 0 is to reject if and only if
p = P (|T (θr)| ≥ |t(θr)|) ≤ α,
i.e., if and only if
|t (θr)| ≥ q = qt(1-α/2,df=N-k),
where t(θr) denotes the observed value of T(θr). This t-test is virtually
identical to Student’s 2-sample t-test, described in Section 11.1.2, except
that it pools all k samples to estimate the common variance instead of only
pooling the two samples that are being compared.
At this point, you may recall that Section 11.1 strongly discouraged the
use of Student’s 2-sample t-test, which assumes a common population vari-
ance. Instead, we recommended Welch’s approximate t-test. In the present
case, our test of the fundamental null hypothesis H0 : µ1 = · · · = µk has
already imposed the assumption of a common population variance, so our
use of the T statistic in Theorem 12.5 is theoretically justified. But this
justification is rather too glib, as it merely begs the question of why we as-
sumed a common population variance in the first place. The general answer
to this question is that the ANOVA methodology is extremely powerful and
that comparable procedures in the case of unequal population variances may
not exist. (Fortunately, ANOVA often provides useful insights even when
its assumptions are violated. In such cases, however, one should interpret
significance probabilities with extreme caution.) In the present case, a good
procedure does exist, viz., the pairwise application of Welch’s approximate
t-test. The following discussion of how to control the family-wide error rate
in such cases applies equally to either type of pairwise t-test.
Unless the pairwise contrasts are mutually orthogonal, we cannot use
the multiplication rule for independent events to compute the family rate of
Type I error. However, Bonferroni’s inequality states that
FWER = P(E) = P(E_1 ∪ · · · ∪ E_m) ≤ P(E_1) + · · · + P(E_m) = mα;
hence, we can ensure that the family rate of Type I error is no greater than a
specified FWER by testing each contrast at significance level α = FWER/m.
Example 12.3 (continued) Instead of planning θ1 and θ3, suppose
that we had planned θ4 and θ5, defined by coefficient vectors c = (−1, 0, 1)
and d = (0, −1, 1) respectively. To test θ4 and θ5 with a family-wide error
rate of FWER ≤ 0.10, we first compute

t(θ4) = (x̄3· − x̄1·) / √[(1/n3 + 1/n1) msW] ≈ −2.090605

and

t(θ5) = (x̄3· − x̄2·) / √[(1/n3 + 1/n2) msW] ≈ 4.803845,
resulting in the following significance probabilities:
> 2*pt(-2.090605,df=13)
[1] 0.0567719
> 2*pt(-4.803845,df=13)
[1] 0.0003444588
There are m = 2 pairwise comparisons. To ensure FWER ≤ 0.10, we com-
pare the significance probabilities to α = 0.10/2 = 0.05, which leads us to
reject H0 : θ5 = 0 and to decline to reject H0 : θ4 = 0.
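Equivalently, one can multiply each significance probability by m and compare the adjusted values to the desired family-wide rate. This is a sketch using R's p.adjust function, not part of the text's session:

> p <- c(0.0567719, 0.0003444588)
> p.adjust(p, method="bonferroni")   # pmin(1, m*p); compare to FWER = 0.10

Only the second adjusted value (about 0.00069) falls below 0.10, so the conclusions are unchanged.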
What do we lose by using Bonferroni’s inequality instead of the multipli-
cation rule? Without the assumption of independence, we must be slightly
more conservative in choosing a significance level that will ensure a specified
family-wide rate of error. For the same FWER, Bonferroni’s inequality leads
to a slightly smaller α than does the multiplication rule. The discrepancy
grows as m increases.
12.1.4 Post Hoc Comparisons
We now consider situations in which we determine that a comparison is of
interest after inspecting the data. For example, suppose that we had decided
to compare gold to platinum after inspecting Heyl’s (1930) data. This ought
to strike you as a form of cheating. Almost every randomly generated data
set will have an appealing pattern in it that may draw the attention of an
interested observer. To allow such patterns to determine what the scientist
will investigate is to invite abuse. Fortunately, statisticians have devised
procedures that protect ethical scientists from the heightened risk of Type I
error when the null hypothesis was constructed after the data were examined.
The present section describes two such procedures.
Bonferroni t-Tests
To fully appreciate the distinction between planned and post hoc compar-
isons, it is highly instructive to examine the method of Bonferroni t-tests.
Suppose that only pairwise comparisons are of interest. Because we are test-
ing after we have had the opportunity to inspect the data (and therefore
to construct the contrasts that appear to be nonzero), we suppose that all
pairwise contrasts were of interest a priori. Hence, whatever the number of
pairwise contrasts actually tested a posteriori, we set
m = (k choose 2) = k(k − 1)/2
and proceed as before.
The difference between planned and post hoc comparisons is especially
sobering when k is large. For example, suppose that we desire that the
family-wide error rate does not exceed 0.10 when testing two pairwise con-
trasts among k = 10 groups. If the comparisons were planned, then m = 2
and we can perform each test at significance level α = 0.10/2 = 0.05. How-
ever, if the comparisons were constructed after examining the data, then
m = 45 and we must perform each test at significance level α = 0.10/45 ≈
0.0022. Obviously, much stronger evidence is required to reject the same
null hypothesis when the comparison is chosen after examining the data.
Scheffé F-Tests
The reasoning that underlies Scheffé F-Tests for post hoc comparisons is
analogous to the reasoning that underlies Bonferroni t-tests for post hoc
comparisons. To accommodate the possibility that a general contrast was
constructed after examining the data, Scheffé’s procedure is predicated on
the assumption that all possible contrasts were of interest a priori. This
makes Scheffé’s procedure the most conservative of all multiple comparison
procedures.
Scheffé’s F-test of H0 : θr = 0 versus H1 : θr ≠ 0 is to reject H0 if and
only if
p = 1 − pf(f(θr)/(k−1), df1=k−1, df2=N−k) ≤ α,
i.e., if and only if
f(θr)/(k − 1) ≥ q = qf(1−α, k−1, N−k),
where f(θr) denotes the observed value of the F(θr) defined for the method
of planned orthogonal contrasts. It can be shown that, no matter how many
H0 : θr = 0 are tested by this procedure, the family-wide rate of Type I error
is no greater than α.
Example 12.3 (continued) Let θ6 = µ1 − µ3. Scheffé’s F-test pro-
duces the following results:
Source f(θr)/2 p
θ1 1.3 0.294217
θ2 25.3 0.000033
θ3 24.7 0.000037
θ6 2.2 0.151995
For the first three comparisons, our conclusions are not appreciably affected
by whether the contrasts were constructed before or after examining the
data. However, if θ6 had been planned, we would have obtained f(θ6) = 4.4
and p = 0.056772, which might easily lead to a different conclusion.
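The entries for θ6 can be reproduced directly; this sketch uses the rounded values displayed above, so the results agree with the reported p-values only approximately:

> 1 - pf(2.2, df1=2, df2=13)   # Scheffe p-value for theta6, using f(theta6)/(k-1) = 2.2
> 1 - pf(4.4, df1=1, df2=13)   # p-value if theta6 had been planned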
12.2 The Case of a General Shift Family
12.2.1 The Kruskal-Wallis Test
12.3 The Behrens-Fisher Problem
12.4 Exercises
1. Jean Kerr devoted an entire chapter of Please Don’t Eat the Daisies
(1959) to the subject of dieting, observing that. . .
“Today, with the science of nutrition advancing so rapidly,
there is plenty of food for conversation, if for nothing else.
We have the Rockefeller diet, the Mayo diet, high-protein
diets, low-protein diets, “blitz” diets which feature cottage
cheese and something that tastes like very thin sandpaper,
and—finally—a liquid diet that duplicates all the rich, nour-
ishing goodness of mother’s milk. I have no way of know-
ing which of these is the most efficacious for losing weight,
but there’s no question in my mind that as a conversation-
stopper the “mother’s milk diet” is quite a ways out ahead.”
For her master’s thesis, a nutrition student at the University of Arizona
decides to compare several weight loss strategies. She recruits 140
moderately obese adult women and randomly assigns each woman to
one of the following diets: Rockefeller, Mayo, Atkins (high-protein), a
low-protein diet, a blitz diet, a liquid diet, and—as a control—Aunt
Jean’s marshmallow fudge diet. Each woman is weighed before dieting,
asked to follow the prescribed diet for eight weeks, then weighed again.
The resulting data will be analyzed using the analysis of variance and
related statistical techniques.
(a) This is a k-sample problem. What is the value of k?
(b) What null hypothesis is tested by an analysis of variance? (Your
answer should specify relations between certain population pa-
rameters. Be sure to define these parameters!)
(c) How many pairwise comparisons are possible?
(d) The student is especially interested in three pairwise comparisons:
Atkins versus low-protein, low-protein versus fudge, and fudge
versus liquid. Specify contrasts that correspond to each of these
comparisons.
(e) Are the preceding contrasts orthogonal? Why or why not?
2. As part of her senior thesis, a William & Mary physics major decides
to repeat Heyl’s (1930) experiment for determining the gravitational
constant using 4 different materials: silver, copper, topaz, and quartz.
She plans to test 10 specimens of each material.
(a) Three comparisons are planned:
i. Metal (silver & copper) versus Gem (topaz & quartz)
ii. Silver versus Copper
iii. Topaz versus Quartz
What contrasts correspond to these comparisons? Are they or-
thogonal? Why or why not? If the desired family rate of Type
I error is 0.05, then what significance level should be used for
testing the null hypotheses H0 : θr = 0?
(b) After analyzing the data, an ANOVA table is constructed. Com-
plete the table from the information provided.
Source SS df MS F p
Between
θ1 0.001399
θ2 0.815450
θ3 0.188776
Within 9.418349
Total
(c) Referring to the above table, explain what conclusion the student
should draw about each of her planned comparisons.
(d) Assuming that the ANOVA assumption of homoscedasticity is
warranted, use the above table to estimate the common popula-
tion variance.
3. R. R. Sokal observed 25 females of each of three genetic lines (RS, SS,
NS) of the fruitfly Drosophila melanogaster and recorded the number
of eggs laid per day by each female for the first 14 days of her life.
The lines labelled RS and SS were selectively bred for resistance and
for susceptibility to the insecticide DDT. A nonselected control line
is labelled NS. The purpose of the experiment was to investigate the
following research questions:
• Do the two selected lines (RS and SS) differ in fecundity from the
nonselected line (NS)?
• Does the line selected for resistance (RS) differ in fecundity from
the line selected for susceptibility (SS)?
The data are presented in Table 12.1.
RS 12.8 21.6 14.8 23.1 34.6 19.7 22.6 29.6 16.4 20.3
29.3 14.9 27.3 22.4 27.5 20.3 38.7 26.4 23.7 26.1
29.5 38.6 44.4 23.2 23.6
SS 38.4 32.9 48.5 20.9 11.6 22.3 30.2 33.4 26.7 39.0
12.8 14.6 12.2 23.1 29.4 16.0 20.1 23.3 22.9 22.5
15.1 31.0 16.9 16.1 10.8
NS 35.4 27.4 19.3 41.8 20.3 37.6 36.9 37.3 28.2 23.4
33.7 29.2 41.7 22.6 40.4 34.4 30.4 14.9 51.8 33.8
37.9 29.5 42.4 36.6 47.4
Table 12.1: Fecundity of Female Fruitflies
(a) Use side-by-side boxplots and normal probability plots to investi-
gate the ANOVA assumptions of normality and homoscedasticity.
Do these assumptions seem plausible? Why or why not?
(b) Construct contrasts that correspond to the research questions
framed above. Verify that these contrasts are orthogonal. At
what significance level should the contrasts be tested in order to
maintain a family rate of Type I error equal to 5%?
(c) Use ANOVA and the method of orthogonal contrasts to construct
an ANOVA table. State the null and alternative hypotheses that
are tested by these methods. For each null hypothesis, state
whether or not it should be rejected. (Use α = 0.05 for the
ANOVA hypothesis and the significance level calculated above
for the contrast hypotheses.)
4. A number of Byzantine coins were discovered in Cyprus. These coins
were minted during the reign of King Manuel I, Comnenus (1143–1180).
It was determined that n1 = 9 of these coins were minted in an early
coinage, n2 = 7 were minted several years later, n3 = 4 were minted
in a third coinage, and n4 = 7 were minted in a fourth coinage.
The silver content (percentage) of each coin was measured, with the
results presented in Table 12.2.
1 5.9 6.8 6.4 7.0 6.6 7.7 7.2 6.9 6.2
2 6.9 9.0 6.6 8.1 9.3 9.2 8.6
3 4.9 5.5 4.6 4.5
4 5.3 5.6 5.5 5.1 6.2 5.8 5.8
Table 12.2: Silver Content of Byzantine Coins
(a) Investigate the ANOVA assumptions of normality and homoscedas-
ticity. Do these assumptions seem plausible? Why or why not?
(b) Construct an ANOVA table. State the null and alternative hy-
potheses tested by this method. Should the null hypothesis be
rejected at the α = 0.10 level?
(c) Examining the data, it appears that coins minted early in King
Manuel’s reign (the first two coinages) tended to contain more
silver than coins minted later in his reign (the last two coinages).
Construct a contrast that is suitable for investigating if this is
the case. State appropriate null and alternative hypotheses and
test them using Scheffé’s F-test for multiple comparisons with a
significance level of 5%.
5. R. E. Dolkart and colleagues compared antibody responses in normal
and alloxan diabetic mice. Three groups of mice were studied: normal,
alloxan diabetic, and alloxan diabetic treated with insulin. Several
comparisons are of interest:
• Does the antibody response of alloxan diabetic mice differ from
the antibody response of normal mice?
• Does the antibody response of alloxan diabetic mice treated with
insulin differ from the antibody response of normal mice?
• Does treating alloxan diabetic mice with insulin affect their anti-
body response?
Table 12.3 contains the measured amounts of nitrogen-bound bovine
serum albumen produced by the mice.
(a) Using the above data, investigate the ANOVA assumptions of nor-
mality and homoscedasticity. Do these assumptions seem plausi-
ble for these data? Why or why not?
Normal 156 282 197 297 116 127 119 29 253 122
349 110 143 64 26 86 122 455 655 14
Alloxan 391 46 469 86 174 133 13 499 168 62
127 276 176 146 108 276 50 73
Alloxan 82 100 98 150 243 68 228 131 73 18
+insulin 20 100 72 133 465 40 46 34 44
Table 12.3: Antibody Responses of Diabetic Mice
(b) Now transform the data by taking the square root of each mea-
surement. Using the transformed data, investigate the ANOVA
assumptions of normality and homoscedasticity. Do these as-
sumptions seem plausible for the transformed data? Why or why
not?
(c) Using the transformed data, construct an ANOVA table. State
the null and alternative hypotheses tested by this method. Should
the null hypothesis be rejected at the α = 0.05 level?
(d) Using the transformed data, construct suitable contrasts for inves-
tigating the research questions framed above. State appropriate
null and alternative hypotheses and test them using the method
of Bonferroni t-tests. At what significance level should these hy-
potheses be tested in order to maintain a family rate of Type I
error equal to 5%? Which null hypotheses should be rejected?
Chapter 13
Association
13.1 Categorical Random Variables
13.2 Normal Random Variables
The continuous random variables (X, Y ) define a function that assigns a pair
of real numbers to each experimental outcome. Let
B = [a, b] × [c, d] ⊂ ℜ2
be a rectangular set of such pairs and suppose that we want to compute
P ((X, Y ) ∈ B) = P (X ∈ [a, b], Y ∈ [c, d]) .
Just as we compute P(X ∈ [a, b]) using the pdf of X, so we compute
P((X, Y ) ∈ B) using the joint probability density function of (X, Y ). To
do so, we must extend the concept of area under the graph of a function of
one variable to the concept of volume under the graph of a function of two
variables.
Theorem 13.1 Let X be a continuous random variable with pdf fx and let
Y be a continuous random variable with pdf fy. In this context, fx and fy are
called the marginal pdfs of (X, Y ). Then there exists a function f : ℜ2 → ℜ,
the joint pdf of (X, Y ), such that
P((X, Y ) ∈ B) = VolumeB(f) = ∫_a^b ∫_c^d f(x, y) dy dx        (13.1)
for all rectangular subsets B. If X and Y are independent, then
f(x, y) = fx(x)fy(y).
Remark: If (13.1) is true for all rectangular subsets of ℜ2, then it is
true for all subsets in the sigma-field generated by the rectangular subsets.
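As a small illustration of the independence case in Theorem 13.1 (a sketch with an arbitrarily chosen rectangle, not an example from the text): if X and Y are independent Normal(0, 1) random variables, then the joint pdf factors and so does the probability of any rectangle.

> # P((X,Y) in [-1,1] x [0,2]) for independent standard normal X and Y
> (pnorm(1) - pnorm(-1)) * (pnorm(2) - pnorm(0))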
We can think of the joint pdf as a function that assigns an elevation to a
point identified by two coordinates, longitude (x) and latitude (y). Noting
that topographic maps display elevations via contours of constant elevation,
we can describe a joint pdf by identifying certain of its contours, i.e., subsets
of ℜ2 on which f(x, y) is constant.
Definition 13.1 Let f denote the joint pdf of (X, Y ) and fix c > 0. Then

{(x, y) ∈ ℜ² : f(x, y) = c}

is a contour of f.
13.2.1 Bivariate Normal Distributions
Suppose that X ∼ Normal(0, 1) and Y ∼ Normal(0, 1), not necessarily in-
dependent. To measure the degree of dependence between X and Y , we
consider the quantity E(XY ).
• If there is a positive association between X and Y , then experimental
outcomes that have. . .
– positive values of X will tend to have positive values of Y , so XY
will tend to be positive;
– negative values of X will tend to have negative values of Y , so
XY will tend to be positive.
Hence, E(XY ) > 0 indicates positive association.
• If there is a negative association between X and Y , then experimental
outcomes that have. . .
– positive values of X will tend to have negative values of Y , so
XY will tend to be negative;
– negative values of X will tend to have positive values of Y , so
XY will tend to be negative.
Hence, E(XY ) < 0 indicates negative association.
If X ∼ Normal(µx, σx²) and Y ∼ Normal(µy, σy²), then we measure depen-
dence after converting to standard units:

Definition 13.2 Let µx = EX and σx² = Var X < ∞. Let µy = EY and
σy² = Var Y < ∞. The population product-moment correlation coefficient of
X and Y is

ρ = ρ(X, Y ) = E[((X − µx)/σx) ((Y − µy)/σy)].
The product-moment correlation coefficient has the following properties:
Theorem 13.2 If X and Y have finite variances, then

1. −1 ≤ ρ ≤ 1.

2. ρ = ±1 if and only if

(Y − µy)/σy = ±(X − µx)/σx,

in which case Y is completely determined by X.

3. If X and Y are independent, then ρ = 0.

4. If X and Y are normal random variables for which ρ = 0, then X and
Y are independent.
If ρ = ±1, then the values of (X, Y ) fall on a straight line. If |ρ| < 1,
then the five population parameters (µx, µy, σx², σy², ρ) determine a unique
bivariate normal pdf. The contours of this joint pdf are concentric ellipses
centered at (µx, µy). We use one of these ellipses to display the basic features
of the bivariate normal pdf in question.
Definition 13.3 Let f denote a nondegenerate (|ρ| < 1) bivariate normal
pdf. The population concentration ellipse is the contour of f that contains
the four points
(µx ± σx, µy ± σy) .
It is not difficult to create an R function that plots concentration ellipses.
The function binorm.ellipse is described in Appendix R and/or can be
obtained from the web page for this book/course.
Example 13.1 The following R commands produce the population
concentration ellipse for a bivariate normal distribution with parameters
µx = 10, µy = 20, σx² = 4, σy² = 16 and ρ = 0.5:
> pop <- c(10,20,4,16,.5)
> binorm.ellipse(pop)
The ellipse plotted by these commands is displayed in Figure 13.1.
Figure 13.1: The population concentration ellipse for a bivariate normal
distribution with parameters (µx, µy, σx², σy², ρ) = (10, 20, 4, 16, 0.5).
Unless the population concentration ellipse is circular, it has a unique
major axis. The line that coincides with this axis is the first principal compo-
nent of the population and plays an important role in multivariate statistics.
We will encounter this line again in Chapter 14.
13.2.2 Bivariate Normal Samples
A bivariate sample is a set of paired observations:
(x1, y1) , (x2, y2) , . . . , (xn, yn) .
We assume that each pair (xi, yi) was independently drawn from the same
bivariate distribution. Bivariate samples are usually stored in an n × 2 data
matrix, 





x1 y1
x2 y2
.
.
.
xn yn






,
and are often displayed by plotting each (xi, yi) in the Cartesian plane. The
resulting figure is called a scatter diagram.
Example 13.2 Twenty students enrolled in Math 351 (Applied Statis-
tics) at the College of William & Mary produced the following scores on two
midterm tests:
x y
87 87
25 57
76 91
84 67
91 67
82 66
94 86
89 74
92 92
76 85
84 75
99 92
92 55
74 74
84 74
94 69
99 98
63 81
82 80
91 85
A scatter diagram of these data is displayed in Figure 13.2. Typically, it is
easier to discern patterns by inspecting a scatter diagram than by inspecting
a table of numbers. In particular, note the presence of an apparent outlier.
Figure 13.2: A scatter diagram of a bivariate sample. Each point corresponds
to a student. The horizontal position of the point represents the student’s
score on the first midterm test; the vertical position of the point represents
the student’s score on the second midterm test.
The population from which the bivariate sample in Example 13.2 was
drawn is not known, so this sample should not be interpreted as a typical
example of a bivariate normal sample. However, it is not difficult to create
an R function that simulates sampling from a specified bivariate normal
population. The function binorm.sample is described in Appendix R and/or
can be obtained from the web page for this book/course.
Example 13.1 (continued) The following R command draws n = 5
observations from the previously specified bivariate normal distribution:
> binorm.sample(pop,5)
[,1] [,2]
[1,] 12.293160 24.07643
[2,] 11.819520 24.13076
[3,] 11.529582 17.28637
[4,] 6.912459 23.39430
[5,] 11.043991 18.12538
Notice that binorm.sample returns the sample in the form of a data matrix.
Having observed a bivariate normal sample, we inquire how to estimate
the five population parameters (µx, µy, σx², σy², ρ). We have already discussed
how to estimate the population means (µx, µy) with the sample means (x̄, ȳ)
and the population variances (σx², σy²) with the sample variances (sx², sy²). The
plug-in estimate of ρ is

ρ̂ = (1/n) Σ_{i=1}^n [((xi − µ̂x)/σ̂x) ((yi − µ̂y)/σ̂y)]
  = (1/n) Σ_{i=1}^n [((xi − x̄)/√((n−1)sx²/n)) ((yi − ȳ)/√((n−1)sy²/n))]
  = (1/(n−1)) Σ_{i=1}^n [((xi − x̄)/sx) ((yi − ȳ)/sy)],

where σ̂x = √(σ̂x²) and σ̂y = √(σ̂y²).
This quantity is Pearson’s product-moment correlation coefficient, usually
denoted r.
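The final expression is easy to compute directly. A minimal sketch, assuming x and y are numeric vectors containing the two columns of the data matrix:

> n <- length(x)
> sum((x - mean(x)) * (y - mean(y))) / ((n - 1) * sd(x) * sd(y))
> cor(x, y)   # R's built-in computation of Pearson's r; same value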
It is not difficult to create an R function that computes the estimates
(x̄, ȳ, sx², sy², r) from a bivariate data matrix. The function binorm.estimate
is described in Appendix R and/or can be obtained from the web page for
this book/course.
Example 13.1 (continued) The following R commands draw n = 100
observations from a bivariate normal distribution with parameters µx = 10,
µy = 20, σx² = 4, σy² = 16 and ρ = 0.5, then estimate the parameters from
the sample:
> Data <- binorm.sample(pop,100)
> binorm.estimate(Data)
[1] 9.8213430 20.3553502 4.2331147 16.7276819 0.5632622
Naturally, the estimates do not equal the estimands because of sampling
variation.
Finally, it is not difficult to create an R function that plots a scatter di-
agram and overlays the sample concentration ellipse, i.e., the concentration
ellipse constructed using the computed sample quantities (x̄, ȳ, sx², sy², r) in-
stead of the unknown population quantities (µx, µy, σx², σy², ρ). The function
binorm.scatter is described in Appendix R and/or can be obtained from
the web page for this book/course.
Example 13.1 (continued) The following R command creates the
overlaid scatter diagram displayed in Figure 13.3:
> binorm.scatter(Data)
When analyzing bivariate data, it is good practice to examine both the
scatter diagram and the sample concentration ellipse in order to ascertain
how well the latter summarizes the former. A poor summary suggests that
the sample may not have been drawn from a bivariate normal distribution,
as in Figure 13.4.
13.2.3 Inferences about Correlation
We have already observed that ρ̂ = r is the plug-in estimate of ρ. In this
section, we consider how to test hypotheses about and construct confidence
intervals for ρ.
Given normal random variables X and Y , an obvious question is whether
or not they are uncorrelated. To answer this question, we test the null
hypothesis H0 : ρ = 0 against the alternative hypothesis H1 : ρ ≠ 0.
(One might also be interested in one-sided hypotheses and ask, for example,
whether or not there is convincing evidence of positive correlation.) We can
derive a test from the following fact about the plug-in estimator of ρ.
Figure 13.3: A scatter diagram of a bivariate normal sample, with the sample
concentration ellipse overlaid.
Theorem 13.3 Suppose that (Xi, Yi), i = 1, . . . , n, are independent pairs
of random variables with a bivariate normal distribution. Let ρ̂ denote the
plug-in estimator of ρ. If Xi and Yi are uncorrelated, i.e., ρ = 0, then

ρ̂ √(n − 2) / √(1 − ρ̂²) ∼ t(n − 2).
Assuming that (Xi, Yi) have a bivariate normal distribution, Theorem
13.3 allows us to compute a significance probability for testing H0 : ρ = 0
versus H1 : ρ ≠ 0. Let T ∼ t(n − 2). Then the probability of observing
|ρ̂| ≥ |r| under H0 is

p = P(|T| ≥ r √(n − 2) / √(1 − r²))
Figure 13.4: A scatter diagram for which the sample concentration ellipse
is a poor summary. These data were not drawn from a bivariate normal
distribution.
and we reject H0 if and only if p ≤ α. Equivalently, we reject H0 if and only
if (iff)

|r √(n − 2) / √(1 − r²)| ≥ qt   iff   r²(n − 2)/(1 − r²) ≥ qt²   iff   r² ≥ qt²/(n − 2 + qt²),

where qt = qt(1 − α/2, n − 2).
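In practice, these calculations are packaged in R's cor.test function. A minimal sketch, assuming x and y are the observed vectors:

> n <- length(x)
> r <- cor(x, y)
> r * sqrt(n - 2) / sqrt(1 - r^2)   # the t statistic of Theorem 13.3
> cor.test(x, y)                    # reports the same statistic and significance probability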
When testing hypotheses about correlation, it is important to appreci-
ate the distinction between statistical significance and material significance.
Strong evidence that an association exists is not the same as evidence of a
strong association. The following examples illustrate the distinction.
Example 13.3 I used binorm.sample to draw a sample of n = 300
observations from a bivariate normal distribution with a population cor-
relation coefficient of ρ = 0.1. This is a rather weak association. I then
used binorm.estimate to compute a sample correlation coefficient of r =
0.16225689. The test statistic is
r √(n − 2) / √(1 − r²) = 2.838604

and the significance probability is

p ≈ 2 ∗ pt(−2.838604, 298) = 0.004842441.
This is fairly decisive evidence that ρ ≠ 0, but concluding that X and Y are
correlated does not warrant concluding that X and Y are strongly correlated.
Example 13.4 I used binorm.sample to draw a sample of n = 10
observations from a bivariate normal distribution with a population corre-
lation coefficient of ρ = 0.8. This is a fairly strong association. I then
used binorm.estimate to compute a sample correlation coefficient of r =
0.3759933. The test statistic is
r √(n − 2) / √(1 − r²) = 1.147684
and the significance probability is
p = 2 ∗ pt(−1.147684, 8) = 0.2842594.
There is scant evidence that ρ ≠ 0, despite the fact that X and Y are
strongly correlated.
Although testing whether or not ρ = 0 is an important decision, it is not
the only inference of interest. For example, if we want to construct confidence
intervals for ρ, then we need to test H0 : ρ = ρ0 versus H1 : ρ ≠ ρ0. To do
so, we rely on an approximation due to Ronald Fisher. Let

ζ = (1/2) log((1 + ρ)/(1 − ρ))

and rewrite the hypotheses as H0 : ζ = ζ0 versus H1 : ζ ≠ ζ0. This is
sometimes called Fisher’s z-transformation. Fisher discovered that

ζ̂ = (1/2) log((1 + ρ̂)/(1 − ρ̂)) is approximately distributed as Normal(ζ, 1/(n − 3)),
which allows us to compute an approximate significance probability. Let
Z ∼ Normal(0, 1) and set

z = (1/2) log((1 + r)/(1 − r)).

Then

p ≈ P(|Z| ≥ |z − ζ0| √(n − 3))

and we reject H0 : ζ = ζ0 if and only if p ≤ α. Equivalently, we reject
H0 : ζ = ζ0 if and only if

|z − ζ0| √(n − 3) ≥ qz,

where qz = qnorm(1 − α/2).

To construct an approximate (1 − α)-level confidence interval for ρ, we
first observe that

z ± qz/√(n − 3)                                                  (13.2)

is an approximate (1 − α)-level confidence interval for ζ. We then use the
inverse of Fisher’s z-transformation,

ρ = (e^(2ζ) − 1)/(e^(2ζ) + 1),

to transform (13.2) to a confidence interval for ρ.
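These steps are easily collected into a single function. The following is a sketch only; fisher.ci is a hypothetical helper, not one of the binorm functions described in Appendix R:

> fisher.ci <- function(r, n, conf = 0.95) {
+     z    <- 0.5 * log((1 + r) / (1 - r))       # Fisher's z-transformation of r
+     qz   <- qnorm(1 - (1 - conf) / 2)
+     zeta <- z + c(-1, 1) * qz / sqrt(n - 3)    # approximate interval for zeta
+     (exp(2 * zeta) - 1) / (exp(2 * zeta) + 1)  # back-transform to the rho scale
+ }

For instance, fisher.ci(0.5, 100) agrees with the interval obtained in Example 13.5 below except in the final decimal places, because the example rounds qz to 1.96.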
Example 13.5 Suppose that we draw n = 100 observations from a
bivariate normal distribution and observe r = 0.5. To construct a 0.95-level
confidence interval, we use qz ≈ 1.96. First we compute

z = (1/2) log((1 + 0.5)/(1 − 0.5)) = 0.5493061

and

z ± qz/√(n − 3) ≈ 0.5493061 ± 1.96/√97 = (0.350302, 0.7483103)

to obtain a confidence interval (a, b) for ζ. The corresponding confidence
interval for ρ is

((e^(2a) − 1)/(e^(2a) + 1), (e^(2b) − 1)/(e^(2b) + 1)) = (0.3366433, 0.6341398).

Notice that the plug-in estimate ρ̂ = r = 0.5 is not the midpoint of this
interval.
13.3 Monotonic Association
13.4 Spurious Association
13.5 Exercises
1. Consider the following data matrix:
4.81310497776088 5.50546805210632
3.20790912734096 3.23537831017746
2.03360531141548 1.57466192734915
3.80353555823225 4.0777212868518
3.44874039566775 3.57596515608872
4.02513467455476 4.39110976256498
4.18921274133904 4.62315118989928
1.57765999081644 0.929857871257454
2.55801286069007 2.31628619574412
3.30197349607145 3.36840541617217
3.49344457748324 3.63918641630698
3.84773963203205 4.14023528753161
1.6571339655711 1.04225104421118
2.01676932918443 1.55085225294214
3.26802020797819 3.32038821566353
3.21119453633111 3.24002458012926
3.98834405943784 4.33907997569859
3.39396169865743 3.49849637984759
3.98470335590536 4.33393124338638
2.92484672005844 2.83506761480053
3.24990948234283 2.98840952533401
4.48210022495756 1.24582866569767
2.49246311350902 4.05960045290903
2.5490793094774 3.97953306072058
3.56806772786439 2.53846581953658
2.58341332552653 3.93097742957316
3.00614070448958 3.33315063705718
3.59845899773574 2.49548607350678
3.24798603840268 2.99112968584062
3.27071210738312 2.95899017086906
3.61265049129421 2.47541627084607
3.98487089689919 1.94901712504748
2.92139406397179 3.453000485443
2.10733672639563 4.60425141279254
3.20304499253985 3.05468592240708
1.84295811639769 4.97813922865297
3.11571443259585 3.17818998468951
3.5505950180758 2.56317596269101
3.41454250084746 2.75558327775034
2.6505463184044 3.83603704056258
(a) Do the x values appear to have been drawn from a normal distri-
bution? Why or why not?
(b) Do the y values appear to have been drawn from a normal distri-
bution? Why or why not?
(c) Do the (x, y) values appear to have been drawn from a bivariate
normal distribution? Why or why not?
(d) Suggest an explanation for the phenomena observed in (a)–(c).
Is this a paradox? How do you think that these (x, y) pairs were
obtained?
Hint: Do not try to type these data into R! They are available elec-
tronically. Assuming that the data matrix is stored in a text file named
ex131.dat, located in the root directory of a diskette, the following
command reads the data into the Windows version of R:
> Data <- matrix(scan("a:ex131.dat"),byrow=T,ncol=2)
The following R commands then create vectors of x and y values:
> x <- Data[,1]
> y <- Data[,2]
2. Consider the test score data reported in Example 13.2.
(a) Quantify the association between midterm test scores by com-
puting Pearson’s product-moment correlation coefficient. Is the
association positive or negative?
(b) Examining the scatter diagram displayed in Figure 13.2, one stu-
dent appears to be an outlier. Omitting the corresponding row of
the data matrix, re-compute Pearson’s product-moment correla-
tion coefficient. How does the outlier affect the value of r?
Hint: If Data is a complete data matrix, then Data[-17,] is the
same data matrix without row 17.
3. Pearson and Lee reported the following heights (in inches) of eleven
pairs of siblings:
sister brother
69 71
64 68
65 66
63 67
65 70
62 71
65 70
64 73
66 72
59 65
62 66
Assuming that these pairs were drawn from a bivariate normal popu-
lation, construct a confidence interval for ρ, the population product-
moment correlation coefficient, that has a confidence level of approxi-
mately 0.90.
Hint: If x is the vector of sister heights and y is the vector of brother
heights (in the same order), then the following R command creates the
above data matrix:
> Data <- cbind(x,y)
4. Let α = 0.05.
(a) Suppose that we sample from a bivariate normal distribution with
ρ = 0.5. Assuming that we observe r = 0.5, how large a sample
will be needed to reject H0 : ρ = 0 in favor of H1 : ρ ≠ 0?
(b) Suppose that we sample from a bivariate normal distribution with
ρ = 0.1. Assuming that we observe r = 0.1, how large a sample
will be needed to reject H0 : ρ = 0 in favor of H1 : ρ ≠ 0?
Chapter 14
Simple Linear Regression
One way to quantify the association between two random variables, X and
Y , is to quantify the extent to which knowledge of X allows one to predict
values of Y . Notice that this approach to association is asymmetric: one
variable (conventionally denoted X) is the predictor variable and the other
variable (conventionally denoted Y ) is the predicted variable. The predictor
variable is often called the independent variable and the predicted variable
is often called the dependent variable. We will eschew this terminology, as
it has nothing to do with the probabilistic (in)dependence of events and
random variables.
14.1 The Regression Line
Suppose that Y ∼ Normal(µy, σy²) and that we want to predict the outcome
of an experiment in which we observe Y . If we know µy, then the obvious
value of Y to predict is EY = µy. The expected value of the squared error
of this prediction is E(Y − µy)² = Var Y = σy².
Now suppose that X ∼ Normal(µx, σx²) and that we observe X = x.
Again we want to predict Y . Does knowing X = x allow us to predict Y
more accurately? The answer depends on the association between X and Y .
If X and Y are independent, then knowing X = x will not help us predict Y .
If X and Y are dependent, then knowing X = x should help us predict Y .
Example 14.1 Suppose that we want to predict the adult height to
which a male baby will grow. Knowing only that adult male heights are
normally distributed, we would predict the average height of this population.
However, if we knew that the baby’s father had attained a height of 6′-11′′,
then we surely would be inclined to revise our prediction and predict that
the baby will grow to a greater-than-average height.
When X and Y are normally distributed, the key to predicting Y from
X = x is the following result.
Theorem 14.1 Suppose that (X, Y ) have a bivariate normal distribution
with parameters (µx, µy, σx², σy², ρ). Then the conditional distribution of Y
given X = x is

Y |X = x ∼ Normal(µy + ρ (σy/σx)(x − µx), (1 − ρ²) σy²).

Because Y |X = x is normally distributed, the obvious value of Y to
predict when X = x is

ŷ(x) = E(Y |X = x) = µy + ρ (σy/σx)(x − µx).                     (14.1)
Interpreting (14.1) as a function that assigns a predicted value of Y to each
value of x, we see that the prediction function (14.1) corresponds to a line
that passes through the point (µx, µy) with slope ρ σy/σx. The prediction
function (14.1) is the population regression function and the corresponding
line is the population regression line.
The expected squared error of the prediction (14.1) is

Var(Y |X = x) = (1 − ρ²) σy².

Notice that this quantity does not depend on the value of x. If X and Y are
strongly correlated, then ρ ≈ ±1, (1 − ρ²)σy² ≈ 0, and prediction is extremely
accurate. If X and Y are uncorrelated, then ρ = 0, (1 − ρ²)σy² = σy²,
and the accuracy of prediction is not improved by knowing X = x. These
remarks suggest a natural way of interpreting what ρ actually measures: the
proportion by which the expected squared error of prediction is reduced by
virtue of knowing X = x is

[σy² − (1 − ρ²)σy²] / σy² = ρ²,
the population coefficient of determination. Statisticians often express this
interpretation by saying that ρ2 is “the proportion of variation explained by
linear regression.” Of course, as we emphasized in Section 13.4, this is not
an explanation in the sense of articulating a causal mechanism.
Example 14.2 Suppose that (µx, µy, σx², σy², ρ) = (10, 20, 2², 4², 0.5).
Then

ŷ(x) = 20 + 0.5 · (4/2) · (x − 10) = x + 10

and ρ² = 0.25.

Rewriting (14.1), the equation for the population regression line, as

(ŷ(x) − µy)/σy = ρ (x − µx)/σx,
we discern an important fact:
Corollary 14.1 Suppose that (x, y) lies on the population regression line.
If x lies z standard deviations above µx, then y lies ρz standard deviations
above µy.
Example 14.2 (continued) The value x = 12 lies (12 − 10)/2 = 1
standard deviations above the X-population mean, µx = 10. The predicted
y-value that corresponds to x = 12, ŷ(12) = 12 + 10 = 22, lies (22 − 20)/4 =
0.5 standard deviations above the Y -population mean, µy = 20.
Example 14.2 (continued) The 0.90 quantile of X is
x = qnorm(.9,mean=10,sd=2) = 12.5631.
The predicted y-value that corresponds to x = 12.5631 is ŷ(12.5631) =
22.5631. At what quantile of Y does the predicted y-value lie? The answer
is
P (Y ≤ ŷ(x)) = pnorm(22.5631,mean=20,sd=4) = 0.7391658.
At first, most students find the preceding example counterintuitive. If x
lies at the 0.90 quantile of X, then should we not predict ŷ(x) to lie at the
0.90 quantile of Y ? This is a natural first impression, but one that must be
dispelled. We begin by considering two familiar situations:
1. Consider the case of a young boy whose father is extremely tall, at
the 0.995 quantile of adult male heights. We surely would predict that
the boy will grow to be quite tall. But precisely how tall? A father’s
height does not completely determine his son’s height. Height is also
affected by myriad other factors, considered here as chance variation.
Statistically speaking, it’s more likely that the boy will grow to an
adult height slightly shorter than his extremely tall father than that
he will grow to be even taller.
2. Consider the case of two college freshmen, William and Mary, who are
enrolled in an introductory chemistry class of 250 students. On the first
midterm examination, Mary attains the 5th highest score and William
obtains the 245th highest (5th lowest) score. How should we predict
their respective performances on the second midterm examination?
There is undoubtedly a strong, positive correlation between scores on
the two tests. We surely will predict that Mary will do quite well on the
second test and that William will do rather badly. But how well and
how badly? One test score does not completely determine another—if
it did, then computing semester grades would be easy! Mary can’t
do much better on the second test than she did on the first, but she
might easily do worse. Statistically speaking, it’s likely that she’ll rank
slightly below 5th on the second test. Likewise, William can’t do much
worse on the second test than he did on the first. Statistically speaking,
it’s likely that he’ll rank slightly above 245th on the second test.
The phenomenon that we have just described, that experimental units
with extreme X quantiles will tend to have less extreme Y quantiles, is
purely statistical. It was first discerned by Sir Francis Galton, who called
it “regression to mediocrity.” Modern statisticians call it regression to the
mean, or simply the regression effect.
Having refined our intuition, we can now explain the regression effect
by examining the population concentration ellipse in Figure 14.1. For sim-
plicity, we assume that X and Y have been converted to standard units.
The bivariate normal population represented in Figure 14.1 has population
parameters µx = µy = 0, σx² = σy² = 1, and ρ = 0.5. Recall that the line
that coincides with the major axis of the ellipse is called the first principal
component. In Figure 14.1, the first principal component is the line y = x
and the regression line is the line y = x/2. Both lines pass through the point
(µx, µy) = (0, 0), but their slopes differ by a factor of |ρ| = 0.5.
Let us explore the implications of the fact that, if |ρ| < 1, then the
regression line does not coincide with the major axis of the concentration
ellipse. Given X = x, it might seem tempting to predict Y = x. But this
Figure 14.1: The Regression Effect.
would be a mistake! Here, x > µx and clearly

P(Y > x | X = x) < 1/2,

so ŷ(x) = x overpredicts Y |X = x. Similarly, ŷ(−x) = −x underpredicts
Y |X = −x.
The population regression line is the line of conditional expected values,
y = E(Y |X = x). Let (x, a) and (x, b) denote the lower and upper points at
which the vertical line X = x intersects the population concentration ellipse.
As one might guess, it turns out that

ŷ(x) = E(Y |X = x) = (a + b)/2.
However, the midpoint of the vertical line segment that connects (x, a)
and (x, b) is not (x, x). The discrepancy between using the first princi-
pal component to predict ŷ(x) = x and using the regression line to predict
ŷ(x) = (a + b)/2, indicated by an arrow in Figure 14.1, is the regression
effect.
The correlation coefficient ρ mediates the strength of the regression effect.
If ρ = ±1, then

(Y − µy)/σy = ±(X − µx)/σx

and Y is completely determined by X. In this case there is no regression
effect: if x lies z standard deviations above µx, then we know that y lies z
standard deviations above µy. At the other extreme, if ρ = 0, then knowing
X = x does not reduce the expected squared error of prediction at all. In
this case, we regress all the way to the mean: regardless of where x lies, we
predict ŷ = µy.
Thus far, we have focussed on predicting Y from X = x in the case that
the population concentration ellipse is known. We have done so in order to
emphasize that the regression effect is an inherent property of prediction, not
a statistical anomaly caused by chance variation. In practice, however, the
population concentration ellipse typically is not known and we must rely on
the sample concentration ellipse, estimated from bivariate data. This means
that we must substitute (x̄, ȳ, sx², sy², r) for (µx, µy, σx², σy², ρ). The sample
regression function is

ŷ(x) = ȳ + r (sy/sx)(x − x̄)                                      (14.2)

and the corresponding line is the sample regression line. Notice that the
slope of the sample regression line does not depend on whether we use plug-
in or unbiased estimates of the population variances. The variances affect
the regression line through the (square root of) their ratio,

σ̂y²/σ̂x² = [(1/n) Σ_{i=1}^n (yi − ȳ)²] / [(1/n) Σ_{i=1}^n (xi − x̄)²]
        = [(1/(n−1)) Σ_{i=1}^n (yi − ȳ)²] / [(1/(n−1)) Σ_{i=1}^n (xi − x̄)²] = sy²/sx²,
which is not affected by the choice of plug-in or unbiased.
Example 14.2 (continued) I used binorm.sample to draw a sample
of n = 100 observations from a bivariate normal distribution with parameters

pop = (µx, µy, σx², σy², ρ) = (10, 20, 2², 4², 0.5).

I then used binorm.estimate to compute sample estimates of pop, obtaining

est = (x̄, ȳ, sx², sy², r) = (10.0006837, 19.3985929, 4.4512393, 14.1754248, 0.4707309).
The resulting formula for the sample regression line is

ŷ(x) = ȳ + r (sy/sx)(x − x̄) = ȳ + 0.840040 (x − x̄) = 10.99761 + 0.840040x.
It is not difficult to create an R function that plots a scatter diagram of the
sample and overlays both the sample concentration ellipse and the sample
regression line. The function binorm.regress is described in Appendix R
and/or can be obtained from the web page for this book/course. The com-
mands used in this example are as follows:
> pop <- c(10,20,4,16,.5)
> Data <- binorm.sample(pop,100)
> est <- binorm.estimate(Data)
> binorm.regress(Data)
The scatter diagram created by binorm.regress is displayed in Figure 14.2.
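As a check (a sketch assuming Data still holds the sample drawn above), the same line can be recovered from R's built-in least-squares fit:

> x <- Data[,1]
> y <- Data[,2]
> b <- cor(x, y) * sd(y) / sd(x)   # slope r * sy/sx
> a <- mean(y) - b * mean(x)       # intercept ybar - slope * xbar
> c(a, b)
> coef(lm(y ~ x))                  # least-squares fit; same intercept and slope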
14.2 The Method of Least Squares
In Section 14.1 we derived the regression line from properties of bivariate
normal distributions. Having derived it, we now note that the sample re-
gression line can be computed from any set of n ≥ 2 points (xi, yi) ∈ ℜ2 for
which the xi assume more than one distinct value (and therefore sx > 0). In
this section, we derive the regression line in this more general setting.
Given points (xi, yi) ∈ ℜ2, i = 1, . . . , n, we ask two conceptually distinct
questions:
1. What line best summarizes the (x, y) pairs?
2. What line best predicts values of y from values of x?
We will answer each of these questions by applying the method of least
squares. The possible lines are of the form y = a + bx. Given a candidate
line, we measure the error between the line and each (xi, yi), then sum the
Figure 14.2: Scatter diagram, sample concentration ellipse, and sample re-
gression line of n = 100 observations sampled from a bivariate normal dis-
tribution. Notice that the sample regression line is not the major axis of the
sample concentration ellipse.
squared errors from i = 1, . . . , n. The best line is the one that minimizes
this sum of squared errors:
min_{a,b} Σ_{i=1}^n [error((xi, yi), y = a + bx)]²                (14.3)
The distinction between (1) summary and (2) prediction lies in how we define
error.
To define the line that best summarizes the (x, y) pairs, it is natural to
define the error between a point and a line as the Euclidean distance from the
point to the line. This is found by measuring the length of the perpendicular
Figure 14.3: Perpendicular Errors for Summary
line segment that connects them, as in Figure 14.3. Thus,
summary
error
Ã
(xi, yi)
y = a + bx
!
= perpendicular
distance
Ã
(xi, yi)
y = a + bx
!
.
Using this definition of error, the solution of Problem 14.3 is the major
axis of the sample concentration ellipse, the first principal component of the
sample. We emphasize: the first principal component is used for summary,
not prediction.
In contrast, to define the line that best predicts y values from x values,
it is natural to define the error between a point (xi, yi) and a line y = a + bx
as the difference between the observed value y = yi and the predicted value
y = ŷ (xi) = a + bxi.
The difference yi − ŷ(xi) is a residual error and the absolute difference |yi −
ŷ(xi)| is the length of the vertical line segment that connects (xi, yi) and
y = a + bx, as in Figure 14.4. Using this definition of error, the solution of
Problem 14.3 is the sample regression line.

[Figure 14.4: Vertical Errors for Prediction]

We emphasize: the regression
line is used for prediction, not summary.
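The contrast between the two lines is easy to explore in R. The following sketch uses simulated data (the numbers are illustrative, not taken from any example in this chapter) and compares the least-squares slope r sy/sx with the slope of the major axis of the sample concentration ellipse, obtained from the leading eigenvector of the sample covariance matrix:

set.seed(1)
x <- rnorm(100)
y <- 0.5*x + rnorm(100,sd=2)
b.ls <- cor(x,y)*sd(y)/sd(x)            # least-squares (prediction) slope
E <- eigen(cov(cbind(x,y)))             # principal axes of the sample
b.pc <- E$vectors[2,1]/E$vectors[1,1]   # slope of the first principal component
c(b.ls,b.pc)                            # the two slopes generally differ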
The remainder of this section provides a more detailed exposition of the
squared error approach to prediction. Let
SS(a, b) = Σ_{i=1}^n ( yi − a − bxi )²,
the sum of the squared residual errors that result from the prediction func-
tion ŷ(x) = a + bx. The method of least squares chooses (a, b) to minimize
SS(a, b). Before analyzing this problem, we first consider an easier prob-
lem. If we knew {y1, . . . , yn} but not the corresponding {x1, . . . , xn}, then it
would be impossible to measure errors associated with prediction functions
that involve x. In this situation we would be forced to restrict attention
to prediction functions of the form ŷ = a, which corresponds to restricting
attention to lines with zero slope. The method of least squares then chooses
a to minimize
Σ_{i=1}^n ( yi − a )² = SS(a, 0).
Theorem 14.2 The value of a that minimizes SS(a, 0) is a = ȳ.
Proof We can conclude that SS(a, 0)/n is minimal when a = ȳ by
applying part (2) of Theorem 6.1 to the empirical distribution of {y1, . . . , yn};
however, it is instructive to verify this conclusion by direct calculation:
SS(a, 0) = Σ_{i=1}^n ( yi − a )²
         = Σ_{i=1}^n ( yi − ȳ + ȳ − a )²
         = Σ_{i=1}^n ( yi − ȳ )² + Σ_{i=1}^n 2( yi − ȳ )( ȳ − a ) + Σ_{i=1}^n ( ȳ − a )²
         = (n − 1)sy² + 2( ȳ − a )[ Σ_{i=1}^n yi − nȳ ] + n( ȳ − a )²
         = (n − 1)sy² + n( ȳ − a )².
The second term in this expression is the only term that involves a. It
achieves its minimal value of zero when a = ȳ. ✷
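A quick numerical check of Theorem 14.2 is easy to perform in R. The data below are made up for illustration; optimize searches numerically for the minimizing a, and the result agrees with the sample mean:

y <- c(2.1,3.7,5.0,4.4,6.2)
SS0 <- function(a) sum((y-a)^2)
optimize(SS0,interval=range(y))$minimum   # numerical minimizer of SS(a,0)
mean(y)                                   # agrees with the sample mean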
For future reference, we define the total sum of squares to be
SST = SS(ȳ, 0) = Σ_{i=1}^n ( yi − ȳ )² = (n − 1)sy².
This is the smallest squared error possible when predicting y without infor-
mation about x.
Now we consider the problem of finding the line y = a + bx that best
predicts values of y from values of x. The method of least squares chooses
(a, b) to minimize SS(a, b). Let (a∗, b∗) denote the minimizing values of (a, b)
and define the error sum of squares to be
SSE = SS(a∗, b∗).
Because we have not restricted attention to b = 0, ŷ(x) = a∗ + b∗x must
predict at least as well as ŷ = ȳ. Thus,
SSE = SS(a∗, b∗) ≤ SS(ȳ, 0) = SST.
We have already stated that y = a∗ + b∗x is the sample regression line.
We can verify that statement by a calculation that resembles the proof of
Theorem 14.2.
Theorem 14.3 Let (xi, yi) ∈ ℜ2, i = 1, . . . , n, be a set of (x, y) pairs with
at least two distinct values of x. Let
b∗ = r sy/sx   and   a∗ = ȳ − b∗x̄.
Then
SS(a∗, b∗) ≤ SS(a, b)
for all choices of (a, b).
Proof First, write
SS(a, b) = Σ_{i=1}^n ( yi − a − bxi )²
         = Σ_{i=1}^n ( yi − ȳ + ȳ − bx̄ + bx̄ − a − bxi )²
         = Σ_{i=1}^n [ ( yi − ȳ ) + ( ȳ − bx̄ − a ) − b( xi − x̄ ) ]².
Expanding the square in this expression results in six terms. The three
squared terms are:
Σ_{i=1}^n ( yi − ȳ )² = (n − 1)sy²,

Σ_{i=1}^n ( ȳ − bx̄ − a )² = n( ȳ − bx̄ − a )²,

Σ_{i=1}^n (−b)²( xi − x̄ )² = b² Σ_{i=1}^n ( xi − x̄ )² = b²(n − 1)sx².
The three cross-product terms are:
Σ_{i=1}^n 2( yi − ȳ )( ȳ − bx̄ − a ) = 2( ȳ − bx̄ − a ) Σ_{i=1}^n ( yi − ȳ )
    = 2( ȳ − bx̄ − a )[ Σ_{i=1}^n yi − nȳ ] = 0,

Σ_{i=1}^n 2( yi − ȳ )(−b)( xi − x̄ ) = −2b Σ_{i=1}^n ( xi − x̄ )( yi − ȳ )
    = −2b(n − 1)sx sy · [ (1/(n−1)) Σ_{i=1}^n ( yi − ȳ )( xi − x̄ ) ] / ( sx sy )
    = −2b(n − 1)sx sy r,

Σ_{i=1}^n 2( ȳ − bx̄ − a )(−b)( xi − x̄ ) = −2b( ȳ − bx̄ − a ) Σ_{i=1}^n ( xi − x̄ ) = 0.
Hence,
SS(a, b) = (n − 1)sy² + n( ȳ − bx̄ − a )² + b²(n − 1)sx² − 2b(n − 1)sx sy r
         = n( ȳ − bx̄ − a )² + (n − 1)[ b²sx² − 2b sx r sy + r²sy² ]
               − (n − 1)r²sy² + (n − 1)sy²
         = n( ȳ − bx̄ − a )² + (n − 1)[ b sx − r sy ]² + ( 1 − r² )(n − 1)sy².
The third term in this expression does not involve b or a. The second term
achieves its minimal value of zero when b = rsy/sx = b∗. The first term
is the only term that involves a. Whatever the value of b, the first term
achieves its minimal value of zero when a = ȳ − bx̄. Hence, for b = b∗, the
minimizing value of a is a = ȳ − b∗x̄ = a∗. ✷
The total sum of squares, SST , measures the prediction error from ŷ = ȳ.
The error sum of squares,
SSE = SS(a∗, b∗) = Σ_{i=1}^n [ yi − ( ȳ − b∗x̄ ) − b∗xi ]²
    = Σ_{i=1}^n [ yi − ȳ − b∗( xi − x̄ ) ]²
    = Σ_{i=1}^n [ ( yi − ȳ ) − r ( sy/sx )( xi − x̄ ) ]²
    = Σ_{i=1}^n ( yi − ȳ )² − 2r ( sy/sx ) Σ_{i=1}^n ( xi − x̄ )( yi − ȳ )
          + r² ( sy²/sx² ) Σ_{i=1}^n ( xi − x̄ )²
    = (n − 1)sy² − 2r sy²(n − 1) · [ (1/(n−1)) Σ_{i=1}^n ( xi − x̄ )( yi − ȳ ) ] / ( sx sy )
          + r² sy²(n − 1)
    = (n − 1)sy² − 2(n − 1)sy²r² + r²(n − 1)sy²
    = (n − 1)sy² ( 1 − r² ) = ( 1 − r² ) SST,
measures the prediction error from the sample regression line. Now we de-
fine the regression sum of squares to be the sum of the squared differences
between the two predictions,
SSR = Σ_{i=1}^n [ ŷ − ŷ(xi) ]² = Σ_{i=1}^n [ ȳ − ȳ − r ( sy/sx )( xi − x̄ ) ]²
    = r² ( sy²/sx² ) Σ_{i=1}^n ( xi − x̄ )² = r² sy²(n − 1) = r² SST.
The three sums of squares (SSR, SSE, SST ) are precisely analogous to the
three sums of squares (SSB, SSW , SST ) that arise in the analysis of variance
and they enjoy an identical property:
SSR + SSE = r² SST + ( 1 − r² ) SST = SST.
This is the Pythagorean Theorem in n-dimensional Euclidean space! The
points
A = ( ȳ, . . . , ȳ ),   B = ( ȳ + r ( sy/sx )( x1 − x̄ ), . . . , ȳ + r ( sy/sx )( xn − x̄ ) ),   C = ( y1, . . . , yn )
are the vertices of a right triangle in ℜn. The right angle occurs at vertex
B. The squared Euclidean distances of the sides that meet at B are
d²(A, B) = SSR and d²(B, C) = SSE,
and the squared Euclidean distance of the hypotenuse is
d²(A, C) = SST,
so
d²(A, B) + d²(B, C) = SSR + SSE = SST = d²(A, C).
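This relation is easily verified numerically. The following R sketch simulates a small bivariate sample (the data are purely illustrative), forms the three vectors A, B, and C, and checks that the squared distances satisfy the identity:

set.seed(2)
x <- runif(50)
y <- 1 + 2*x + rnorm(50,sd=0.5)
r <- cor(x,y)
b <- r*sd(y)/sd(x)
A <- rep(mean(y),50)             # constant prediction ybar
B <- mean(y) + b*(x-mean(x))     # predictions from the sample regression line
C <- y                           # observed values
SSR <- sum((A-B)^2)
SSE <- sum((B-C)^2)
SST <- sum((A-C)^2)
c(SSR+SSE,SST)                   # the two quantities agree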
To quantify the extent to which knowledge of x improves our ability to
predict y, we measure the proportion by which the squared error of prediction
is reduced when we use the sample regression line instead of the constant
prediction ŷ = ȳ. This proportion is just
[ SS(ȳ, 0) − SS(a∗, b∗) ] / SS(ȳ, 0) = ( SST − SSE )/SST = SSR/SST = r²SST/SST = r²,
the sample coefficient of determination. Again, we conclude that the square
of Pearson’s product-moment correlation coefficient measures the proportion
of variation “explained” by simple linear regression.
Example 14.2 (continued) For the bivariate sample displayed in Fig-
ure 14.2, the total sum of squares is
SST = (n − 1)sy² = 99 · 14.1754248 = 1403.3671
and the coefficient of determination is
r² = 0.4707309² = 0.2215876.
Hence, the regression sum of squares is
SSR = r² SST = 0.2215876 · 1403.3671 = 310.9688
and the error sum of squares is
SSE = SST − SSR = 1403.3671 − 310.9688 = 1092.3983.
14.3 Computation
A bivariate sample consists of 2n numbers. However, all of the quantities
used in the preceding sections can be computed from just six fundamental
quantities:
n,   Σ_{i=1}^n xi,   Σ_{i=1}^n yi,   Σ_{i=1}^n xi²,   Σ_{i=1}^n yi²,   Σ_{i=1}^n xiyi.
These quantities are used by many calculators. One reason that they are
so convenient is that they are easily incremented as new (x, y) pairs are
observed.
Example 14.2 (continued) For the bivariate sample displayed in Fig-
ure 14.2, the six fundamental quantities are as follows:
n = 100,   Σ_{i=1}^n xi = 1000.068,   Σ_{i=1}^n yi = 1939.859,
Σ_{i=1}^n xi² = 10442.04,   Σ_{i=1}^n yi² = 39033.91,   Σ_{i=1}^n xiyi = 19770.1.
Now suppose that we draw another (x, y) pair from the same population, say
(8.9, 13.5). Then the new sample has the following fundamental quantities:
n = 100 + 1,   Σ_{i=1}^n xi = 1000.068 + 8.9,   Σ_{i=1}^n yi = 1939.859 + 13.5,
Σ_{i=1}^n xi² = 10442.04 + 8.9²,   Σ_{i=1}^n yi² = 39033.91 + 13.5²,
Σ_{i=1}^n xiyi = 19770.1 + 8.9 · 13.5.
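In R, each update is a one-line affair. The following sketch stores the six fundamental quantities in a list and updates them when a new (x, y) pair arrives; the function name and list layout are my own choices for illustration, not part of any R package:

update.fundamental <- function(q,x,y) {
  list(n=q$n+1, sx=q$sx+x, sy=q$sy+y,
       sxx=q$sxx+x^2, syy=q$syy+y^2, sxy=q$sxy+x*y)
}
q <- list(n=100, sx=1000.068, sy=1939.859,
          sxx=10442.04, syy=39033.91, sxy=19770.1)
q <- update.fundamental(q,8.9,13.5)   # incorporate the new pair (8.9, 13.5)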
Three useful quantities are easily computed from the six fundamental
quantities:
txx = Σ_{i=1}^n ( xi − x̄ )( xi − x̄ ) = Σ_{i=1}^n ( xi² − 2x̄xi + x̄² )
    = Σ_{i=1}^n xi² − 2x̄ Σ_{i=1}^n xi + nx̄² = Σ_{i=1}^n xi² − 2nx̄² + nx̄²
    = Σ_{i=1}^n xi² − (1/n)( Σ_{i=1}^n xi )²

tyy = Σ_{i=1}^n ( yi − ȳ )( yi − ȳ ) = Σ_{i=1}^n yi² − (1/n)( Σ_{i=1}^n yi )²

txy = Σ_{i=1}^n ( xi − x̄ )( yi − ȳ ) = Σ_{i=1}^n ( xiyi − ȳxi − x̄yi + x̄ȳ )
    = Σ_{i=1}^n xiyi − ȳ Σ_{i=1}^n xi − x̄ Σ_{i=1}^n yi + nx̄ȳ = Σ_{i=1}^n xiyi − nx̄ȳ
    = Σ_{i=1}^n xiyi − (1/n)( Σ_{i=1}^n xi )( Σ_{i=1}^n yi )
These quantities are useful because all of the important quantities derived in
the preceding sections are easily computed from them. Here are the formulas:
1. Sample variances:

   sx² = txx/(n − 1),   sy² = tyy/(n − 1).

2. Pearson's correlation coefficient:

   r = [ (1/(n−1)) Σ_{i=1}^n ( xi − x̄ )( yi − ȳ ) ] / ( sx sy ) = txy / ( √txx √tyy ),
   r² = txy² / ( txx tyy ).

3. Sample regression coefficients:

   b∗ = r sy/sx = [ (1/(n−1)) Σ_{i=1}^n ( xi − x̄ )( yi − ȳ ) ] / sx² = txy/txx,
   a∗ = ȳ − b∗x̄ = (1/n) Σ_{i=1}^n yi − ( txy/txx )(1/n) Σ_{i=1}^n xi.

4. Sums of squares:

   SST = Σ_{i=1}^n ( yi − ȳ )² = tyy,
   SSR = r² SST = [ txy² / ( txx tyy ) ] tyy = txy²/txx,
   SSE = SST − SSR = tyy − txy²/txx.
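The following R sketch illustrates these formulas on a simulated bivariate sample (the data and parameter values are arbitrary) and checks the resulting coefficients against R's built-in lm function:

set.seed(3)
x <- rnorm(100)
y <- 2 + 0.5*x + rnorm(100)
n <- length(x)
sx <- sum(x); sy <- sum(y)
sxx <- sum(x^2); syy <- sum(y^2); sxy <- sum(x*y)
txx <- sxx - sx^2/n
tyy <- syy - sy^2/n
txy <- sxy - sx*sy/n
b <- txy/txx                 # slope b*
a <- sy/n - b*sx/n           # intercept a*
r2 <- txy^2/(txx*tyy)        # coefficient of determination
SST <- tyy; SSR <- txy^2/txx; SSE <- SST - SSR
c(a,b)
coef(lm(y ~ x))              # agrees with the formulas above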
14.4 The Simple Linear Regression Model
Let x1, . . . , xn be a list of real numbers for which sx > 0. Suppose that:
1. Associated with each xi is a random variable
Yi ∼ Normal( µi, σ² ).
Notice that the Yi have a common population variance σ² > 0. This
is analogous to the homoscedasticity assumption of the analysis of
variance.
2. The population means µi satisfy the linear relation
µi = β0 + β1xi
for some β0, β1 ∈ ℜ. The population parameters (β0, β1) are called the
population regression coefficients.
These assumptions define the simple linear regression model. Suppose that
we sample from a bivariate normal distribution, then condition on the ob-
served values x1, . . . , xn. It follows from Theorem 14.1 that this is a special
case of the simple linear regression model in which
β1 = ρ σy/σx,
β0 = µy − ρ ( σy/σx ) µx = µy − β1µx,
σ² = ( 1 − ρ² ) σy².
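For concreteness, here is how one might simulate responses from the simple linear regression model in R; the design points and the parameter values β0 = 1, β1 = 2, and σ = 0.5 are arbitrary choices for illustration:

x <- seq(from=0,to=10,length=50)   # fixed values of x
mu <- 1 + 2*x                      # population means mu_i = beta0 + beta1*x_i
Y <- rnorm(50,mean=mu,sd=0.5)      # Y_i ~ Normal(mu_i, sigma^2)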
The simple linear regression model has three unknown parameters. The
method of least squares estimates (β0, β1) by
β̂1 = b∗ = r sy/sx = txy/txx,
β̂0 = a∗ = ȳ − β̂1x̄.
These are also the plug-in estimates of (β0, β1), and the plug-in estimate of
σ² is
σ̂² = (1/n) Σ_{i=1}^n ( yi − β̂0 − β̂1xi )² = (1/n) SSE.
We proceed to explore some properties of the corresponding estimators.
These properties are consequences of the following key facts:
Theorem 14.4 Under the assumptions of the simple linear regression model,
the random variables β̂1 and SSE are independent and satisfy
β̂1 ∼ Normal( β1, σ²/txx )        (14.4)

SSE/σ² ∼ χ²(n − 2).        (14.5)
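Fact (14.4) is easy to illustrate by simulation. The sketch below (with arbitrary design points and parameter values) draws repeated samples from the model, computes the slope estimate txy/txx for each, and compares the simulated variance of the slope estimator with σ²/txx:

x <- 1:20
txx <- sum((x-mean(x))^2)
sigma <- 2
b1 <- replicate(5000, {
  Y <- rnorm(20,mean=3+0.5*x,sd=sigma)
  sum((x-mean(x))*(Y-mean(Y)))/txx   # slope estimate txy/txx
})
c(var(b1),sigma^2/txx)               # the two variances are close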
It follows from (14.4) that Eβ̂1 = β1, and consequently that
Eβ̂0 = E( (1/n) Σ_{i=1}^n Yi − β̂1 (1/n) Σ_{i=1}^n xi ) = (1/n) Σ_{i=1}^n E( Yi − β̂1xi )
     = (1/n) Σ_{i=1}^n ( β0 + β1xi − β1xi ) = (1/n) Σ_{i=1}^n β0 = β0.
Thus, (β̂0, β̂1) are unbiased estimators of (β0, β1). Furthermore, it follows
from (14.5) and Corollary 5.1 that E(SSE/σ²) = n − 2. Hence, E[SSE/(n − 2)] = σ² and
MSE = SSE/(n − 2)
is an unbiased estimator of σ².
Converting (14.4) to standard units results in
( β̂1 − β1 ) / √( σ²/txx ) ∼ Normal(0, 1).        (14.6)
Dividing (14.6) by (14.5), it follows from Definition 5.7 that
[ ( β̂1 − β1 ) / √( σ²/txx ) ] / √[ ( SSE/σ² )/(n − 2) ] = ( β̂1 − β1 ) / √( MSE/txx ) ∼ t(n − 2).
This fact allows us to construct confidence intervals for β1. Given α, we first
compute the critical value
qt = qt(1 − α/2, n − 2).
Then
β̂1 ± qt √( MSE/txx )
is a (1 − α)-level confidence interval for β1.
Remark: It may be helpful to write
MSE/txx = ( 1 − r² ) SST/(n − 2) / [ (n − 1)sx² ]
        = ( 1 − r² )(n − 1)sy²/(n − 2) / [ (n − 1)sx² ]
        = ( 1 − r² )( sy²/sx² )/(n − 2).
Example 14.3 Suppose that n = 100 bivariate observations produce
the following estimates:
x̄ = 97.255564        ȳ = 103.872210
sx² = 425.062476        sy² = 872.229230        r = −0.485857
To construct a 0.95-level confidence interval for β1, we first compute
β̂1 = r sy/sx = −0.485857 · √( 872.229230/425.062476 ) = −0.695981,
qt = qt(.975, df = 98) = 1.984467, and
MSE/txx = ( 1 − r² )/(n − 2) · sy²/sx² = ( 1 − 0.485857² )/98 · 872.229230/425.062476 = 0.01599605.
The desired confidence interval is then
β̂1 ± qt √( MSE/txx ) = −0.695981 ± 1.984467 · √0.01599605 = (−0.9469675, −0.4449945).
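These calculations are easily reproduced in R from the summary statistics alone; the numbers below are those given above for Example 14.3:

n <- 100; r <- -0.485857
vx <- 425.062476; vy <- 872.229230
b1 <- r*sqrt(vy/vx)                       # estimated slope
se2 <- (1-r^2)/(n-2)*vy/vx                # MSE/txx
b1 + c(-1,1)*qt(.975,df=n-2)*sqrt(se2)    # 0.95-level confidence interval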
Next we consider how to test H0 : β1 = 0 versus H1 : β1 ≠ 0. This
is an important decision because rejecting H0 : β1 = 0 means that we are
convinced that values of x help us to predict values of y. Furthermore, if we
sampled from a bivariate normal population, then
β1 = ρ σy/σx = 0
if and only if ρ = 0. Because normal random variables X and Y are inde-
pendent if and only if they are uncorrelated, the null hypothesis H0 : β1 = 0
is equivalent to the null hypothesis that X and Y are independent.
If β1 = 0, then
β̂1 / √( MSE/txx ) ∼ t(n − 2).
Hence, the significance probability for testing H0 : β1 = 0 is
p = P( |T| ≥ | β̂1 / √( MSE/txx ) | ),
where the random variable T ∼ t(n − 2), and we reject H0 : β1 = 0 if and
only if p ≤ α. Equivalently, we reject H0 : β1 = 0 if and only if we observe
| β̂1 / √( MSE/txx ) | ≥ qt,
where qt is the critical value defined above. Notice that
β̂1 / √( MSE/txx ) = ( txy/txx ) / √( MSE/txx ) = ( txy/√txx ) · 1/√( SSE/(n − 2) )
    = [ txy / ( √txx √tyy ) ] · √tyy √(n − 2) / √( tyy − txy²/txx )
    = r √(n − 2) / √( 1 − txy²/(txx tyy) ) = r √(n − 2) / √( 1 − r² ),
so this is the same t-test that we described in Section 13.2.3 for testing
H0 : ρ = 0 versus H1 : ρ ≠ 0.
It follows from Theorem 5.5 that
[ β̂1 / √( MSE/txx ) ]² ∼ F(1, n − 2).
Hence, an F-test that is equivalent to the t-test derived in the preceding
paragraph rejects H0 : β1 = 0 if and only if we observe
(n − 2) r²/( 1 − r² ) ≥ qF ,
where the critical value qF is defined by
qF = qf(1 − α, 1, n − 2).
Equivalently, we reject H0 : β1 = 0 if and only if the significance probability
p = P( F ≥ (n − 2) r²/( 1 − r² ) ) ≤ α,
where the random variable F ∼ F(1, n − 2). The results of the F-test of
H0 : β1 = 0 are traditionally presented in the form of an ANOVA table:
Source of     Sum of           Degrees of   Mean                  F-Test                 p-
Variation     Squares          Freedom      Square                Statistic              Value
Regression    r²SST            1            r²SST                 (n − 2) r²/(1 − r²)    p
Error         (1 − r²)SST      n − 2        (1 − r²)SST/(n − 2)
Total         SST
Example 14.3 (continued) Let us now test H0 : β1 = 0 versus H1 :
β1 ≠ 0 at a significance level of α = 0.05. Of course, we know that we will
reject H0 because the 0.95-level confidence interval constructed from these
data did not contain the hypothesized slope β1 = 0.
The t-test statistic is
t = β̂1 / √( MSE/txx ) = −0.695981 / √0.01599605 = −5.502893,
which results in a significance probability of
p = 2 ∗ pt(−5.502893, df = 98) = 2.989589 × 10⁻⁷.
Because p < α, we reject H0 : β1 = 0.
Equivalently, we can compute SST = (n − 1)sy² = 99 · 872.229230 ≐ 86350.694
and r² = 0.236057, then construct the following ANOVA table:
Source of     Sum of       DF   Mean         F-Test      p-
Variation     Squares           Square       Statistic   Value
Regression    20383.689    1    20383.6885   30.28183    2.989589 × 10⁻⁷
Error         65967.005    98   673.1327
Total         86350.694
Again, we reject H0 : β1 = 0 because p < α. Notice that we obtain the same
significance probability with either test.
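Both tests can be computed in R directly from r and n. Using the values from Example 14.3:

n <- 100; r <- -0.485857
t.stat <- r*sqrt(n-2)/sqrt(1-r^2)    # t-test statistic
f.stat <- (n-2)*r^2/(1-r^2)          # F-test statistic
2*pt(-abs(t.stat),df=n-2)            # significance probability from the t-test
1-pf(f.stat,df1=1,df2=n-2)           # the same significance probability from the F-test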
Although equivalent, the t-test and F-test of H0 : β1 = 0 each enjoy
certain advantages. The former is more flexible, as it is easily adapted to
test 1-sided hypotheses. The F-test is more readily generalized to testing a
variety of hypotheses that naturally arise when studying more complicated
regression models.
14.5 Regression Diagnostics
14.6 Exercises
1. According to Stanford University Professor Claude M. Steele (Not just
a test, The Nation, May 3, 2004, page 40),
“The SAT, for example, correlates .42 with freshman grades. . .
This means that it measures about 18 percent of the charac-
teristics, whatever they are, that determine freshman grades.”
Comment on this passage. Do you agree with Professor Steele’s inter-
pretation of what r = 0.42 means?
2. Suppose that (X, Y ) have a bivariate normal distribution with param-
eters (5, 3, 1, 4, 0.5). Compute the following quantities:
(a) P(Y > 6)
(b) E(Y |X = 6.5)
(c) P(Y > 6|X = 6.5)
3. Assume that the population of all sister-brother heights has a bivariate
normal distribution and that the data in Exercise 13.5.3 were sampled
from this population. Use these data in the following:
(a) Consider the population of all sister-brother heights. Estimate
the proportion of all brothers who are at least 5′ 10′′.
(b) Suppose that Carol is 5′ 1′′. Predict her brother’s height.
(c) Consider the population of all sister-brother heights for which the
sister is 5′ 1′′. Estimate the proportion of these brothers who are
at least 5′ 10′′.
4. Assume that the population of all sister-brother heights has a bivariate
normal distribution and that the data in Exercise 13.5.2 were sampled
from this population. Use these data in the following:
(a) Compute the sample coefficient of determination, the proportion
of variation “explained” by simple linear regression.
(b) Let α = 0.05. Do these data provide convincing evidence that
knowing a sister’s height (x) helps one predict her brother’s height
(y)?
(c) Construct a 0.90-level confidence interval for the slope of the pop-
ulation regression line for predicting y from x.
(d) Suppose that you are planning to conduct a more comprehensive
study of sibling heights. Your goal is to better estimate the slope
of the population regression line for predicting y from x. If you
want to construct a 0.95-level confidence interval of length 0.1,
then how many sister-brother pairs should you plan to observe?
Hint: MSE/txx = ( 1 − r² )( sy²/sx² )/(n − 2).
5. A class of 35 students took two midterm tests. Jack missed the first
test and Jill missed the second test. The 33 students who took both
tests scored an average of 75 points on the first test, with a standard
deviation of 10 points, and an average of 64 points on the second test,
with a standard deviation of 12 points. The scatter diagram of their
scores is roughly ellipsoidal, with a correlation coefficient of r = 0.5.
Because Jack and Jill each missed one of the tests, their professor needs
to guess how each would have performed on the missing test in order
to compute their semester grades.
(a) Jill scored 80 points on Test 1. She suggests that her missing
score on Test 2 be replaced with her score on Test 1, 80 points.
What do you think of this suggestion? What score would you
advise the professor to assign?
(b) Jack scored 76 points on Test 2, precisely one standard deviation
above the Test 2 mean. He suggests that his missing score on
Test 1 be replaced with a score of 85 points, precisely one standard
deviation above the Test 1 mean. What do you think of this
suggestion? What score would you advise the professor to assign?
6. In a study of “Heredity of head form in man” (Genetica, 3:193–384,
1921), G.P. Frets reported two head measurements (in millimeters)
for each of the first two adult sons of 25 families. These data are
reproduced as Data Set 111 in A Handbook of Small Data Sets, and
can also be downloaded from the web page for this course.
First Son Second Son
Length Breadth Length Breadth
191 155 179 145
195 149 201 152
181 148 185 149
183 153 188 149
176 144 171 142
208 157 192 152
189 150 190 149
197 159 189 152
188 152 197 159
192 150 187 151
179 158 186 148
183 147 174 147
174 150 185 152
190 159 195 157
188 151 187 158
163 137 161 130
195 155 183 158
186 153 173 148
181 145 182 146
175 140 165 137
192 154 185 152
174 143 178 147
176 139 176 143
197 167 200 158
190 163 187 150
For each head, we will compute two variables:
size <- length+breadth
shape <- length-breadth
(a) Consider head size. Investigate the relation between first son head
size and second son head size. Can we reject the null hypothesis
that these variables are uncorrelated? Of the variation in second
son head size, what proportion is explained by variation in first
son head size?
(b) Consider head shape. Investigate the relation between first son
head shape and second son head shape. Can we reject the null
hypothesis that these variables are uncorrelated? Of the varia-
tion in second son head shape, what proportion is explained by
variation in first son head shape?
(c) In another family from the same era, the first adult son’s head
had a length of 195 millimeters and a breadth of 160 millimeters.
Use this information to guess the size of the second adult son’s
head.
7. In the athletics event known as the shot put, male competitors “put”
the “shot,” a 16-pound metal ball. (Female competitors use a smaller
shot.) In the United States, high school male competitors put a 12-
pound shot, then graduate to the 16-pound shot used in NCAA, US-
ATF, and IAAF competition. In its August 2002 “Stat Corner,” the
respected athletics periodical Track & Field News proclaimed an “In-
verse Relationship Between 12 & 16lb Shots:”
“A look at the accompanying all-time Top 11 lists for high
schoolers with the 12lb shot—11 because there have been 11
of them over 70 [feet]—and for U.S. men with the 16 sends
two messages to aspiring prep putters:
• If you’re not very good in high school, don’t worry about
it; few of the big guys were either.
• If you’re great in high school, that may be about as good
as you’ll ever get.
“The numbers are astounding. We’ll leave it to a technical
expert to figure out why. . . ”
The numbers follow.1 Do you agree with T&FN’s two messages?
¹Perhaps the most astounding number is Michael Carter’s prodigious heave of 81-3.50,
arguably the most formidable record in all of track and field. Carter broke an 11-year-old
record by nine feet! He went on to a sensational college career at SMU, winning the NCAA
championship and a silver medal at the 1984 Olympic Games. He then opted for a career
in professional football, becoming an All-Pro defensive lineman for the NFL Champion
San Francisco 49er’s.
ALL-TIME HIGH SCHOOL 70-FOOTERS
12 16 16–12
1. Michael Carter ’79 81-3.5 71-4.75 –9-10.75
2. Brent Noon ’90 76-2 70-5.75 –5-8.25
3. Arnold Campbell ’84 74-10.5 64-3 –10-7.5
4. Charles Moye ’87 72-8 57-1 –15-7
5. Sam Walker ’68 72-3.25 66-9.5 –5-5.75
6. Jesse Stuart ’70 71-11i 68-11.5i –2-11.5
7. Roger Roesler ’96 71-2 61-6.25 –11-7.75
8. Kevin Bookout ’02 71-1.5 (too early still)
9. Doug Lane ’68 70-11 66-11.25 –3-11.75
10. Dennis Black ’91 70-7 68-10 –1-9
11. Ron Semkiw ’72 70-1.75 70-0.5 –0-1.25
ALL-TIME U.S. TOP 11
16 12 16–12
1. Randy Barnes ’90 75-10.25 66-9.5 +9-0.75
2. Brian Oldfield ’75 75-0 58-10 +16-2
3. John Brenner ’87 73-10.75 64-5.5 +9-5.25
4. Adam Nelson ’02 73-10.25 63-2.25 +10-8
5. Kevin Toth ’02 72-9.75 58-11 +13-10.75
6. George Woods ’74 72-3i 60-11 +11-4
6. Dave Laut ’82 72-3 65-9 +6-6
6. John Godina ’99 72-3 64-1.25 +8-1.75
9. Gregg Trafalis ’92 72-1.5 57-0 +15-1.5
10. Terry Albritton ’76 71-8.5 67-9 +3-11.5
11. Andy Bloom ’00 71-7.25 64-2.5 +7-4.74
Chapter 15
Simulation-Based Inference
15.1 Termite Foraging Revisited
Appendix R
A Statistical Programming
Language
R.1 Introduction
R.1.1 What is R?
In the 1970s, researchers at AT&T Bell Laboratories developed S, a high-
level statistical programming language that became popular with academic
statisticians. Bell Labs subsequently licensed S to a company that added
a variety of capabilities, creating the commercial product S-Plus. R is yet
another implementation of S. The R Project for Statistical Computing is an
ongoing effort by a group of statisticians to extend and improve R.
R is free, Open Source software, that can be downloaded in compiled or
source code form. It runs on a variety of UNIX platforms and similar systems
(including FreeBSD and Linux), Windows, and MacOS. The primary web
site for information about R is:
http://www.r-project.org/
R.1.2 Why Use R?
This question encompasses several issues. First, there is the question of
what role statistical software is to play in the course. Introductory statistics
courses may use software in different ways. Once upon a time, many in-
structors (myself included) avoided using software in the first semester. The
rationale for this approach is that one should begin one’s study of statis-
tics by focussing on basic concepts and learn what the computer is doing
before one uses the computer to do it. Unfortunately, this approach con-
demns one to analyzing fairly trivial data sets, and even then calculating by
hand and/or calculator quickly becomes extremely tedious. As a result, this
approach has fallen from favor.
At the other end of the spectrum, many introductory statistics courses
use statistics packages like Minitab, SPSS, or SAS to analyze data. Such
packages are extremely useful and every statistician should have some fa-
miliarity with at least one such package. However, if one begins to rely on
such packages too quickly, the package may be viewed as a black box and
the student may never really learn what that black box is doing.
There are many different ways to introduce the subject of statistics, and
no one way is best for all students. This book is intended for students
who want to understand what is going on inside the black box procedures
available in so many statistics packages. This intention determines the use
that we shall make of the computer. We will strive for an intermediate
approach, in which the computer is used to relieve the tedium of calculation,
but in which the student is obliged to tell the computer what intermediate
steps need to be performed in order to obtain the desired output. Such an
approach requires a high-level, interactive programming language. Several
such languages are available, but S-Plus and R have achieved the greatest
popularity within the statistics community. Acquiring some familiarity with
S-Plus and/or R will benefit students who continue to study statistics and/or
analyze data in the future.
Why R instead of S-Plus? For most of the examples in this book, R
and S-Plus are interchangeable—the same commands work for both. But
R has two compelling advantages. First, R is available for certain operating
systems for which S-Plus is not, e.g., MacOS. Second, R is free! As a result,
students who begin using R in this course can be confident that they will
always have access to R.
R.1.3 Installing R
To efficiently download software, documentation, etc., you should use a
nearby CRAN (Comprehensive R Archive Network) mirror site, e.g., Statlib
at Carnegie Mellon University:
http://lib.stat.cmu.edu/R/CRAN/
Most students will want to install R in compiled form by downloading ex-
ecutable binary files. On-line documentation and several manuals are in-
cluded, although you may find it easier to get started using the examples
provided in this book.
R.1.4 Learning About R
R is far too complicated to learn in one (or even several) lessons. I doubt
that any one person—including the R developers—knows everything about
R! But don’t be intimidated: the best way to learn R is to just start using R.
And, the best time to use R is when you’re trying to accomplish a specific
task. Try to learn bits and pieces of R as they’re introduced in the text
and/or you develop an interest in a specific capability.
Of course, it’s hard to learn anything without documentation. The ma-
terial in this book, both the examples scattered throughout various chapters
to illustrate various statistical methods and the tutorial material in this ap-
pendix, is a good way to get started. Once you know the name of one R
function, you can learn more about it and discover related functions using
various utilities included in your R installation. If you’re using the Windows
version of R then you can start by exploring the Help menu in RGui, which
will lead you to manuals, search utilities, and web pages. I tend to use R
functions (text) for help on specific functions.
R.2 Using R
R is an interpreted language, designed to be used interactively. The user is
prompted to issue a command as follows:
>
The cursor-up key allows the user to recall previous commands.
Except for a few standard arithmetic operations, R accomplishes things
by executing various functions. For example, to exit R one executes the quit
function:
> q()
When you quit, R will inquire if you want to “Save workspace image?” If
you answer yes (y), then all of the objects in your current workspace, e.g.,
any data sets and functions that you created, will be saved and restored the
next time that you start R.
R.2.1 Vectors
R can store and manipulate a variety of data objects, the most basic of which
is a vector. In R a vector is an ordered list of numbers, i.e., a list of numbers
with a designated first element, second element, etc. Vectors can be created
in various ways. In each of the following examples, the created vector is
assigned the name x.
Note that R has a large number of built-in functions. Assigning their
names to user-created objects will mask the built-in functions. For this
reason certain simple names, e.g., c and t, should be avoided.
Example R.1 To enter a list of numbers from the keyboard, use the
concatenate function:
> x <- c(20,5,15,18,5,13,1)
Notice that this can be done recursively, e.g.,
> x <- c(20,5,15)
> x <- c(x,18,5)
> x <- c(x,13,1)
To display the vector, type its name:
> x
Just typing
> c(20,5,15,18,5,13,1)
causes R to display the vector without saving it for future use.
Example R.2 To read a list of numbers from an ascii text file, say
data.txt, use the scan function. In most situations, you will need to spec-
ify the complete path of data.txt. How one does this depends on which
operating system your computer uses.
For example, suppose that you are using the Windows version of R and
data.txt resides in the directory c:\Courses\Math351. Then the following
command will read the contents of data.txt into the vector x:
> x <- scan("c:\\Courses\\Math351\\data.txt")
Notice that the single backslashes in the path name must be entered as double
backslashes in R.
Example R.3 Several functions are useful for creating sequences of
numbers, e.g.,
> x <- seq(from=1,to=15,by=2)
> x <- rep(1,times=10)
Consecutive integers are especially easy, e.g.,
> x <- 11:20
Example R.4 R has a variety of functions for generating pseudoran-
dom samples.1
To draw 10 numbers from a uniform distribution on (0, π):
> x <- runif(10,min=0,max=pi)
To draw 20 numbers from a normal distribution with mean 5 and stan-
dard deviation 1.5:
> x <- rnorm(20,mean=5,sd=1.5)
To simulate rolling a fair die 30 times:
> die <- 1:6
> x <- sample(x=die,size=30,replace=T)
A subset of a vector can be identified by a vector of index values. For
example, to extract the 2nd, 3rd, and 5th elements of the vector x, one might
type:
> k <- c(2,3,5)
> x[k]
To extract the other elements, just type:
> x[-k]
One may wish to rearrange the elements, e.g.,
> y <- sort(x)
The preceding command is equivalent to
> y <- x[order(x)]
¹The precise meanings of the phrases that follow are explained in Chapters 3–5.
R.2.2 R is a Calculator!
R provides a variety of arithmetical operations and mathematical functions.
These operations/functions have been vectorized, i.e., they work on entire
vectors, not just individual numbers. Several examples follow.
First, let’s create two vectors:
> x <- 10:20
> y <- seq(from=1.8,to=2.2,length=length(x))
Now, each of the following is a valid R command:
> x+100
> x-20
> x*10
> x/10
> x^2
> sqrt(x)
> exp(x)
> log(x)
> x+y
> x-y
> x*y
> x/y
> x^y
R.2.3 Some Statistics Functions
R provides hundreds of functions that perform or facilitate a variety of statis-
tical analyses. Most R functions are not used in this book. (You may enjoy
discovering and using some of them on your own initiative.) Tables R.1 and
R.2 list some of the R functions that are used.
R.2.4 Creating New Functions
The full power of R emerges when one writes one’s own functions. To il-
lustrate, I’ve written a short function named Edist that computes the Eu-
clidean distance between two vectors. When I type Edist, R displays the
function:
> Edist
Function Distribution Section
pgeom Geometric 4.2
phyper Hypergeometric 4.2
pbinom Binomial 4.4
punif Uniform 5.3
pnorm Normal 5.4
pchisq Chi-Squared 5.5
pt Student’s t 5.5
pf Fisher’s F 5.5
Table R.1: Some R functions that evaluate the cumulative distribution func-
tion (cdf) for various families of probability distributions. The prefix p
designates a cdf function; the remainder of the function name specifies the
distribution. For the analogous quantile functions, use the prefix q, e.g.,
qnorm. To evaluate the analogous probability mass function (pmf) or prob-
ability density function (pdf), use the prefix d, e.g., dnorm. To generate a
pseudorandom sample, use the prefix r, e.g., rnorm.
function(u,v){
return(sqrt(sum((u-v)^2)))
}
>
Edist has two arguments, u and v, which it interprets as vectors of equal
length. Edist computes the vector of differences, squares each difference,
sums the squares, then takes the square root of the sum to obtain the dis-
tance. Finally, it returns the computed distance. I could have written Edist
as a sequence of intermediate steps, but there’s no need to do so.
I might have created Edist in any of the following ways:
Example R.5
> Edist <- function(u,v){ return(sqrt(sum((u-v)^2))) }
>
Example R.6
> Edist <- function(u,v){
+ return(sqrt(sum((u-v)^2)))
+ }
>
Notice that R recognizes that the command creating Edist is not complete
and provides continuation prompts (+) until it is.

Function         Used to Compute/Display
sum              sample sum
mean             sample mean
median           sample median
var              sample variance
quantile         sample quantile(s)
summary          several useful quantities
plot.ecdf        empirical cdf
boxplot          box plot(s)
qqnorm           normal probability plot
plot, density    kernel estimate of pdf

Table R.2: Some R functions that compute or display useful information
about one or more univariate samples. See Chapter 7.
Examples R.5 and R.6 are useful for very short functions, but not for
anything complicated. Be warned: if you mistype and R cannot interpret
what you did type, then R ignores the command and you have to retype it.
Using the cursor-up key to recall what you typed may help, but for anything
complicated it is best to create a permanent file that you can edit. This can
be done within R or outside of R.
Example R.7 To create moderately complicated functions in R, use
the edit function. For example, I might start by typing
> Edist <- function(u,v){u-v}
This creates an R object called Edist, but not the Edist that we want—this
Edist returns the vector of differences.2 So, I use edit to modify Edist.3
This process is initiated with the command
> Edist <- edit(Edist)
After making and saving the desired changes to Edist, I close the editor,
thereby returning control to R. R checks the edited version of Edist: if R
can interpret the edited version, then R replaces the previous version with
the edited version; if R cannot interpret the edited version, e.g., because of
typographical errors, then R issues an error message and retains the previous
version. Fortunately, R also retains a temporary version of whatever modi-
fications I attempted to make, so I have another chance at getting it right.
To access the temporary version, I type
> Edist <- edit()
Note that I should not retype
> Edist <- edit(Edist)
as this command returns to the original unedited version and discards what-
ever changes I attempted to make.
Example R.8 Objects created in R can be lost, e.g., if one forgets to
save one’s workspace image when one quits R. For this reason, I prefer to
create my R functions outside of R. To accomplish this, I first use a text
editor to create an ascii text file that contains whatever R commands I want
to execute, e.g., the command that creates Edist. For example, I might use
the Windows notepad editor to create an ascii text file that contains the
following:
Edist <- function(u,v)
{
return(sqrt(sum((u-v)^2)))
}
²Using the return function is good practice, but often unnecessary. An R function will
automatically return the last quantity that it computes.
³Each installation has a default editor. For the Windows operating system, the default
editor is the Windows notepad editor.
Let’s suppose that I call this file myRfcns.txt and save it in the directory
c:\Courses\Math351. Then, I can start R and use the source function to
execute the commands in myRfcns.txt:
> source("c:\\Courses\\Math351\\myRfcns.txt")
To check that I succeeded in creating Edist, I can produce a list of all the
objects in my workspace by typing
> objects()
R.2.5 Exploring Bivariate Normal Data
In Sections 13.2 and 14.1, we explored the structure of bivariate normal data
using five R functions:
binorm.ellipse
binorm.sample
binorm.estimate
binorm.scatter
binorm.regress
These functions are not part of your R installation—I created them for this
book/course. To obtain them, download the ascii text file binorm.R from
the web page for this book/course, then source its contents into your R
workspace. For example, suppose that you have a Windows operating system
and that you save binorm.R in the directory c:CoursesMath351. Then
the following command instructs R to execute the commands in binorm.R
that create the five binorm functions:
> source("c:CoursesMath351binorm.R")
Tables R.3–R.7 reproduce the commands in binorm.R. Notice that the
# symbol is used to insert comments, as R ignores lines that begin with #.
R.2.6 Simulating Termite Foraging
Sections 1.1.3 and 15.1 describe a study of termite foraging behavior. A test
statistic, T, assumes a small value when each subsequently attacked roll is
near a previously attacked roll. Thus, small values of T are evidence against
a null hypothesis of random foraging, under which each unattacked roll is
equally likely to be attacked next. To compute a significance probability
for a particular plot, e.g., Plot 20 depicted in Figure 1.1, we require the
probability distribution of T. This discrete distribution cannot be calculated
by the methods of Chapter 4; instead, we resort to computer simulation.
Dana Ranschaert (a former student) and I created an R function, forage,
that approximates the pmf of T by simulation. To obtain forage, down-
load the ascii text file termites.R from the web page for this book/course,
then source its contents into your R workspace. Table R.8 reproduces the
commands in termites.R.
To use forage, you must specify four arguments:
1. initial, a vector that contains the numbers of the initially attacked
rolls. The 5 × 5 rolls are numbered as follows:
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
16 17 18 19 20
21 22 23 24 25
In Figure 1.1, the vector of initially attacked rolls is c(3,5).
2. nsubsequent, the number of subsequently attacked rolls. In Figure 1.1,
there are 13 such rolls.
3. nsim, the number of simulated foraging histories. In the original study,
each plot was simulated 1 million times.
4. maxT, the largest value of T to be tabulated.
For example, the command
> pmf20 <- forage(c(3,5),13,10000,30)
computes a matrix with 30−13+1 rows and 2 columns. The first column of
pmf20 contains values of T, from 13 to 30. Corresponding to each value of
T, the corresponding number in the second column of pmf20 tabulates how
many of the 10000 simulated foraging histories produced that value of T.
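One might then convert the tabulated counts into an approximate significance probability. The sketch below assumes an observed value t.obs of T (the value 20 is made up for illustration) and divides by the number of simulated histories, since histories with T larger than maxT are not tabulated:

t.obs <- 20                                  # hypothetical observed value of T
sum(pmf20[pmf20[,1] <= t.obs, 2])/10000      # proportion of simulated histories with T <= t.obs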
binorm.ellipse <- function(pop) {
#
# This function plots the concentration ellipse of a bivariate
# normal distribution. The 5 bivariate normal parameters are
# specified in the vector pop in the following order:
# mean of X, mean of Y, variance of X, variance of Y,
# correlation of (X,Y).
# For example: pop <- c(0,0,1,4,.5)
#
n <- 628
m <- matrix(pop[1:2],nrow=2)
off <- pop[5] * sqrt(pop[3]*pop[4])
C <- matrix(c(pop[3],off,off,pop[4]),nrow=2)
E <- eigen(C,symmetric=T)
a <- 0:n/100
X <- cbind(cos(a),sin(a))
X <- X %*% diag(sqrt(E$values)) %*% t(E$vectors)
X <- X + matrix(rep(1,n+1),ncol=1) %*% t(m)
xmin <- min(X[,1])
xmax <- max(X[,1])
ymin <- min(X[,2])
ymax <- max(X[,2])
dif <- max(xmax-xmin,ymax-ymin)
xlim <- c(m[1]-dif,m[1]+dif)
ylim <- c(m[2]-dif,m[2]+dif)
par(pty="s")
plot(X,type="l",xlab="x",ylab="y",xlim=xlim,ylim=ylim)
title("Concentration Ellipse")
}
Table R.3: The command that creates the R function binorm.ellipse, de-
scribed in Section 13.2. This command is included in the file binorm.R.
binorm.sample <- function(pop,n) {
#
# This function returns a sample of n observations drawn from a
# bivariate normal distribution. The 5 bivariate normal
# parameters are specified in the vector pop in the following
# order: mean of X, mean of Y, variance of X, variance of Y,
# correlation of (X,Y). For example: pop <- c(0,0,1,4,.5)
# The sample is returned in the form of an n-by-2 data matrix,
# each row of which is an observed value of (X,Y).
#
m <- matrix(pop[1:2],nrow=2)
off <- pop[5] * sqrt(pop[3]*pop[4])
C <- matrix(c(pop[3],off,off,pop[4]),nrow=2)
E <- eigen(C,symmetric=T)
Data <- matrix(rnorm(2*n),nrow=n)
Data <- Data %*% diag(sqrt(E$values)) %*% t(E$vectors)
Data + matrix(rep(1,n),nrow=n) %*% t(m)
}
Table R.4: The command that creates the R function binorm.sample, de-
scribed in Section 13.2. This command is included in the file binorm.R.
binorm.estimate <- function(Data) {
#
# This function estimates bivariate normal parameters from a
# bivariate data matrix. Each row of the n-by-2 matrix Data
# contains a single observation of (X,Y). The function returns
# a vector of 5 estimated parameters: mean of X, mean of Y,
# variance of X, variance of Y, correlation of (X,Y).
#
n <- nrow(Data)
m <- c(sum(Data[,1]),sum(Data[,2]))/n
v <- c(var(Data[,1]),var(Data[,2]))
z1 <- (Data[,1]-m[1])/sqrt(v[1])
z2 <- (Data[,2]-m[2])/sqrt(v[2])
r <- sum(z1*z2)/(n-1)
c(m,v,r)
}
Table R.5: The command that creates the R function binorm.estimate,
described in Section 13.2. This command is included in the file binorm.R.
binorm.scatter <- function(Data) {
#
# This function produces a scatter diagram of the bivariate data
# contained in the n-by-2 data matrix Data. It also superimposes
# the sample concentration ellipse.
#
n <- 628
xmin <- min(Data[,1])
xmax <- max(Data[,1])
xmid <- (xmin+xmax)/2
ymin <- min(Data[,2])
ymax <- max(Data[,2])
ymid <- (ymin+ymax)/2
dif <- max(xmax-xmin,ymax-ymin)/2
xlim <- c(xmid-dif,xmid+dif)
ylim <- c(ymid-dif,ymid+dif)
par(pty="s")
plot(Data,xlab="x",ylab="y",xlim=xlim,ylim=ylim)
title("Scatter Diagram")
v <- binorm.estimate(Data)
m <- matrix(v[1:2],nrow=2)
off <- v[5] * sqrt(v[3]*v[4])
C <- matrix(c(v[3],off,off,v[4]),nrow=2)
E <- eigen(C,symmetric=T)
a <- 1:n/100
Y <- cbind(cos(a),sin(a))
Y <- Y %*% diag(sqrt(E$values)) %*% t(E$vectors)
Y <- Y + matrix(rep(1,n),nrow=n) %*% t(m)
lines(Y)
}
Table R.6: The command that creates the R function binorm.scatter, de-
scribed in Section 13.2. This command is included in the file binorm.R.
binorm.regress <- function(Data) {
#
# This function produces a scatter diagram of the bivariate data
# contained in the n-by-2 data matrix Data. It also superimposes
# the sample concentration ellipse and the regression line.
#
n <- 628
xmin <- min(Data[,1])
xmax <- max(Data[,1])
xmid <- (xmin+xmax)/2
ymin <- min(Data[,2])
ymax <- max(Data[,2])
ymid <- (ymin+ymax)/2
dif <- max(xmax-xmin,ymax-ymin)/2
xlim <- c(xmid-dif,xmid+dif)
ylim <- c(ymid-dif,ymid+dif)
par(pty="s")
plot(Data,xlab="x",ylab="y",xlim=xlim,ylim=ylim)
title("Regression Line")
v <- binorm.estimate(Data)
m <- matrix(v[1:2],nrow=2)
off <- v[5] * sqrt(v[3]*v[4])
C <- matrix(c(v[3],off,off,v[4]),nrow=2)
E <- eigen(C,symmetric=T)
a <- 0:n/100
Y <- cbind(cos(a),sin(a))
Y <- Y %*% diag(sqrt(E$values)) %*% t(E$vectors)
Y <- Y + matrix(rep(1,n+1),ncol=1) %*% t(m)
lines(Y)
x <- xlim[1] + (2*dif*(0:n))/n
slope <- v[5] * sqrt(v[4]/v[3])
y <- v[2] + slope*(x-v[1])
Y <- cbind(x,y)
Y <- Y[Y[,2] < ymax,]
Y <- Y[Y[,2] > ymin,]
lines(Y)
}
Table R.7: The command that creates the R function binorm.regress, de-
scribed in Section 14.1. This command is included in the file binorm.R.
forage <- function(initial,nsubsequent,nsim,maxT) {
#
# This function simulates nsim termite foraging histories.
# initial is the vector of initially attacked rolls;
# nsubsequent is the number of subsequently attacked rolls;
# nsim is the number of simulated foraging histories.
# The function returns a matrix in which the first column
# contains values of the test statistic T (from nsubsequent
# to maxT) and the second column contains the corresponding
# number of histories that produced that value of T.
#
v <- rep(1:5,5)
w <- rep(1:5,rep(5,5))
D <- cbind(v,w)
D <- (diag(25) - matrix(1/25,25,25)) %*% D
D <- D %*% t(D)
v <- diag(D)
H <- diag(v) %*% matrix(1,25,25)
D <- H+t(H)-2*D
D[D<0] <- 0
H <- matrix(100,25,25)
for (rowi in 2:25)
for (colj in 1:(rowi-1)) {
H[rowi,colj] <- 0
}
v <- 1:length(initial)
w <- 1:(length(initial)+nsubsequent)
pmf <- rep(0,maxT)
for (isim in 1:nsim){
rolls <- c(initial, sample(x=(1:25)[-initial],
size=nsubsequent, replace=F))
D0 <- D[rolls,rolls] + H[w,w]
distance <- apply(D0,1,min)
total <- round(sum(distance[-v]))
if (total < maxT+0.5) {
pmf[total] <- pmf[total]+1
}
}
return(cbind(nsubsequent:maxT,pmf[-(1:(nsubsequent-1))]))
}
Table R.8: The command that creates the R function forage, described in
Section 15.1. This command is included in the file termites.R.
More Related Content

PDF
Cambridge University - Proficiency in English
PDF
Successful writing proficiency
PDF
NUS - Master Degree Certification - Gauthameswar Anandhasayanam
PDF
Certificate of Proficiency in English-University of Cambridge
PDF
b.tech degree certificate
PPTX
iot_application_casestudies.pptx
PDF
Recommendation Letter
PDF
Digitación flauta
Cambridge University - Proficiency in English
Successful writing proficiency
NUS - Master Degree Certification - Gauthameswar Anandhasayanam
Certificate of Proficiency in English-University of Cambridge
b.tech degree certificate
iot_application_casestudies.pptx
Recommendation Letter
Digitación flauta

Similar to An Introduction to Statistical Inference and Its Applications.pdf (20)

PDF
toaz.info-instructor-solution-manual-probability-and-statistics-for-engineers...
PDF
Manual Solution Probability and Statistic Hayter 4th Edition
PDF
Probability and Statistics by sheldon ross (8th edition).pdf
PDF
Mth201 COMPLETE BOOK
PDF
biometry MTH 201
PDF
probability_stats_for_DS.pdf
PDF
Applied Statistics With R
PDF
Navarro & Foxcroft (2018). Learning statistics with jamovi (1).pdf
PDF
Introductory Statistics Explained.pdf
PDF
Lecture notes on planetary sciences and orbit determination
PDF
Thats How We C
PDF
Statistics for economists
PDF
book.pdf
PDF
Methods for Applied Macroeconomic Research.pdf
PDF
Social Media Mining _indian edition available.pdf
PDF
Social Media Mining _indian edition available.pdf
PDF
Ric walter (auth.) numerical methods and optimization a consumer guide-sprin...
PDF
phd_unimi_R08725
PDF
MLBOOK.pdf
DOCX
Go to TOCStatistics for the SciencesCharles Peters.docx
toaz.info-instructor-solution-manual-probability-and-statistics-for-engineers...
Manual Solution Probability and Statistic Hayter 4th Edition
Probability and Statistics by sheldon ross (8th edition).pdf
Mth201 COMPLETE BOOK
biometry MTH 201
probability_stats_for_DS.pdf
Applied Statistics With R
Navarro & Foxcroft (2018). Learning statistics with jamovi (1).pdf
Introductory Statistics Explained.pdf
Lecture notes on planetary sciences and orbit determination
Thats How We C
Statistics for economists
book.pdf
Methods for Applied Macroeconomic Research.pdf
Social Media Mining _indian edition available.pdf
Social Media Mining _indian edition available.pdf
Ric walter (auth.) numerical methods and optimization a consumer guide-sprin...
phd_unimi_R08725
MLBOOK.pdf
Go to TOCStatistics for the SciencesCharles Peters.docx
Ad

More from Sharon Collins (20)

PDF
Creative Writing Prompts For Teens Social Icons
PDF
Owl Purdue Apa Apa Formatting And Style Guid
PDF
Science Research Paper Introduction Example Writ
PDF
Problem - Solution Essay Outline Introduction A. Ch
PDF
Staff Paper For Music Notation Stock Photo - Alamy
PDF
Can I Hire Someone To Write My Essay Read Thi
PDF
10 Formal Outline Templates - Free Sample Example
PDF
How To Write Critical Analysis Essay Wit
PDF
Opinion Writing For 2Nd Graders. Online assignment writing service.
PDF
Journal Article Methods NSE Communication Lab
PDF
Good Authors To Write A Research Paper About - Writ
PDF
125 Mylintys Tėvai Citatos Ir Posakiai Apie Šeimą Ir Para
PDF
Good Comparative Essay Thesis. Online assignment writing service.
PDF
Essay Writing Competition Lawschole LEXPEEPS .IN
PDF
Paper Mate Write Bros Black Medium Point Pens - Shop
PDF
HowToWriteASeminar. Online assignment writing service.
PDF
An Introduction to Statistical Inference and Its Applications.pdf

Chapter 1
Experiments

Statistical methods have proven enormously valuable in helping scientists interpret the results of their experiments—and in helping them design experiments that will produce interpretable results. In a quite general sense, the purpose of statistical analysis is to organize a data set in ways that reveal its structure. Sometimes this is so easy that one does not think that one is doing "statistics;" sometimes it is so difficult that one seeks the assistance of a professional statistician.

This is a book about how statisticians draw conclusions from experimental data. Its primary goal is to introduce the reader to an important type of reasoning that statisticians call "statistical inference." Rather than provide a superficial introduction to a wide variety of inferential methods, we will concentrate on fundamental concepts and study a few methods in depth.

Although statistics can be studied at many levels with varying degrees of sophistication, there is no escaping the simple fact that statistics is a mathematical discipline. Statistical inference rests on the mathematical foundation of probability. The better one desires to understand statistical inference, the more that one needs to know about probability. Accordingly, we will devote several chapters to probability before we begin our study of statistics. To motivate the reader to embark on this program of study, the present chapter describes the important role that probability plays in scientific investigation.

1.1 Examples

This section describes several scientific experiments. Each involves chance variation in a different way. The common theme is that chance variation cannot be avoided in scientific experimentation.

1.1.1 Spinning a Penny

In August 1994, while attending the 15th International Symposium on Mathematical Programming in Ann Arbor, MI, I read an article in which the author asserted that spinning (as opposed to tossing/flipping) a typical penny is not fair, i.e., that Heads and Tails are not equally likely to result. Specifically, the author asserted that the chance of obtaining Heads by spinning a penny is about 30%.[1]

I was one of several people in a delegation from Rice University. That evening, we ended up at a local Subway restaurant for dinner and talk turned to whether or not spinning pennies is fair. Before long we were each spinning pennies and counting Heads. At first it seemed that about 70% of the spins were Heads, but this proved to be a temporary anomaly. By the time that we tired of our informal experiment, our results seemed to confirm the plausibility of the author's assertion.

I subsequently used penny-spinning as an example in introductory statistics courses, each time asserting that the chance of obtaining Heads by spinning a penny is about 30%. Students found this to be an interesting bit of trivia, but no one bothered to check it—until 2001.

In the spring of 2001, three students at the College of William & Mary spun pennies, counted Heads, and obtained some intriguing results. For example, Matt, James, and Sarah selected one penny that had been minted in the year 2000 and spun it 300 times, observing 145 Heads. This is very nearly 50% and the discrepancy might easily be explained by chance variation—perhaps spinning their penny is fair! They tried different pennies and obtained different percentages. Perhaps all pennies are not alike! (Pennies minted before 1982 are 95% copper and 5% zinc; pennies minted after 1982 are 97.5% zinc and 2.5% copper.) Or perhaps the differences were due to chance variation.

Were one to undertake a scientific study of penny spinning, there are many questions that one might ask. Here are several:

• Choose a penny. What is the chance of obtaining Heads by spinning that penny? (This question is the basis for Exercise 1 at the end of this chapter.)

• Choose two pennies. Are they equally likely to produce Heads when spun?

• Choose several pennies minted before 1982 and several pennies minted after 1982. As groups, are pre-1982 pennies and post-1982 pennies equally likely to produce Heads when spun?

[1] Years later, I have been unable to discover what I read or who wrote it. It seems to be widely believed that the chance is less than 50%. The most extreme assertion that I have discovered is by R. L. Graham, D. E. Knuth, and O. Patashnik (Concrete Mathematics, Second Edition, Addison-Wesley, 1994, page 401), who claimed that the chance is approximately 10% "when you spin a newly minted U.S. penny on a smooth table." A fairly comprehensive discussion of "Flipping, spinning, and tilting coins" can be found at http://www.dartmouth.edu/~chance/chance_news/recent_news/chance_news_11.02.html#item2, in which various individuals emphasize that the chance of Heads depends on such factors as the year in which the penny was minted, the surface on which the penny is spun, and the quality of the spin. For pennies minted in the 1960s, one individual reported 1878 Heads in 5520 spins, about 34%.

1.1.2 The Speed of Light

According to Albert Einstein's special theory of relativity, the speed of light through a vacuum is a universal constant c. Since 1974, that speed has been given as c = 299,792.458 kilometers per second.[2] Long before Einstein, however, philosophers had debated whether or not light is transmitted instantaneously and, if not, at what speed it moved. In this section, we consider Albert Abraham Michelson's famous 1879 experiment to determine the speed of light.[3]

[2] Actually, a second is defined to be 9,192,631,770 periods of radiation from cesium-133 and a kilometer is defined to be the distance travelled by light through a vacuum in 1/299792458 seconds!

[3] A. A. Michelson (1880). Experimental determination of the velocity of light made at the U.S. Naval Academy, Annapolis. Astronomical Papers, 1:109–145. The material in this section is taken from R. J. MacKay and R. W. Oldford (2000), Scientific method, statistical method and the speed of light, Statistical Science, 15:254–278.

Aristotle believed that light "is not a movement" and therefore has no speed. Francis Bacon, Johannes Kepler, and René Descartes believed that light moved with infinite speed, whereas Galileo Galilei thought that its speed was finite. In 1638 Galileo proposed a terrestrial experiment to resolve the dispute, but two centuries would pass before this experiment became technologically practicable. Instead, early determinations of the speed of light were derived from astronomical data.
The first empirical evidence that light is not transmitted instantaneously was presented by the Danish astronomer Ole Römer, who studied a series of eclipses of Io, Jupiter's largest moon. In September 1676, Römer correctly predicted a 10-minute discrepancy in the time of an impending eclipse. He argued that this discrepancy was due to the finite speed of light, which he estimated to be about 214,000 kilometers per second.

In 1729, James Bradley discovered an annual variation in stellar positions that could be explained by the earth's motion if the speed of light was finite. Bradley estimated that light from the sun took 8 minutes and 12 seconds to reach the earth and that the speed of light was 301,000 kilometers per second. In 1809, Jean-Baptiste Joseph Delambre used 150 years of data on eclipses of Jupiter's moons to estimate that light travels from sun to earth in 8 minutes and 13.2 seconds, at a speed of 300,267.64 kilometers per second.

In 1849, Hippolyte Fizeau became the first scientist to estimate the speed of light from a terrestrial experiment, a refinement of the one proposed by Galileo. An accurately machined toothed wheel was spun in front of a light source, automatically covering and uncovering it. The light emitted in the gaps between the teeth travelled 8633 meters to a fixed flat mirror, which reflected the light back to its source. The returning light struck either a tooth or a gap, depending on the wheel's speed of rotation. By varying the speed of rotation and observing the resulting image from reflected light beams, Fizeau was able to measure the speed of light. In 1851, Leon Foucault further refined Galileo's experiment, replacing Fizeau's toothed wheel with a rotating mirror.

Michelson further refined Foucault's experimental setup. A precise account of the experiment is beyond the scope of this book, but MacKay's and Oldford's account of how Michelson produced each of his 100 measurements of the speed of light provides some sense of what was involved. More importantly, their account reveals the multiple ways in which Michelson's measurements were subject to error.

1. The distance |RM| from the rotating mirror to the fixed mirror was measured five times, each time allowing for temperature, and the average used as the "true distance" between the mirrors for all determinations.

2. The fire for the pump was started about a half hour before measurement began. After this time, there was sufficient pressure to begin the determinations.

3. The fixed mirror M was adjusted. . . and the heliostat placed and adjusted so that the Sun's image was directed at the slit.

4. The revolving mirror was adjusted on two different axes. . . .

5. The distance |SR| from the revolving mirror to the crosshair of the eyepiece was measured using the steel tape.

6. The vertical crosshair of the eyepiece of the micrometer was centred on the slit and its position recorded in terms of the position of the screw.

7. The electric tuning fork was started. The frequency of the fork was measured two or three times for each set of observations.

8. The temperature was recorded.

9. The revolving mirror was started. The eyepiece was set approximately to capture the displaced image. If the image did not appear in the eyepiece, the mirror was inclined forward or back until it came into sight.

10. The speed of rotation of the mirror was adjusted until the image of the revolving mirror came to rest.

11. The micrometer eyepiece was moved by turning the screw until its vertical crosshair was centred on the return image of the slit. The number of turns of the screw was recorded. The displacement is the difference in the two positions. To express this as the distance |IS| in millimetres the measured number of turns was multiplied by the calibrated number of millimetres per turn of the screw.

12. Steps 10 and 11 were repeated until 10 measurements of the displacement |IS| were made.

13. The rotating mirror was stopped, the temperature noted and the frequency of the electric fork was determined again.

Michelson used the procedure described above to obtain 100 measurements of the speed of light in air. Each measurement was computed using the average of the 10 measured displacements in Step 12. These measurements, reported in Table 1.1, subsequently were adjusted for temperature and corrected by a factor based on the refractive index of air. Michelson reported the speed of light in a vacuum as 299,944 ± 51 kilometers per second.

     50  −60  100  270  130   50  150  180  180   80
    200  180  130 −150  −40   10  200  200  160  160
    160  140  160  140   80    0   50   80  100   40
     30  −10   10   80   80   30    0  −10  −40    0
     80   80   80   60  −80  −80 −180   60  170  150
     80  110   50   70   40   40   50   40   40   40
     90   10   10   20    0  −30  −40  −60  −50  −40
    110  120   90   60   80  −80   40   50   50  −20
     90   40  −20   10  −40   10  −10   10   10   50
     70   70   10  −60   10  140  150    0   10   70

Table 1.1: Michelson's 100 unadjusted measurements of the speed of light in air. Add 299,800 to obtain measurements in units of kilometers per second.
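It may help to see how measurements such as these can be examined in R, the statistical programming language described in Appendix R. The sketch below is not part of Michelson's analysis; it simply enters the values from Table 1.1 under a variable name of our choosing and summarizes them. (R's built-in morley data frame appears to contain the same 100 runs, recorded as kilometers per second minus 299,000.)

    # Michelson's 100 unadjusted measurements from Table 1.1,
    # recorded as kilometers per second minus 299,800.
    michelson <- c( 50, -60, 100, 270, 130,  50, 150, 180, 180,  80,
                   200, 180, 130,-150, -40,  10, 200, 200, 160, 160,
                   160, 140, 160, 140,  80,   0,  50,  80, 100,  40,
                    30, -10,  10,  80,  80,  30,   0, -10, -40,   0,
                    80,  80,  80,  60, -80, -80,-180,  60, 170, 150,
                    80, 110,  50,  70,  40,  40,  50,  40,  40,  40,
                    90,  10,  10,  20,   0, -30, -40, -60, -50, -40,
                   110, 120,  90,  60,  80, -80,  40,  50,  50, -20,
                    90,  40, -20,  10, -40,  10, -10,  10,  10,  50,
                    70,  70,  10, -60,  10, 140, 150,   0,  10,  70)
    speed <- michelson + 299800   # measurements of the speed of light in air, in km/s
    mean(speed)                   # a typical measurement
    sd(speed)                     # size of the chance variation between measurements
    hist(speed, main = "Michelson, 1879",
         xlab = "measured speed of light in air (km/s)")

The mean summarizes a typical measurement, while the standard deviation summarizes the chance variation that Michelson, despite his precautions, could not remove.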
1.1.3 Termite Foraging Behavior

In the mid-1980s, Susan Jones was a USDA entomologist and a graduate student in the Department of Entomology at the University of Arizona. Her dissertation research concerned the foraging ecology of subterranean termites in the Sonoran Desert. Her field studies were conducted on the Santa Rita Experimental Range, about 40 kilometers south of Tucson, AZ:

"The foraging activity of H. aureus[4] was studied in 30 plots, each consisting of a grid (6 by 6 m) of 25 toilet-paper rolls which served as baits. . . Plots were selected on the basis of two criteria: the presence of H. aureus foragers in dead wood, and separation by at least 12 m from any other plot. A 6-by-6-m area was then marked off within the vicinity of infested wood, and toilet-paper rolls were aligned in five rows and five columns and spaced at 1.5-m intervals. The rolls were positioned on the soil surface and each was held in place with a heavy wire stake. All pieces of wood ca. 15 cm long and longer were removed from each plot and ca. 3 m around the periphery to minimize the availability of natural wood as an alternative food source. Before infested wood was removed from the site, termites were allowed to retreat into their galleries in the soil to avoid depleting the numbers of surface foragers. All plots were established within a 1-wk period during late June 1984. Plots were examined once a week during the first 5 wk after establishment, and then at least once monthly thereafter until August 1985."[5]

[4] Heterotermes aureus (Snyder) is the most common subterranean termite species in the Sonoran Desert. Haverty, Nutting, and LaFage (Density of colonies and spatial distribution of foraging territories of the desert subterranean termite, Heterotermes aureus (Snyder), Environmental Entomology, 4:105–109, 1975) estimated the population density of this species in the Santa Rita Experimental Range at 4.31 × 10^6 termites per hectare.

[5] Jones, Trosset, and Nutting. Biotic and abiotic influences on foraging of Heterotermes aureus (Snyder), Environmental Entomology…

An important objective of the above study was

". . . to investigate the relationship between food-source distance (on a scale 6 by 6 m) and foraging behavior. This was accomplished by analyzing the order in which different toilet-paper rolls in the same plot were attacked. . . . Specifically, a statistical methodology was developed to test the null hypothesis that any previously unattacked roll was equally likely to be the next roll attacked (random foraging). Alternative hypotheses supposed that the likelihood that a previously unattacked roll would be the next attacked roll decreased with increasing distance from previously attacked rolls (systematic foraging)."[6]

[Figure 1.1: Order of H. aureus attack in Plot 20. The figure, a 5-by-5 grid of rolls, is not reproduced in this transcription.]

The order in which the toilet-paper rolls in Plot 20 were attacked is displayed in Figure 1.1. The unattacked rolls are denoted by ◦, the initially attacked rolls are denoted by •, and the subsequently attacked rolls are denoted (in order of attack) by 1, 2, 3, 4, and 5. Notice that these numbers do not specify a unique order of attack:

". . . because the plots were not observed continuously, a number of rolls seemed to have been attacked simultaneously. Therefore, it was not always possible to determine the exact order in which they were attacked. Accordingly, all permutations consistent with the observed ties in order were considered. . ."[7]

[7] Ibid.

In a subsequent chapter, we will return to the question of whether or not H. aureus forages randomly and describe the statistical methodology that was developed to answer it. Along the way, we will develop rigorous interpretations of the phrases that appear in the above passages, e.g., "permutations", "equally likely", "null hypothesis", "alternative hypotheses", etc.
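The null hypothesis of random foraging can already be made concrete by simulation: if every previously unattacked roll is equally likely to be attacked next, then a complete order of attack is nothing more than a random permutation of the 25 rolls. The following R sketch, with variable names of our own choosing, generates one such order; the formal methodology is deferred to the later chapters just mentioned.

    # One simulated plot under the null hypothesis of random foraging:
    # each previously unattacked roll is equally likely to be attacked next,
    # so a complete order of attack is a random permutation of the rolls.
    set.seed(1)                                 # for reproducibility
    rolls <- expand.grid(row = 1:5, col = 1:5)  # the 25 toilet-paper rolls in a plot
    attack.order <- sample(nrow(rolls))         # a random permutation of 1, ..., 25
    rolls[attack.order[1:5], ]                  # the first five rolls attacked

Under systematic foraging, by contrast, rolls near previously attacked rolls would tend to be attacked sooner than this benchmark predicts.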
1.2 Randomization

This section illustrates an important principle in the design of experiments. We begin by describing two famous studies that produced embarrassing results because they failed to respect this principle.

The Lanarkshire Milk Experiment  A 1930 experiment in the schools of Lanarkshire attempted to ascertain the effect of milk supplements on Scottish children. For four months, 5000 children received a daily supplement of 3/4 pint of raw milk, 5000 children received a daily supplement of 3/4 pint of pasteurized milk, and 10,000 children received no daily milk supplement. Each child was weighed (while wearing indoor clothing) and measured for height before the study commenced (in February) and after it ended (in June). The final observations of the control group exceeded the final observations of the treatment groups by average amounts equivalent to 3 months growth in weight and 4 months growth in height, thereby suggesting that the milk supplements actually retarded growth! What went wrong?

To explain the results of the Lanarkshire milk experiment, one must examine how the 20,000 children enrolled in the study were assigned to the study groups. An initial division into treatment versus control groups was made arbitrarily, e.g., using the alphabet. However, if the initial division appeared to produce groups with unbalanced numbers of well-fed or ill-nourished children, then teachers were allowed to swap children between the two groups in order to obtain (apparently) better balanced groups. It is thought that well-meaning teachers, concerned about the plight of ill-nourished children and "knowing" that milk supplements would be beneficial, consciously or subconsciously availed themselves of the opportunity to swap ill-nourished children into the treatment group. This resulted in a treatment group that was lighter and shorter than the control group. Furthermore, it is likely that differences in weight gains were confounded with a tendency for well-fed children to come from families that could afford warm (heavier) winter clothing, as opposed to a tendency for ill-nourished children to come from poor families that provided shabbier (lighter) clothing.[8]

[8] For additional details and commentary, see Student (1931), The Lanarkshire milk experiment, Biometrika, 23:398, and Section 5.4 (Justification of Randomization) of Cox (1958), Planning of Experiments, John Wiley & Sons, New York.

The Pre-Election Polls of 1948  The 1948 presidential election pitted Harry Truman, the Democratic incumbent who had succeeded to the presidency when Franklin Roosevelt died in office, against Thomas Dewey, the Republican governor of New York.[9] Each of the three major polling organizations that covered the campaign predicted that Dewey would win: the Crossley poll predicted 50% of the popular vote for Dewey and 45% for Truman, the Gallup poll predicted 50% for Dewey and 44% for Truman, and the Roper poll predicted 53% for Dewey and 38% for Truman. Dewey was considered "as good as elected" until the votes were actually counted: in one of the great upsets in American politics, Truman received slightly less than 50% of the popular vote and Dewey received slightly more than 45%.[10] What went wrong?

[9] As a crusading district attorney in New York City, Dewey was a national hero in the 1930s. In late 1938, two Hollywood films attempted to capitalize on his popularity, RKO's Smashing the Rackets and Warner Brothers' Racket Busters. The special prosecutor in the latter film was played by Walter Abel, who bore a strong physical resemblance to Dewey.

[10] A famous photograph shows an exuberant Truman holding a copy of the Chicago Tribune with the headline Dewey Defeats Truman. On election night, Dewey confidently asked his wife, "How will it be to sleep with the president of the United States?" "A high honor, and quite frankly, darling, I'm looking forward to it," she replied. At breakfast next morning, having learned of Truman's upset victory, Frances playfully teased her husband: "Tell me, Tom, am I going to Washington or is Harry coming here?"

Poll predictions are based on data collected from a sample of prospective voters. For example, Gallup's prediction was based on 50,000 interviews. To assess the quality of Gallup's prediction, one must examine how his sample was selected. In 1948, all three polling organizations used a method called quota sampling that attempts to hand-pick a sample that is representative of the entire population. First, one attempts to identify several important characteristics that may be associated with different voting patterns, e.g., place of residence, sex, age, race, etc. Second, one attempts to obtain a sample that resembles the entire population with respect to those characteristics. For example, a Gallup interviewer in St. Louis was instructed to interview 13 subjects. Exactly 6 were to live in the suburbs, 7 in the city; exactly 7 were to be men, 6 women. Of the 7 men, exactly 3 were to be less than 40 years old and exactly 1 was to be black. Monthly rent categories for the 6 white men were specified. Et cetera, et cetera, et cetera.

Although the quotas used in quota sampling are reasonable, the method does not work especially well. The reason is that quota sampling does not specify how to choose the sample within the quotas—these choices are left to the discretion of the interviewer. Human choice is unpredictable and often subject to bias. In 1948, Republicans were more accessible than Democrats: they were more likely to have permanent addresses, own telephones, etc. Within their carefully prescribed quotas, Gallup interviewers were slightly more likely to find Republicans than Democrats. This unintentional bias toward Republicans had distorted previous polls; in 1948, the election was close enough that the polls picked the wrong candidate.[11]

[11] For additional details and commentary, see Mosteller et al. (1949), The Pre-Election Polls of 1948, Social Science Research Council, New York, and Section 19.3 (The Year the Polls Elected Dewey) of Freedman, Pisani, and Purves (1998), Statistics, Third Edition, W. W. Norton & Company, New York.

In both the Lanarkshire milk experiment and the pre-election polls of 1948, subjective attempts to hand-pick representative samples resulted in embarrassing failures. Let us now exploit our knowledge of what not to do and design a simple experiment. An instructor—let's call him Ishmael—of one section of Math 106 (Elementary Statistics) has prepared two versions of a final exam. Ishmael hopes that the two versions are equivalent, but he recognizes that this will have to be determined experimentally. He therefore decides to divide his class of 40 students into two groups, each of which will receive a different version of the final. How should he proceed?

Ishmael recognizes that he requires two comparable groups if he hopes to draw conclusions about his two exams. For example, suppose that he administers one exam to the students who attained an A average on the midterms and the other exam to the other students. If the average score on exam A is 20 points higher than the average score on exam B, then what can he conclude? It might be that exam A is 20 points easier than exam B. Or it might be that the two exams are equally difficult, but that the A students are 20 points more capable than the B students. Or it might be that exam A is actually 10 points more difficult than exam B, but that the A students are 30 points more capable than the B students. There is no way to decide—exam version and student capability are confounded in this experiment.

The lesson of the Lanarkshire milk experiment and the pre-election polls of 1948 is that it is difficult to hand-pick representative samples. Accordingly, Ishmael decides to randomly assign the exams, relying on chance variation to produce balanced groups. This can be done in various ways, but a common principle prevails: each student is equally likely to receive exam A or B. Here are two possibilities:

1. Ishmael creates 40 identical slips of paper. He writes the name of each student on one slip, mixes the slips in a large jar, then draws 20 slips. (After each draw, the selected slip is set aside and the next draw uses only those slips that remain in the jar, i.e., sampling occurs without replacement.) The 20 students selected receive exam A; the remaining 20 students receive exam B. This is called simple random sampling.

2. Ishmael notices that his class comprises 30 freshmen and 10 non-freshmen. Believing that it is essential to have 3/4 freshmen in each group, he assigns freshmen and non-freshmen separately. Again, Ishmael creates 40 identical slips of paper and writes the name of each student on one slip. This time he separates the 30 freshman slips from the 10 non-freshman slips. To assign the freshmen, he mixes the 30 freshman slips and draws 15 slips. The 15 freshmen selected receive exam A; the remaining 15 freshmen receive exam B. To assign the non-freshmen, he mixes the 10 non-freshman slips and draws 5 slips. The 5 non-freshmen selected receive exam A; the remaining 5 non-freshmen receive exam B. This is called stratified random sampling.
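Both schemes are easy to carry out with a computer instead of slips of paper. The R sketch below is one way to do so; the roster is hypothetical, and sample() plays the role of drawing slips from the jar.

    # A sketch of Ishmael's two randomization schemes (the roster is hypothetical).
    roster <- data.frame(name = paste("student", 1:40),
                         year = rep(c("freshman", "non-freshman"), c(30, 10)))

    # 1. Simple random sampling: draw 20 of the 40 names without replacement.
    examA <- sample(roster$name, 20)
    roster$exam <- ifelse(roster$name %in% examA, "A", "B")
    table(roster$year, roster$exam)    # the freshman/non-freshman split is left to chance

    # 2. Stratified random sampling: assign freshmen and non-freshmen separately.
    freshA    <- sample(roster$name[roster$year == "freshman"], 15)
    nonfreshA <- sample(roster$name[roster$year == "non-freshman"], 5)
    roster$exam2 <- ifelse(roster$name %in% c(freshA, nonfreshA), "A", "B")
    table(roster$year, roster$exam2)   # exactly 15 freshmen and 5 non-freshmen get exam A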
1.3 The Importance of Probability

Each of the experiments described in Sections 1.1 and 1.2 reveals something about the role of chance variation in scientific experimentation.

It is beyond our ability to predict with certainty if a spinning penny will come to rest with Heads facing up. Even if we believe that the outcome is completely determined, we cannot measure all the relevant variables with sufficient precision, nor can we perform the necessary calculations, to know what it will be. We express our inability to predict Heads versus Tails in the language of probability, e.g., "there is a 30% chance that Heads will result." (Section 3.1 discusses how such statements may be interpreted.) Thus, even when studying allegedly deterministic phenomena, probability models may be of enormous value.

When measuring the speed of light, it is not the phenomenon itself but the experiment that admits chance variation. Despite his excruciating precautions, Michelson was unable to remove chance variation from his experiment—his measurements differ. Adjusting the measurements for temperature removes one source of variation, but it is impossible to remove them all. Later experiments with more sophisticated equipment produced better measurements, but did not succeed in completely removing all sources of variation. Experiments are never perfect,[12] and probability models may be of enormous value in modelling errors that the experimenter is unable to remove or control.

[12] Another example is described by Freedman, Pisani, and Purves in Section 6.2 of Statistics (Third Edition, W. W. Norton & Company, 1998). The National Bureau of Standards repeatedly weighs the national prototype kilogram under carefully controlled conditions. The measurements are extremely precise, but nevertheless subject to small variations.

Probability plays another, more subtle role in statistical inference. When studying termites, it is not clear whether or not one is observing a systematic foraging strategy. Probability was introduced as a hypothetical benchmark: what if termites forage randomly? Even if termites actually do forage deterministically, understanding how they would behave if they foraged randomly provides insights that inform our judgments about their behavior. Thus, probability helps us answer questions that naturally arise when analyzing experimental data. Another example arose when we remarked that Matt, James, and Sarah observed nearly 50% Heads, specifically 145 Heads in 300 spins. What do we mean by "nearly"? Is this an important discrepancy or can chance variation account for it? To find out, we might study the behavior of penny spinning under the mathematical assumption that it is fair. If we learn that 300 spins of a fair penny rarely produce a discrepancy of 5 (or more) Heads, then we might conclude that penny spinning is not fair. If we learn that discrepancies of this magnitude are common, then we would be reluctant to draw this conclusion.
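A fair-penny benchmark of exactly this kind is easy to explore by simulation. The R sketch below repeatedly spins an idealized fair penny 300 times and records how often the number of Heads misses 150 by 5 or more; in runs like this one, such discrepancies occur in roughly 60% of the simulated experiments, so 145 Heads in 300 spins is entirely consistent with a fair penny.

    # How often do 300 spins of a fair penny miss 150 Heads by 5 or more?
    set.seed(2)
    heads <- rbinom(10000, size = 300, prob = 0.5)  # 10,000 simulated experiments
    mean(abs(heads - 150) >= 5)                     # proportion with a discrepancy of 5 or more
    2 * pbinom(145, size = 300, prob = 0.5)         # the exact probability, by symmetry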
The ability to use the tools of probability to understand the behavior of inferential procedures is so powerful that good experiments are designed with this in mind. Besides avoiding the pitfalls of subjective methods, randomization allows us to answer questions about how well our methods work. For example, Ishmael might ask "How likely is simple random sampling to result in exactly 5 non-freshmen receiving exam A?" Such questions derive meaning from the use of probability methods.
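This particular question can already be answered, either by a short calculation or by simulation. Under simple random sampling, the 20 recipients of exam A are drawn without replacement from an urn containing 10 non-freshman names and 30 freshman names, and the relevant distribution (the hypergeometric distribution, built into R as dhyper) follows from counting arguments of the sort developed in Chapter 2. A sketch:

    # P(exactly 5 non-freshmen receive exam A) under simple random sampling.
    dhyper(5, m = 10, n = 30, k = 20)   # exact answer, roughly 0.28

    # The same answer, approximated by simulating the draw many times.
    year  <- rep(c("non-freshman", "freshman"), c(10, 30))
    draws <- replicate(10000, sum(sample(year, 20) == "non-freshman"))
    mean(draws == 5)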
When a scientist performs an experiment, s/he observes a sample of possible experimental values. The set of all values that might have been observed is a population. Probability helps us describe the population and understand the data generating process that produced the sample. It also helps us understand the behavior of the statistical procedures used to analyze experimental data, e.g., averaging 100 measurements to produce an estimate. This linkage, of sample to population through probability, is the foundation on which statistical inference is based. Statistical inference is relatively new, but the linkage that we have described is wonderfully encapsulated in a remarkable passage from The Book of Nala, the third book of the ancient Indian epic Mahábarata.[13] Rtuparna examines a single twig of a spreading tree and accurately estimates the number of fruit on two great branches. Nala marvels at this ability, and Rtuparna rejoins:

I of dice possess the science and in numbers thus am skilled.

[13] This passage is summarized in Ian Hacking's The Emergence of Probability, Cambridge University Press, 1975, pp. 6–7, which quotes H. H. Milman's 1860 translation.

1.4 Games of Chance

In The Book of Nala, Rtuparna's skill in estimation is connected with his prowess at dicing. Throughout history, probabilistic concepts have invariably been illustrated using simple games of chance. There are excellent reasons for us to embrace this pedagogical cliché. First, many fundamental probabilistic concepts were invented for the purpose of understanding certain games of chance; it is pleasant to incorporate a bit of this fascinating, centuries-old history into a modern program of study. Second, games of chance serve as idealized experiments that effectively reveal essential issues without the distraction of the many complicated nuances associated with most scientific experiments. Third, as idealized experiments, games of chance provide canonical examples of various recurring experimental structures. For example, tossing a coin is a useful abstraction of such diverse experiments as observing whether a baby is male or female, observing whether an Alzheimer's patient does or does not know the day of the week, or observing whether a pond is or is not inhabited by geese. A scientist who is familiar with these idealized experiments will find it easier to diagnose the mathematical structure of an actual scientific experiment.

Many of the examples and exercises in subsequent chapters will refer to simple games of chance. The present section collects some facts and trivia about several of the most common.

Coins  According to the Encyclopædia Britannica,

"Early cast-bronze animal shapes of known and readily identifiable weight, provided for the beam-balance scales of the Middle Eastern civilizations of the 7th millennium BC, are evidence of the first attempts to provide a medium of exchange. . . . The first true coins, that is, cast disks of standard weight and value specifically designed as a medium of exchange, were probably produced by the Lydians of Anatolia in about 640 BC from a natural alloy of gold containing 20 to 35 percent silver."[14]

[14] "Coins and coinage," The New Encyclopædia Britannica in 30 Volumes, Macropædia, Volume 4, 1974, pp. 821–822.

Despite (or perhaps because of) the simplicity of tossing a coin and observing which side (canonically identified as Heads or Tails) comes to lie facing up, it appears that coins did not play an important role in the early history of probability. Nevertheless, the use of coin tosses (or their equivalents) as randomizing agents is ubiquitous in modern times. In football, an official tosses a coin and a representative of one team calls Heads or Tails. If his call matches the outcome of the toss, then his team may choose whether to kick or receive (or, which goal to defend); otherwise, the opposing team chooses. A similar practice is popular in tennis, except that one player spins a racquet instead of tossing a coin. In each of these practices, it is presumed that the "coin" is balanced or fair, i.e., that each side is equally likely to turn up; see Section 1.1.1 for a discussion of whether or not spinning a penny is fair.

Dice  The noun dice is the plural form of the noun die.[15] A die is a small cube, marked on each of its six faces with a number of pips (spots, dots). To generate a random outcome, the die is cast (tossed, thrown, rolled) on a smooth surface and the number of pips on the uppermost face is observed. If each face is equally likely to be uppermost, then the die is balanced or fair; otherwise, it is unbalanced or loaded.

[15] In The Devil's Dictionary, Ambrose Bierce defined die as the singular of dice, remarking that "we seldom hear the word, because there is a prohibitory proverb, 'Never say die.' "

The casting of dice is an ancient practice. According to F. N. David,

"The earliest dice so far found are described as being of well-fired buff pottery and date from the beginning of the third millenium. . . . consecutive order of the pips must have continued for some time. It is still to be seen in dice of the late XVIIIth Dynasty (Egypt c. 1370 B.C.), but about that time, or soon after, the arrangement must have settled into the 2-partitions of 7 familiar to us at the present time. Out of some fifty dice of the classical period which I have seen, forty had the 'modern' arrangement of the pips."[16]

[16] F. N. David, Games, Gods and Gambling: A History of Probability and Statistical Ideas, 1962, p. 10 (Dover Publications).

Today, pure dice games include craps, in which two dice are cast, and Yahtzee, in which five dice are cast. More commonly, the casting of dice is used as a randomizing agent in a variety of board games, e.g., backgammon and Monopoly™. Typically, two dice are cast and the outcome is defined to be the sum of the pips on the two uppermost faces.
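As a first illustration of how such randomizing agents can be imitated on a computer, the following R sketch casts two balanced dice many times and tabulates the sums; the relative frequencies it reports are simulation estimates, not exact chances.

    # Casting two balanced dice and recording the sum of the uppermost faces.
    set.seed(3)
    rolls <- replicate(10000, sum(sample(1:6, 2, replace = TRUE)))
    table(rolls) / 10000   # estimated chances of the sums 2 through 12
    # The sum 7 should appear most often; its exact chance is 6/36 = 1/6.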
Astragali  Even more ancient than dice are astragali, the singular form of which is astragalus. The astragalus is a bone in the heel of many vertebrate animals; it lies directly above the talus, and is roughly symmetrical in hooved mammals, e.g., deer. Such astragali have been found in abundance in excavations of prehistoric man, who may have used them for counting. They were used for board games at least as early as the First Dynasty in Egypt (c. 3500 B.C.) and were the principal randomizing agent in classical Greece and Rome. According to F. N. David,

"The astragalus has only four sides on which it will rest, since the other two are rounded. . . A favourite research of the scholars of the Italian Renaissance was to try to deduce the scoring used. It was generally agreed from a close study of the writings of classical times that the upper side of the bone, broad and slightly convex, counted 4; the opposite side, broad and slightly concave, 3; the lateral side, flat and narrow, scored 1, and the opposite narrow lateral side, which is slightly hollow, 6. The numbers 2 and 5 were omitted."[17]

[17] F. N. David, Games, Gods and Gambling: A History of Probability and Statistical Ideas, 1962, p. 7 (Dover Publications).

Accordingly, we can think of an astragalus as a 4-sided die with possible outcomes 1, 3, 4, and 6. An astragalus is not balanced. From tossing a modern sheep's astragalus, David estimated the chances of throwing a 1 or a 6 at roughly 10 percent each and the chances of throwing a 3 or a 4 at roughly 40 percent each.

The Greeks and Romans invariably cast four astragali. The most desirable result, the venus, occurred when the four uppermost sides were all different; the dog, which occurred when each uppermost side was a 1, was undesirable. In Asia Minor, five astragali were cast and different results were identified with the names of different gods, e.g., the throw of Saviour Zeus (one one, two threes, and two fours), the throw of child-eating Cronos (three fours and two sixes), etc. In addition to their use in gaming, astragali were cast for the purpose of divination, i.e., to ascertain if the gods favored a proposed undertaking. In 1962, David reported that "it is not uncommon to see children in France and Italy playing games with them [astragali] today;" for the most part, however, unbalanced astragali have given way to balanced dice. A whimsical contemporary example of unbalanced dice that evoke astragali are the pig dice used in Pass the Pig™ (formerly Pigmania™).
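Because an astragalus is unbalanced, simulating it requires unequal chances for its four sides. The R sketch below uses David's rough percentages to estimate the chances of a venus and of the dog; with those percentages, the exact chance of a venus is 24 × 0.1 × 0.4 × 0.4 × 0.1, or about 0.038.

    # A cast of n astragali, using David's rough estimates of the chances:
    # 1 and 6 about 10 percent each, 3 and 4 about 40 percent each.
    cast <- function(n) sample(c(1, 3, 4, 6), n, replace = TRUE,
                               prob = c(0.1, 0.4, 0.4, 0.1))
    set.seed(4)
    throws <- replicate(10000, cast(4))       # 10,000 throws of four astragali
    venus  <- apply(throws, 2, function(x) length(unique(x)) == 4)  # all four sides differ
    dog    <- apply(throws, 2, function(x) all(x == 1))             # four 1s
    mean(venus)   # estimated chance of a venus
    mean(dog)     # estimated chance of the dog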
Cards  David estimated that playing cards "were not invented until c. A.D. 1350, but once in use, they slowly began to displace dice both as instruments of play and for fortune-telling." By a standard deck of playing cards, we shall mean the familiar deck of 52 cards, organized into four suits (clubs, diamonds, hearts, spades) of thirteen ranks or denominations (2–10, jack, queen, king, ace). The diamonds and hearts are red; the clubs and spades are black. When we say that a deck has been shuffled, we mean that the order of the cards in the deck has been randomized. When we say that cards are dealt, we mean that they are removed from a shuffled deck in sequence, beginning with the top card. The cards received by a player constitute that player's hand. The quality of a hand depends on the game being played; however, unless otherwise specified, the order in which the player received the cards in her hand is irrelevant.

Poker involves hands of five cards. The following types of hands are arranged in order of decreasing value. An ace is counted as either the highest or the lowest rank, whichever results in the more valuable hand. Thus, every possible hand is of exactly one type.

1. A straight flush contains five cards of the same suit and of consecutive ranks.

2. A hand with 4 of a kind contains cards of exactly two ranks, four cards of one rank and one of the other rank.

3. A full house contains cards of exactly two ranks, three cards of one rank and two cards of the other rank.

4. A flush contains five cards of the same suit, not of consecutive rank.

5. A straight contains five cards of consecutive rank, not all of the same suit.

6. A hand with 3 of a kind contains cards of exactly three ranks, three cards of one rank and one card of each of the other two ranks.

7. A hand with two pairs contains cards of exactly three ranks, two cards of one rank, two cards of a second rank, and one card of a third rank.

8. A hand with one pair contains cards of exactly four ranks, two cards of one rank and one card each of a second, third, and fourth rank.

9. Any other hand contains no pair.

Urns  For the purposes of this book, an urn is a container from which objects are drawn, e.g., a box of raffle tickets or a jar of marbles. Modern lotteries often select winning numbers by using air pressure to draw numbered ping pong balls from a clear plastic container. When an object is drawn from an urn, it is presumed that each object in the urn is equally likely to be selected.

That urn models have enormous explanatory power was first recognized by J. Bernoulli (1654–1705), who used them in Ars Conjectandi, his brilliant treatise on probability. It is not difficult to devise urn models that are equivalent to other randomizing agents considered in this section.

Example 1.1: Urn Model for Tossing a Fair Coin  Imagine an urn that contains one red marble and one black marble. A marble is drawn from this urn. If it is red, then the outcome is Heads; if it is black, then the outcome is Tails. This is equivalent to tossing a fair coin once.

Example 1.2: Urn Model for Throwing a Fair Die  Imagine an urn that contains six tickets, labelled 1 through 6. Drawing one ticket from this urn is equivalent to throwing a fair die once. If we want to throw the die a second time, then we return the selected ticket to the urn and repeat the procedure. This is an example of drawing with replacement.

Example 1.3: Urn Model for Throwing an Astragalus  Imagine an urn that contains ten tickets, one labelled 1, four labelled 3, four labelled 4, and one labelled 6. Drawing one ticket from this urn is equivalent to throwing an astragalus once. If we want to throw four astragali, then we repeat this procedure four times, each time returning the selected ticket to the urn. This is another example of drawing with replacement.

Example 1.4: Urn Model for Drawing a Poker Hand  Place a standard deck of playing cards in an urn. Draw one card, then a second, then a third, then a fourth, then a fifth. Because each card in the deck can only be dealt once, we do not return a card to the urn after drawing it. This is an example of drawing without replacement.

In the preceding examples, the statements about the equivalence of the urn model and another randomizing agent were intended to appeal to your intuition. Subsequent chapters will introduce mathematical tools that will allow us to validate these assertions.
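Each of these urn models can be imitated directly in R, whose sample function draws from a finite collection either with or without replacement. The sketch below mirrors Examples 1.1 through 1.4; the card labels are our own.

    # Urn models in R: sample() draws from an urn.
    sample(c("red", "black"), 1)                 # Example 1.1: one toss of a fair coin
    sample(1:6, 2, replace = TRUE)               # Example 1.2: two throws of a fair die
    sample(c(1, 3, 3, 3, 3, 4, 4, 4, 4, 6), 1)   # Example 1.3: one throw of an astragalus

    # Example 1.4: dealing a poker hand is drawing 5 cards without replacement.
    deck <- paste(rep(c(2:10, "jack", "queen", "king", "ace"), times = 4),
                  rep(c("clubs", "diamonds", "hearts", "spades"), each = 13))
    sample(deck, 5)                              # a five-card hand from a shuffled deck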
1.5 Exercises

1. Select a penny minted in any year other than 1982. Find a smooth surface on which to spin it. Practice spinning the penny until you are able to do so in a reasonably consistent manner. Develop an experimental protocol that specifies precisely how you spin your penny. Spin your penny 100 times in accordance with this protocol. Record the outcome of each spin, including aberrant events (e.g., the penny spun off the table and therefore neither Heads nor Tails was recorded). Your report of this experiment should include the following:

• The penny itself, taped to your report. Note any features of the penny that seem relevant, e.g., the year and city in which it was minted, its condition, etc.

• A description of the surface on which you spun it and of any possibly relevant environmental considerations.

• A description of your experimental protocol.

• The results of your 100 spins. This means a list, in order, of what happened on each spin.

• A summary of your results. This means (i) the total number of spins that resulted in either Heads or Tails (ideally, this number, n, will equal 100) and (ii) the number of spins that resulted in Heads (y).

• The observed frequency of heads, y/n.
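If the outcomes are recorded on a computer, R can produce the required summary. In the sketch below the vector of outcomes is simulated as a stand-in; your own recorded outcomes, entered as "H", "T", or a label for an aberrant spin, would take its place.

    # Summarizing a record of 100 spins (simulated stand-in data).
    spins <- sample(c("H", "T"), 100, replace = TRUE, prob = c(0.3, 0.7))
    table(spins)                       # counts of Heads and Tails
    n <- sum(spins %in% c("H", "T"))   # number of usable spins
    y <- sum(spins == "H")             # number of Heads
    y / n                              # observed frequency of Heads, y/n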
2. The Department of Mathematics at the College of William & Mary is housed in Jones Hall. To find the department, one passes through the building's main entrance, into its lobby, and immediately turns left. In Jones 131, the department's seminar room, is a long rectangular wood table. Let L denote the length of this table. The purpose of this experiment is to measure L using a standard (12-inch) ruler.

You will need a 12-inch ruler that is marked in increments of 1/16 inches. Groups of students may use the same ruler, but it is important that each student obtain his/her own measurement of L. Please do not attempt to obtain your measurement at a time when Jones 131 is being used for a seminar or faculty meeting!

Your report of this experiment should include the following information:

• A description of the ruler that you used. From what was it made? In what condition is it? Who owns it? What other students used the same ruler?

• A description of your measuring protocol. How did you position the ruler initially? How did you reposition it? How did you ensure that you were measuring along a straight line?

• An account of the experiment. When did you measure? How long did it take you? Please note any unusual circumstances that might bear on your results.

• Your estimate (in inches, to the nearest 1/16 inch) of L.

3. Statisticians say that a procedure that tends to either underestimate or overestimate the quantity that it is being used to determine is biased.

(a) In the preceding problem, suppose that you tried to measure the length of the table with a ruler that—unbeknownst to you—was really 11.9 inches long instead of the nominal 12 inches. Would you tend to underestimate or overestimate the true length of the table? Explain.

(b) In the Lanarkshire milk experiment, would a tendency for well-fed children to wear heavier winter clothing than ill-nourished children cause weight gains due to milk supplements to be underestimated or overestimated? Explain.
  • 29. Chapter 2 Mathematical Preliminaries This chapter collects some fundamental mathematical concepts that we will use in our study of probability and statistics. Most of these concepts should seem familiar, although our presentation of them may be a bit more formal than you have previously encountered. This formalism will be quite useful as we study probability, but it will tend to recede into the background as we progress to the study of statistics. 2.1 Sets It is an interesting bit of trivia that “set” has the most different meanings of any word in the English language. To describe what we mean by a set, we suppose the existence of a designated universe of possible objects. In this book, we will often denote the universe by S. By a set, we mean a collection of objects with the property that each object in the universe either does or does not belong to the collection. We will tend to denote sets by uppercase Roman letters toward the beginning of the alphabet, e.g., A, B, C, etc. The set of objects that do not belong to a designated set A is called the complement of A. We will denote complements by Ac, Bc, Cc, etc. The complement of the universe is the empty set, denoted Sc = ∅. An object that belongs to a designated set is called an element or member of that set. We will tend to denote elements by lower case Roman letters and write expressions such as x ∈ A, pronounced “x is an element of the set A.” Sets with a small number of elements are often identified by simple enumeration, i.e., by writing down a list of elements. When we do so, we will enclose the list in braces and separate the elements by commas or semicolons. 27
  • 30. 28 CHAPTER 2. MATHEMATICAL PRELIMINARIES For example, the set of all feature films directed by Sergio Leone is { A Fistful of Dollars; For a Few Dollars More; The Good, the Bad, and the Ugly; Once Upon a Time in the West; Duck, You Sucker!; Once Upon a Time in America } In this book, of course, we usually will be concerned with sets defined by certain mathematical properties. Some familiar sets to which we will refer repeatedly include: • The set of natural numbers, N = {1, 2, 3, . . .}. • The set of integers, Z = {. . . , −3, −2, −1, 0, 1, 2, 3, . . .}. • The set of real numbers, ℜ = (−∞, ∞). If A and B are sets and each element of A is also an element of B, then we say that A is a subset of B and write A ⊂ B. For example, N ⊂ Z ⊂ ℜ. Quite often, a set A is defined to be those elements of another set B that satisfy a specified mathematical property. In such cases, we often specify A by writing a generic element of B to the left of a colon, the property to the right of the colon, and enclosing this syntax in braces. For example, A = {x ∈ Z : x2 < 5} = {−2, −1, 0, 1, 2}, is pronounced “A is the set of integers x such that x2 is less than 5.” Given sets A and B, there are several important sets that can be con- structed from them. The union of A and B is the set A ∪ B = {x ∈ S : x ∈ A or x ∈ B} and the intersection of A and B is the set A ∩ B = {x ∈ S : x ∈ A and x ∈ B}. For example, if A is as above and B = {x ∈ Z : |x − 2| ≤ 1} = {1, 2, 3},
  • 31. 2.1. SETS 29 then A ∪ B = {−2, −1, 0, 1, 2, 3} and A ∩ B = {1, 2}. Notice that unions and intersections are symmetric constructions, i.e., A ∪ B = B ∪ A and A ∩ B = B ∩ A. If A ∩ B = ∅, i.e., if A and B have no elements in common, then A and B are disjoint or mutually exclusive. By convention, the empty set is a subset of every set, so ∅ ⊂ A ∩ B ⊂ A ⊂ A ∪ B ⊂ S and ∅ ⊂ A ∩ B ⊂ B ⊂ A ∪ B ⊂ S. These facts are illustrated by the Venn diagram in Figure 2.1, in which sets are qualitatively indicated by connected subsets of the plane. We will make frequent use of Venn diagrams as we develop basic facts about probabilities.
[Figure 2.1: A Venn diagram. The shaded region represents the intersection of the nondisjoint sets A and B.]
It is often useful to extend the concepts of union and intersection to more than two sets. Let {Ak} denote an arbitrary collection of sets, where k is an index that identifies the set. Then x ∈ S is an element of the union of {Ak},
  • 32. 30 CHAPTER 2. MATHEMATICAL PRELIMINARIES denoted ∪k Ak, if and only if there exists some k0 such that x ∈ Ak0. Also, x ∈ S is an element of the intersection of {Ak}, denoted ∩k Ak, if and only if x ∈ Ak for every k. For example, if Ak = {0, 1, . . . , k} for k = 1, 2, 3, . . ., then ∪k Ak = {0, 1, 2, 3, . . .} and ∩k Ak = {0, 1}. Furthermore, it will be important to distinguish collections of sets with the following property: Definition 2.1 A collection of sets is pairwise disjoint if and only if each pair of sets in the collection has an empty intersection. Unions and intersections are related to each other by two distributive laws: B ∩ (∪k Ak) = ∪k (B ∩ Ak) and B ∪ (∩k Ak) = ∩k (B ∪ Ak). Furthermore, unions and intersections are related to complements by DeMorgan's laws: (∪k Ak)c = ∩k (Ak)c and (∩k Ak)c = ∪k (Ak)c. The first law states that an object is not in any of the sets in the collection if and only if it is in the complement of each set; the second law states that
  • 33. 2.2. COUNTING 31 an object is not in every set in the collection if it is in the complement of at least one set. Finally, we consider another important set that can be constructed from A and B. Definition 2.2 The Cartesian product of two sets A and B, denoted A×B, is the set of ordered pairs whose first component is an element of A and whose second component is an element of B, i.e., A × B = {(a, b) : a ∈ A, b ∈ B}. For example, if A = {−2, −1, 0, 1, 2} and B = {1, 2, 3}, then the set A × B contains the following elements: (−2, 1) (−1, 1) (0, 1) (1, 1) (2, 1) (−2, 2) (−1, 2) (0, 2) (1, 2) (2, 2) (−2, 3) (−1, 3) (0, 3) (1, 3) (2, 3) A familiar example of a Cartesian product is the Cartesian coordinatization of the plane, ℜ2 = ℜ × ℜ = {(x, y) : x, y ∈ ℜ}. Of course, this construction can also be extended to more than two sets, e.g., ℜ3 = {(x, y, z) : x, y, z ∈ ℜ}. 2.2 Counting This section is concerned with determining the number of elements in a specified set. One of the fundamental concepts that we will exploit in our brief study of counting is the notion of a one-to-one correspondence between two sets. We begin by illustrating this notion with an elementary example. Example 2.1 Define two sets, A1 = {diamond, emerald, ruby, sapphire} and B = {blue, green, red, white} . The elements of these sets can be paired in such a way that to each element of A1 there is assigned a unique element of B and to each element of B there
  • 34. 32 CHAPTER 2. MATHEMATICAL PRELIMINARIES is assigned a unique element of A1. Such a pairing can be accomplished in various ways; a natural assignment is the following: diamond ↔ white emerald ↔ green ruby ↔ red sapphire ↔ blue This assignment exemplifies a one-to-one correspondence. Now suppose that we augment A1 by forming A2 = A1 ∪ {aquamarine} . Although we can still assign a color to each gemstone, we cannot do so in such a way that each gemstone corresponds to a different color. There does not exist a one-to-one correspondence between A2 and B. From Example 2.1, we abstract Definition 2.3 Two sets can be placed in one-to-one correspondence if their elements can be paired in such a way that each element of either set is asso- ciated with a unique element of the other set. The concept of one-to-one correspondence can then be exploited to obtain a formal definition of a familiar concept: Definition 2.4 A set A is finite if there exists a natural number N such that the elements of A can be placed in one-to-one correspondence with the elements of {1, 2, . . . , N}. If A is finite, then the natural number N that appears in Definition 2.4 is unique. It is, in fact, the number of elements in A. We will denote this quantity, sometimes called the cardinality of A, by #(A). In Example 2.1 above, #(A1) = #(B) = 4 and #(A2) = 5. The Multiplication Principle Most of our counting arguments will rely on a fundamental principle, which we illustrate with an example. Example 2.2 Suppose that each gemstone in Example 2.1 has been mounted on a ring. You desire to wear one of these rings on your left hand and another on your right hand. How many ways can this be done?
  • 35. 2.2. COUNTING 33 First, suppose that you wear the diamond ring on your left hand. Then there are three rings available for your right hand: emerald, ruby, sapphire. Next, suppose that you wear the emerald ring on your left hand. Again there are three rings available for your right hand: diamond, ruby, sapphire. Suppose that you wear the ruby ring on your left hand. Once again there are three rings available for your right hand: diamond, emerald, sapphire. Finally, suppose that you wear the sapphire ring on your left hand. Once more there are three rings available for your right hand: diamond, emerald, ruby. We have counted a total of 3 + 3 + 3 + 3 = 12 ways to choose a ring for each hand. Enumerating each possibility is rather tedious, but it reveals a useful shortcut. There are 4 ways to choose a ring for the left hand and, for each such choice, there are three ways to choose a ring for the right hand. Hence, there are 4 · 3 = 12 ways to choose a ring for each hand. This is an instance of a general principle: Suppose that two decisions are to be made and that there are n1 possible outcomes of the first decision. If, for each outcome of the first decision, there are n2 possible outcomes of the second decision, then there are n1n2 possible outcomes of the pair of decisions. Permutations and Combinations We now consider two more concepts that are often employed when counting the elements of finite sets. We mo- tivate these concepts with an example. Example 2.3 A fast-food restaurant offers a single entree that comes with a choice of 3 side dishes from a total of 15. To address the perception that it serves only one dinner, the restaurant conceives an advertisement that identifies each choice of side dishes as a distinct dinner. Assuming that each entree must be accompanied by 3 distinct side dishes, e.g., {stuffing, mashed potatoes, green beans} is permitted but {stuffing, stuffing, mashed potatoes} is not, how many distinct dinners are available?1 Answer 2.3a The restaurant reasons that a customer, asked to choose 3 side dishes, must first choose 1 side dish from a total of 15. There are 1 This example is based on an actual incident involving the Boston Chicken (now Boston Market) restaurant chain and a high school math class in Denver, CO.
  • 36. 34 CHAPTER 2. MATHEMATICAL PRELIMINARIES 15 ways of making this choice. Having made it, the customer must then choose a second side dish that is different from the first. For each choice of the first side dish, there are 14 ways of choosing the second; hence 15 × 14 ways of choosing the pair. Finally, the customer must choose a third side dish that is different from the first two. For each choice of the first two, there are 13 ways of choosing the third; hence 15 × 14 × 13 ways of choosing the triple. Accordingly, the restaurant advertises that it offers a total of 15 × 14 × 13 = 2730 possible dinners. Answer 2.3b A high school math class considers the restaurant’s claim and notes that the restaurant has counted side dishes of { stuffing, mashed potatoes, green beans }, { stuffing, green beans, mashed potatoes }, { mashed potatoes, stuffing, green beans }, { mashed potatoes, green beans, stuffing }, { green beans, stuffing, mashed potatoes }, and { green beans, mashed potatoes, stuffing } as distinct dinners. Thus, the restaurant has counted dinners that differ only with respect to the order in which the side dishes were chosen as distinct. Reasoning that what matters is what is on one’s plate, not the order in which the choices were made, the math class concludes that the restaurant has overcounted. As illustrated above, each triple of side dishes can be ordered in 6 ways: the first side dish can be any of 3, the second side dish can be any of the remaining 2, and the third side dish must be the remaining 1 (3 × 2 × 1 = 6). The math class writes a letter to the restaurant, arguing that the restaurant has overcounted by a factor of 6 and that the correct count is 2730÷6 = 455. The restaurant cheerfully agrees and donates $1000 to the high school’s math club. From Example 2.3 we abstract the following definitions: Definition 2.5 The number of permutations (ordered choices) of r objects from n objects is P(n, r) = n × (n − 1) × · · · × (n − r + 1). Definition 2.6 The number of combinations (unordered choices) of r ob- jects from n objects is C(n, r) = P(n, r) ÷ P(r, r).
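(For readers who like to check such counts by machine, here is a minimal sketch, in Python and not part of the original text, that computes P(n, r) and C(n, r) directly from Definitions 2.5 and 2.6; the printed values reproduce the two counts from Example 2.3. The function names are ours.)

    from math import prod

    def P_perm(n, r):
        # Definition 2.5: P(n, r) = n * (n - 1) * ... * (n - r + 1), ordered choices.
        return prod(range(n - r + 1, n + 1))

    def C_comb(n, r):
        # Definition 2.6: C(n, r) = P(n, r) / P(r, r), unordered choices.
        return P_perm(n, r) // P_perm(r, r)

    print(P_perm(15, 3))   # 2730, the restaurant's count of ordered choices
    print(C_comb(15, 3))   # 455, the math class's count of unordered choices
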
  • 37. 2.2. COUNTING 35 In Example 2.3, the restaurant claimed that it offered P(15, 3) dinners, while the math class argued that a more plausible count was C(15, 3). There, as always, the distinction was made on the basis of whether the order of the choices is or is not relevant. Permutations and combinations are often expressed using factorial notation. Let 0! = 1 and let k be a natural number. Then the expression k!, pronounced "k factorial", is defined recursively by the formula k! = k × (k − 1)!. For example, 3! = 3 × 2! = 3 × 2 × 1! = 3 × 2 × 1 × 0! = 3 × 2 × 1 × 1 = 3 × 2 × 1 = 6. Because n! = n × (n − 1) × · · · × (n − r + 1) × (n − r) × · · · × 1 = P(n, r) × (n − r)!, we can write P(n, r) = n!/(n − r)! and C(n, r) = P(n, r) ÷ P(r, r) = [n!/(n − r)!] ÷ [r!/(r − r)!] = n!/[r!(n − r)!]. Finally, we note (and will sometimes use) the popular binomial coefficient notation (n r), pronounced "n choose r", for C(n, r). Example 2.4 A coin is tossed 10 times. How many sequences of 10 tosses result in a total of exactly 2 Heads?
  • 38. 36 CHAPTER 2. MATHEMATICAL PRELIMINARIES Answer A sequence of Heads and Tails is completely specified by knowing which tosses resulted in Heads. To count how many sequences result in 2 Heads, we simply count how many ways there are to choose the pair of tosses on which Heads result. This is choosing 2 tosses from 10, or C(10, 2) = 10!/[2!(10 − 2)!] = (10 · 9)/(2 · 1) = 45. Example 2.5 Consider the hypothetical example described in Section 1.2. In a class of 40 students, how many ways can one choose 20 students to receive exam A? Assuming that the class comprises 30 freshmen and 10 non-freshmen, how many ways can one choose 15 freshmen and 5 non-freshmen to receive exam A? Solution There are C(40, 20) = 40!/[20!(40 − 20)!] = (40 · 39 · · · 22 · 21)/(20 · 19 · · · 2 · 1) = 137,846,528,820 ways to choose 20 students from 40. There are C(30, 15) = 30!/[15!(30 − 15)!] = (30 · 29 · · · 17 · 16)/(15 · 14 · · · 2 · 1) = 155,117,520 ways to choose 15 freshmen from 30 and C(10, 5) = 10!/[5!(10 − 5)!] = (10 · 9 · 8 · 7 · 6)/(5 · 4 · 3 · 2 · 1) = 252 ways to choose 5 non-freshmen from 10; hence, 155,117,520 · 252 = 39,089,615,040 ways to choose 15 freshmen and 5 non-freshmen to receive exam A. Notice that, of all the ways to choose 20 students to receive exam A, about 28% result in exactly 15 freshmen and 5 non-freshmen. Countability Thus far, our study of counting has been concerned exclusively with finite sets. However, our subsequent study of probability will require us to consider sets that are not finite. Toward that end, we introduce the following definitions:
  • 39. 2.2. COUNTING 37 Definition 2.7 A set is infinite if it is not finite. Definition 2.8 A set is denumerable if its elements can be placed in one-to-one correspondence with the natural numbers. Definition 2.9 A set is countable if it is either finite or denumerable. Definition 2.10 A set is uncountable if it is not countable. Like Definition 2.4, Definition 2.8 depends on the notion of a one-to-one correspondence between sets. However, whereas this notion is completely straightforward when at least one of the sets is finite, it can be rather elusive when both sets are infinite. Accordingly, we provide some examples of denumerable sets. In each case, we superscript each element of the set in question with the corresponding natural number. Example 2.6 Consider the set of even natural numbers, which excludes one of every two consecutive natural numbers. It might seem that this set cannot be placed in one-to-one correspondence with the natural numbers in their entirety; however, infinite sets often possess counterintuitive properties. Here is a correspondence that demonstrates that this set is denumerable: 2¹, 4², 6³, 8⁴, 10⁵, 12⁶, 14⁷, 16⁸, 18⁹, . . . Example 2.7 Consider the set of integers. It might seem that this set, which includes both a positive and a negative copy of each natural number, cannot be placed in one-to-one correspondence with the natural numbers; however, here is a correspondence that demonstrates that this set is denumerable: . . . , −4⁹, −3⁷, −2⁵, −1³, 0¹, 1², 2⁴, 3⁶, 4⁸, . . . Example 2.8 Consider the Cartesian product of the set of natural numbers with itself. This set contains one copy of the entire set of natural numbers for each natural number—surely it cannot be placed in one-to-one correspondence with a single copy of the set of natural numbers! In fact, the
  • 40. 38 CHAPTER 2. MATHEMATICAL PRELIMINARIES following correspondence demonstrates that this set is also denumerable:
(1, 1)¹  (1, 2)²  (1, 3)⁶  (1, 4)⁷  (1, 5)¹⁵  . . .
(2, 1)³  (2, 2)⁵  (2, 3)⁸  (2, 4)¹⁴  (2, 5)¹⁷  . . .
(3, 1)⁴  (3, 2)⁹  (3, 3)¹³  (3, 4)¹⁸  (3, 5)²⁶  . . .
(4, 1)¹⁰  (4, 2)¹²  (4, 3)¹⁹  (4, 4)²⁵  (4, 5)³²  . . .
(5, 1)¹¹  (5, 2)²⁰  (5, 3)²⁴  (5, 4)³³  (5, 5)⁴¹  . . .
In light of Examples 2.6–2.8, the reader may wonder what is required to construct a set that is not countable. We conclude this section by remarking that the following intervals are uncountable sets, where a, b ∈ ℜ and a < b. (a, b) = {x ∈ ℜ : a < x < b}, [a, b) = {x ∈ ℜ : a ≤ x < b}, (a, b] = {x ∈ ℜ : a < x ≤ b}, [a, b] = {x ∈ ℜ : a ≤ x ≤ b}. We will make frequent use of such sets, often referring to (a, b) as an open interval and [a, b] as a closed interval. 2.3 Functions A function is a rule that assigns a unique element of a set B to each element of another set A. A familiar example is the rule that assigns to each real number x the real number y = x², e.g., that assigns y = 4 to x = 2. Notice that each real number has a unique square (y = 4 is the only number that this rule assigns to x = 2), but that more than one number may have the same square (y = 4 is also assigned to x = −2). The set A is the function's domain. Notice that each element of A must be assigned some element of B, but that an element of B need not be assigned to any element of A. Thus, in the preceding example, every x ∈ A = ℜ has a squared value y ∈ B = ℜ, but not every y ∈ B is the square of some number x ∈ A. (For example, y = −4 is not the square of any real number.) The elements of B that are assigned to elements of A constitute the image of the function. In the preceding example, the image of f(x) = x² is f(ℜ) = [0, ∞). We will use a variety of letters to denote various types of functions. Examples include P, X, Y, f, g, F, G, φ. If φ is a function with domain A and
  • 41. 2.4. LIMITS 39 range B, then we write φ : A → B, often pronounced "φ maps A into B". If φ assigns b ∈ B to a ∈ A, then we say that b is the value of φ at a and we write b = φ(a). If φ : A → B, then for each b ∈ B there is a subset (possibly empty) of A comprising those elements of A at which φ has value b. We denote this set by φ⁻¹(b) = {a ∈ A : φ(a) = b}. For example, if φ : ℜ → ℜ is the function defined by φ(x) = x², then φ⁻¹(4) = {−2, 2}. More generally, if B0 ⊂ B, then φ⁻¹(B0) = {a ∈ A : φ(a) ∈ B0}. Using the same example, φ⁻¹([4, 9]) = {x ∈ ℜ : x² ∈ [4, 9]} = [−3, −2] ∪ [2, 3]. The object φ⁻¹ is called the inverse of φ and φ⁻¹(B0) is called the inverse image of B0. 2.4 Limits In Section 2.2 we examined several examples of denumerable sets of real numbers. In each of these examples, we imposed an order on the set when we placed it in one-to-one correspondence with the natural numbers. Once an order has been specified, we can inquire how the set behaves as we progress through its values in the prescribed sequence. For example, the real numbers in the ordered denumerable set {1, 1/2, 1/3, 1/4, 1/5, . . .} (2.1) steadily decrease as one progresses through them. Furthermore, as in Zeno's famous paradoxes, the numbers seem to approach the value zero without ever actually attaining it. To describe such sets, it is helpful to introduce some specialized terminology and notation. We begin with
  • 42. 40 CHAPTER 2. MATHEMATICAL PRELIMINARIES Definition 2.11 A sequence of real numbers is an ordered denumerable sub- set of ℜ. Sequences are often denoted using a dummy variable that is specified or understood to index the natural numbers. For example, we might identify the sequence (2.1) by writing {1/n} for n = 1, 2, 3, . . .. Next we consider the phenomenon that 1/n approaches 0 as n increases, although each 1/n > 0. Let ǫ denote any strictly positive real number. What we have noticed is the fact that, no matter how small ǫ may be, eventually n becomes so large that 1/n < ǫ. We formalize this observation in Definition 2.12 Let {yn} denote a sequence of real numbers. We say that {yn} converges to a constant value c ∈ ℜ if, for every ǫ > 0, there exists a natural number N such that yn ∈ (c − ǫ, c + ǫ) for each n ≥ N. If the sequence of real numbers {yn} converges to c, then we say that c is the limit of {yn} and we write either yn → c as n → ∞ or limn→∞ yn = c. In particular, lim n→∞ 1 n = 0. 2.5 Exercises 1. A classic riddle inquires: As I was going to St. Ives, I met a man with seven wives. Each wife had seven sacks, Each sack had seven cats, Each cat had seven kits. Kits, cats, sacks, wives— How many were going to St. Ives? (a) How many creatures (human and feline) were in the entourage that the narrator encountered? (b) What is the answer to the riddle? 2. A well-known carol, “The Twelve Days of Christmas,” describes a progression of gifts that the singer receives from her true love:
  • 43. 2.5. EXERCISES 41 On the first day of Christmas, my true love gave to me: A partridge in a pear tree. On the second day of Christmas, my true love gave to me: Two turtle doves, and a partridge in a pear tree. Et cetera.2 How many birds did the singer receive from her true love? 3. The throw of an astragalus (see Section 1.4) has four possible outcomes, {1, 3, 4, 6}. When throwing four astragali, (a) How many ways are there to obtain a dog, i.e., for each astragalus to produce a 1? (b) How many ways are there to obtain a venus, i.e., for each astra- galus to produce a different outcome? Hint: Label each astragalus (e.g., antelope, bison, cow, deer) and keep track of the outcome of each distinct astragalus. 4. When throwing five astragali, (a) How many ways are there to obtain the throw of child-eating Cronos, i.e., to obtain three fours and two sixes? (b) How many ways are there to obtain the throw of Saviour Zeus, i.e., to obtain one one, two threes, and two fours? 5. The throw of one die has six possible outcomes, {1, 2, 3, 4, 5, 6}. A medieval poem, “The Chance of the Dyse,” enumerates the fortunes that could be divined from casting three dice. Order does not matter, e.g., the fortune associated with 6-5-3 is also associated with 3-5-6. How many fortunes does the poem enumerate? 6. Suppose that five cards are dealt from a standard deck of playing cards. (a) How many hands are possible? (b) How many straight-flush hands are possible? (c) How many 4-of-a-kind hands are possible? (d) Why do you suppose that a straight flush beats 4-of-a-kind? 2 You should be able to find the complete lyrics by doing a web search.
  • 44. 42 CHAPTER 2. MATHEMATICAL PRELIMINARIES 7. In the television reality game show Survivor, 16 contestants (the “cast- aways”) compete for $1 million. The castaways are stranded in a re- mote location, e.g., an uninhabited island in the China Sea. Initially, the castaways are divided into two tribes. The tribes compete in a se- quence of immunity challenges. After each challenge, the losing tribe must vote out one of its members and that person is eliminated from the game. Eventually, the tribes merge and the surviving castaways compete in a sequence of individual immunity challenges. The winner receives immunity and the merged tribe must then vote out one of its other members. After the merged tribe has been reduced to two mem- bers, a jury of the last 7 castaways to have been eliminated votes on who should be the Sole Survivor and win $1 million. (Technically, the jury votes for the Sole Survivor, but this is equivalent to eliminating one of the final two castaways.) (a) Suppose that we define an outcome of Survivor to be the name of the Sole Survivor. In any given game of Survivor, how many outcomes are possible? (b) Suppose that we define an outcome of Survivor to be a list of the castaways’ names, arranged in the order in which they were eliminated. In any given game of Survivor, how many outcomes are possible? 8. The final eight castaways in Survivor 2: Australian Outback included four men (Colby, Keith, Nick, and Rodger) and four women (Amber, Elisabeth, Jerri, and Tina). They participated in a reward challenge that required them to form four teams of two persons, one male and one female. (The teams raced over an obstacle course, recording the time of the slower team member.) The castaways elected to pair off by drawing lots. (a) How many ways were there for the castaways to form four teams? (b) Jerri was opposed to drawing lots—she wanted to team with Colby. How many ways are there for the castaways to form four male-female teams if one of the teams is Colby-Jerri? (c) If all pairings (male-male, male-female, female-female) are al- lowed, then how many ways are there for the castaways to form four teams?
  • 45. 2.5. EXERCISES 43 9. In Major League Baseball's World Series, the winners of the National (N) and American (A) League pennants play a sequence of games. The first team to win four games wins the Series. Thus, the Series must last at least four games and can last no more than seven games. Let us define an outcome of the World Series by identifying which League's pennant winner won each game. For example, the outcome of the 1975 World Series, in which the Cincinnati Reds represented the National League and the Boston Red Sox represented the American League, was ANNANAN. How many World Series outcomes are possible? 10. The following table defines a function that assigns to each feature film directed by Sergio Leone the year in which it was released.
A Fistful of Dollars 1964
For a Few Dollars More 1965
The Good, the Bad, and the Ugly 1966
Once Upon a Time in the West 1968
Duck, You Sucker! 1972
Once Upon a Time in America 1984
What is the inverse image of the set known as The Sixties? 11. For n = 0, 1, 2, . . ., let yn = Σ_{k=0}^{n} 2⁻ᵏ = 2⁻⁰ + 2⁻¹ + · · · + 2⁻ⁿ. (a) Compute y0, y1, y2, y3, and y4. (b) The sequence {y0, y1, y2, . . .} is an example of a sequence of partial sums. Guess the value of its limit, usually written lim_{n→∞} yn = lim_{n→∞} Σ_{k=0}^{n} 2⁻ᵏ = Σ_{k=0}^{∞} 2⁻ᵏ.
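(A numerical aside, not part of the original text: the following Python sketch illustrates Definition 2.12 for the sequence {1/n} of display (2.1) by finding, for a few values of ε > 0, an index N beyond which every term lies within ε of the limit 0. The function name is ours.)

    def index_within(epsilon):
        # Smallest N such that 1/n < epsilon for every n >= N; the loop stops at
        # the first n whose term already lies inside (0 - epsilon, 0 + epsilon).
        n = 1
        while 1.0 / n >= epsilon:
            n += 1
        return n

    for eps in (0.1, 0.01, 0.001):
        N = index_within(eps)
        print(eps, N, 1.0 / N)   # e.g. eps = 0.1 gives N = 11, and 1/11 is about 0.0909
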
  • 47. Chapter 3 Probability The goal of statistical inference is to draw conclusions about a population from “representative information” about it. In future chapters, we will dis- cover that a powerful way to obtain representative information about a pop- ulation is through the planned introduction of chance. Thus, probability is the foundation of statistical inference—to study the latter, we must first study the former. Fortunately, the theory of probability is an especially beautiful branch of mathematics. Although our purpose in studying proba- bility is to provide the reader with some tools that will be needed when we study statistics, we also hope to impart some of the beauty of those tools. 3.1 Interpretations of Probability Probabilistic statements can be interpreted in different ways. For example, how would you interpret the following statement? There is a 40 percent chance of rain today. Your interpretation is apt to vary depending on the context in which the statement is made. If the statement was made as part of a forecast by the National Weather Service, then something like the following interpretation might be appropriate: In the recent history of this locality, of all days on which present atmospheric conditions have been experienced, rain has occurred on approximately 40 percent of them. 45
  • 48. 46 CHAPTER 3. PROBABILITY This is an example of the frequentist interpretation of probability. With this interpretation, a probability is a long-run average proportion of occurrence. Suppose, however, that you had just peered out a window, wondering if you should carry an umbrella to school, and asked your roommate if she thought that it was going to rain. Unless your roommate is studying meteorology, it is not plausible that she possesses the knowledge required to make a frequentist statement! If her response was a casual "I'd say that there's a 40 percent chance," then something like the following interpretation might be appropriate: I believe that it might very well rain, but that it's a little less likely to rain than not. This is an example of the subjectivist interpretation of probability. With this interpretation, a probability expresses the strength of one's belief. The philosopher I. Hacking has observed that dual notions of probability, one aleatory (frequentist) and one epistemological (subjectivist), have co-existed throughout history, and that "philosophers seem singularly unable to put [them] asunder. . . "1 We shall not attempt so perilous an undertaking. But however we decide to interpret probabilities, we will need a formal mathematical description of probability to which we can appeal for insight and guidance. The remainder of this chapter provides an introduction to the most commonly adopted approach to axiomatic probability. The chapters that follow tend to emphasize a frequentist interpretation of probability, but the mathematical formalism can also be used with a subjectivist interpretation. 3.2 Axioms of Probability The mathematical model that has dominated the study of probability was formalized by the Russian mathematician A. N. Kolmogorov in a monograph published in 1933. The central concept in this model is a probability space, which is assumed to have three components: S A sample space, a universe of "possible" outcomes for the experiment in question. 1 I. Hacking, The Emergence of Probability, Cambridge University Press, 1975, Chapter 2: Duality.
  • 49. 3.2. AXIOMS OF PROBABILITY 47 C A designated collection of “observable” subsets (called events) of the sample space. P A probability measure, a function that assigns real numbers (called probabilities) to events. We describe each of these components in turn. The Sample Space The sample space is a set. Depending on the nature of the experiment in question, it may or may not be easy to decide upon an appropriate sample space. Example 3.1 A coin is tossed once. A plausible sample space for this experiment will comprise two outcomes, Heads and Tails. Denoting these outcomes by H and T, we have S = {H, T}. Remark: We have discounted the possibility that the coin will come to rest on edge. This is the first example of a theme that will recur throughout this text, that mathematical models are rarely—if ever—completely faithful representations of nature. As described by Mark Kac, “Models are, for the most part, caricatures of reality, but if they are good, then, like good caricatures, they portray, though per- haps in distorted manner, some of the features of the real world. The main role of models is not so much to explain and predict— though ultimately these are the main functions of science—as to polarize thinking and to pose sharp questions.”2 In Example 3.1, and in most of the other elementary examples that we will use to illustrate the fundamental concepts of axiomatic probability, the fi- delity of our mathematical descriptions to the physical phenomena described should be apparent. Practical applications of inferential statistics, however, often require imposing mathematical assumptions that may be suspect. Data analysts must constantly make judgments about the plausibility of their as- sumptions, not so much with a view to whether or not the assumptions are completely correct (they almost never are), but with a view to whether or not the assumptions are sufficient for the analysis to be meaningful. 2 Mark Kac, “Some mathematical models in science,” Science, 1969, 166:695–699.
  • 50. 48 CHAPTER 3. PROBABILITY Example 3.2 A coin is tossed twice. A plausible sample space for this experiment will comprise four outcomes, two outcomes per toss. Here, S = {HH, HT, TH, TT}. Example 3.3 An individual's height is measured. In this example, it is less clear what outcomes are possible. All human heights fall within certain bounds, but precisely what bounds should be specified? And what of the fact that heights are not measured exactly? Only rarely would one address these issues when choosing a sample space. For this experiment, most statisticians would choose as the sample space the set of all real numbers, then worry about which real numbers were actually observed. Thus, the phrase "possible outcomes" refers to conceptual rather than practical possibility. The sample space is usually chosen to be mathematically convenient and all-encompassing. The Collection of Events Events are subsets of the sample space, but how do we decide which subsets of S should be designated as events? If the outcome s ∈ S was observed and E ⊂ S is an event, then we say that E occurred if and only if s ∈ E. A subset of S is observable if it is always possible for the experimenter to determine whether or not it occurred. Our intent is that the collection of events should be the collection of observable subsets. This intent is often tempered by our desire for mathematical convenience and by our need for the collection to possess certain mathematical properties. In practice, the issue of observability is rarely considered and certain conventional choices are automatically adopted. For example, when S is a finite set, one usually designates all subsets of S to be events. Whether or not we decide to grapple with the issue of observability, the collection of events must satisfy the following properties: 1. The sample space is an event. 2. If E is an event, then Ec is an event. 3. The union of any countable collection of events is an event. A collection of subsets with these properties is sometimes called a sigma-field. Taken together, the first two properties imply that both S and ∅ must be events. If S and ∅ are the only events, then the third property holds;
  • 51. 3.2. AXIOMS OF PROBABILITY 49 hence, the collection {S, ∅} is a sigma-field. It is not, however, a very useful collection of events, as it describes a situation in which the experimental outcomes cannot be distinguished! Example 3.1 (continued) To distinguish Heads from Tails, we must assume that each of these individual outcomes is an event. Thus, the only plausible collection of events for this experiment is the collection of all subsets of S, i.e., C = {S, {H}, {T}, ∅} . Example 3.2 (continued) If we designate all subsets of S as events, then we obtain the following collection: C =                        S, {HH, HT, TH}, {HH, HT, TT}, {HH, TH, TT}, {HT, TH, TT}, {HH, HT}, {HH, TH}, {HH, TT}, {HT, TH}, {HT, TT}, {TH, TT}, {HH}, {HT}, {TH}, {TT}, ∅                        . This is perhaps the most plausible collection of events for this experiment, but others are also possible. For example, suppose that we were unable to distinguish the order of the tosses, so that we could not distinguish be- tween the outcomes HT and TH. Then the collection of events should not include any subsets that contain one of these outcomes but not the other, e.g., {HH, TH, TT}. Thus, the following collection of events might be deemed appropriate: C =              S, {HH, HT, TH}, {HT, TH, TT}, {HH, TT}, {HT, TH}, {HH}, {TT}, ∅              . The interested reader should verify that this collection is indeed a sigma- field. The Probability Measure Once the collection of events has been des- ignated, each event E ∈ C can be assigned a probability P(E). This must
  • 52. 50 CHAPTER 3. PROBABILITY be done according to specific rules; in particular, the probability measure P must satisfy the following properties: 1. If E is an event, then 0 ≤ P(E) ≤ 1. 2. P(S) = 1. 3. If {E1, E2, E3, . . .} is a countable collection of pairwise disjoint events, then P(∪_{i=1}^{∞} Ei) = Σ_{i=1}^{∞} P(Ei). We discuss each of these properties in turn. The first property states that probabilities are nonnegative and finite. Thus, neither the statement that "the probability that it will rain today is −.5" nor the statement that "the probability that it will rain today is infinity" are meaningful. These restrictions have certain mathematical consequences. The further restriction that probabilities are no greater than unity is actually a consequence of the second and third properties. The second property states that the probability that an outcome occurs, that something happens, is unity. Thus, the statement that "the probability that it will rain today is 2" is not meaningful. This is a convention that simplifies formulae and facilitates interpretation. The third property, called countable additivity, is the most interesting. Consider Example 3.2, supposing that {HT} and {TH} are events and that we want to compute the probability that exactly one Head is observed, i.e., the probability of {HT} ∪ {TH} = {HT, TH}. Because {HT} and {TH} are events, their union is an event and therefore has a probability. Because they are mutually exclusive, we would like that probability to be P({HT, TH}) = P({HT}) + P({TH}). We ensure this by requiring that the probability of the union of any two disjoint events is the sum of their respective probabilities. Having assumed that A ∩ B = ∅ ⇒ P(A ∪ B) = P(A) + P(B), (3.1)
  • 53. 3.2. AXIOMS OF PROBABILITY 51 it is easy to compute the probability of any finite union of pairwise disjoint events. For example, if A, B, C, and D are pairwise disjoint events, then P(A ∪ B ∪ C ∪ D) = P(A ∪ (B ∪ C ∪ D)) = P(A) + P(B ∪ C ∪ D) = P(A) + P(B ∪ (C ∪ D)) = P(A) + P(B) + P(C ∪ D) = P(A) + P(B) + P(C) + P(D). Thus, from (3.1) can be deduced the following implication: If E1, . . . , En are pairwise disjoint events, then P(∪_{i=1}^{n} Ei) = Σ_{i=1}^{n} P(Ei). This implication is known as finite additivity. Notice that the union of E1, . . . , En must be an event (and hence have a probability) because each Ei is an event. An extension of finite additivity, countable additivity is the following implication: If E1, E2, E3, . . . are pairwise disjoint events, then P(∪_{i=1}^{∞} Ei) = Σ_{i=1}^{∞} P(Ei). The reason for insisting upon this extension has less to do with applications than with theory. Although some axiomatic theories of probability assume only finite additivity, it is generally felt that the stronger assumption of countable additivity results in a richer theory. Again, notice that the union of E1, E2, . . . must be an event (and hence have a probability) because each Ei is an event. Finally, we emphasize that probabilities are assigned to events. It may or may not be that the individual experimental outcomes are events. If they are, then they will have probabilities. In some such cases (see Chapter 4), the probability of any event can be deduced from the probabilities of the individual outcomes; in other such cases (see Chapter 5), this is not possible.
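(To make finite additivity concrete, here is a small sketch, in Python and not part of the original text. The outcome probabilities are assumed for illustration only, corresponding to two tosses of a coin that lands Heads with probability 0.6.)

    # Assumed probabilities for the individual outcomes of the two-toss experiment.
    prob = {"HH": 0.36, "HT": 0.24, "TH": 0.24, "TT": 0.16}

    def P(event):
        # On a finite space, an event's probability is the sum of the probabilities
        # of the outcomes it contains.
        return sum(prob[s] for s in event)

    # {HT} and {TH} are disjoint, so additivity gives P({HT, TH}) = P({HT}) + P({TH}).
    print(P({"HT", "TH"}))          # 0.48
    print(P({"HT"}) + P({"TH"}))    # 0.48
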
  • 54. 52 CHAPTER 3. PROBABILITY All of the facts about probability that we will use in studying statistical inference are consequences of the assumptions of the Kolmogorov probability model. It is not the purpose of this book to present derivations of these facts; however, three elementary (and useful) propositions suggest how one might proceed along such lines. In each case, a Venn diagram helps to illustrate the proof. Theorem 3.1 If E is an event, then P(Ec) = 1 − P(E).
[Figure 3.1: A Venn diagram for the probability of Ec.]
Proof Refer to Figure 3.1. Ec is an event because E is an event. By definition, E and Ec are disjoint events whose union is S. Hence, 1 = P(S) = P(E ∪ Ec) = P(E) + P(Ec) and the theorem follows upon subtracting P(E) from both sides. ✷
  • 55. 3.2. AXIOMS OF PROBABILITY 53 Theorem 3.2 If A and B are events and A ⊂ B, then P(A) ≤ P(B).
[Figure 3.2: A Venn diagram for the probability of A ⊂ B.]
Proof Refer to Figure 3.2. Ac is an event because A is an event. Hence, B ∩ Ac is an event and B = A ∪ (B ∩ Ac). Because A and B ∩ Ac are disjoint events, P(B) = P(A) + P(B ∩ Ac) ≥ P(A), as claimed. ✷ Theorem 3.3 If A and B are events, then P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
  • 56. 54 CHAPTER 3. PROBABILITY
[Figure 3.3: A Venn diagram for the probability of A ∪ B.]
Proof Refer to Figure 3.3. Both A ∪ B and A ∩ B = (Ac ∪ Bc)c are events because A and B are events. Similarly, A ∩ Bc and B ∩ Ac are also events. Notice that A ∩ Bc, B ∩ Ac, and A ∩ B are pairwise disjoint events. Hence, P(A) + P(B) − P(A ∩ B) = P((A ∩ Bc) ∪ (A ∩ B)) + P((B ∩ Ac) ∪ (A ∩ B)) − P(A ∩ B) = P(A ∩ Bc) + P(A ∩ B) + P(B ∩ Ac) + P(A ∩ B) − P(A ∩ B) = P(A ∩ Bc) + P(A ∩ B) + P(B ∩ Ac) = P((A ∩ Bc) ∪ (A ∩ B) ∪ (B ∩ Ac)) = P(A ∪ B), as claimed. ✷ Theorem 3.3 provides a general formula for computing the probability of the union of two sets. Notice that, if A and B are in fact disjoint, then P(A ∩ B) = P(∅) = P(Sc) = 1 − P(S) = 1 − 1 = 0
  • 57. 3.3. FINITE SAMPLE SPACES 55 and we recover our original formula for that case. 3.3 Finite Sample Spaces Let S = {s1, . . . , sN} denote a sample space that contains N outcomes and suppose that every subset of S is an event. For notational convenience, let pi = P({si}) denote the probability of outcome i, for i = 1, . . . , N. Then, for any event A, we can write P(A) = P(∪_{si∈A} {si}) = Σ_{si∈A} P({si}) = Σ_{si∈A} pi. (3.2) Thus, if the sample space is finite, then the probabilities of the individual outcomes determine the probability of any event. The same reasoning applies if the sample space is denumerable. In this section, we focus on an important special case of finite probability spaces, the case of "equally likely" outcomes. By a fair coin, we mean a coin that when tossed is equally likely to produce Heads or Tails, i.e., the probability of each of the two possible outcomes is 1/2. By a fair die, we mean a die that when tossed is equally likely to produce any of six possible outcomes, i.e., the probability of each outcome is 1/6. In general, we say that the outcomes of a finite sample space are equally likely if pi = 1/N (3.3) for i = 1, . . . , N. In the case of equally likely outcomes, we substitute (3.3) into (3.2) and obtain P(A) = Σ_{si∈A} 1/N = #(A)/N = #(A)/#(S). (3.4) This equation reveals that, when the outcomes in a finite sample space are equally likely, calculating probabilities is just a matter of counting. The counting may be quite difficult, but the probability is trivial. We illustrate this point with some examples.
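(The following sketch, in Python and not part of the original text, applies equation (3.4) by brute force: it enumerates the equally likely outcomes of tossing two fair dice and counts those belonging to an illustrative event of ours, the outcomes whose faces sum to 7.)

    from itertools import product

    # Sample space for two fair dice: 36 equally likely ordered pairs.
    S = list(product(range(1, 7), repeat=2))

    # An illustrative event (not one from the text): the two faces sum to 7.
    A = [s for s in S if sum(s) == 7]

    # Equation (3.4): P(A) = #(A) / #(S) when the outcomes are equally likely.
    print(len(A), len(S), len(A) / len(S))   # 6 36 0.1666...
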
  • 58. 56 CHAPTER 3. PROBABILITY Example 3.4 A fair coin is tossed twice. What is the probability of observing exactly one Head? The sample space for this experiment was described in Example 3.2. Because the coin is fair, each of the four outcomes in S is equally likely. Let A denote the event that exactly one Head is observed. Then A = {HT, TH} and P(A) = #(A)/#(S) = 2/4 = 1/2 = 0.5. Example 3.5 A fair die is tossed once. What is the probability that the number of dots on the top face of the die is a prime number? The sample space for this experiment is S = {1, 2, 3, 4, 5, 6}. Because the die is fair, each of the six outcomes in S is equally likely. Let A denote the event that a prime number is observed. If we agree to count 1 as a prime number, then A = {1, 2, 3, 5} and P(A) = #(A)/#(S) = 4/6 = 2/3. Example 3.6 A deck of 40 cards, labelled 1, 2, 3, . . . , 40, is shuffled and cards are dealt as specified in each of the following scenarios. (a) One hand of four cards is dealt to Arlen. What is the probability that Arlen's hand contains four even numbers? Let S denote the possible hands that might be dealt. Because the order in which the cards are dealt is not important, #(S) = C(40, 4). Let A denote the event that the hand contains four even numbers. There are 20 even cards, so the number of ways of dealing 4 even cards is #(A) = C(20, 4). Substituting these expressions into (3.4), we obtain P(A) = #(A)/#(S) = C(20, 4)/C(40, 4) = 51/962 ≈ 0.0530.
  • 59. 3.3. FINITE SAMPLE SPACES 57 (b) One hand of four cards is dealt to Arlen. What is the probability that this hand is a straight, i.e., that it contains four consecutive numbers? Let S denote the possible hands that might be dealt. Again, #(S) = C(40, 4). Let A denote the event that the hand is a straight. The possible straights are: 1-2-3-4, 2-3-4-5, 3-4-5-6, . . . , 37-38-39-40. By simple enumeration (just count the number of ways of choosing the smallest number in the straight), there are 37 such hands. Hence, P(A) = #(A)/#(S) = 37/C(40, 4) = 1/2470 ≈ 0.0004. (c) One hand of four cards is dealt to Arlen and a second hand of four cards is dealt to Mike. What is the probability that Arlen's hand is a straight and Mike's hand contains four even numbers? Let S denote the possible pairs of hands that might be dealt. Dealing the first hand requires choosing 4 cards from 40. After this hand has been dealt, the second hand requires choosing an additional 4 cards from the remaining 36. Hence, #(S) = C(40, 4) · C(36, 4). Let A denote the event that Arlen's hand is a straight and Mike's hand contains four even numbers. There are 37 ways for Arlen's hand to be a straight. Each straight contains 2 even numbers, leaving 18 even numbers available for Mike's hand. Thus, for each way of dealing a straight to Arlen, there are C(18, 4) ways of dealing 4 even numbers to Mike. Hence, P(A) = #(A)/#(S) = [37 · C(18, 4)]/[C(40, 4) · C(36, 4)] ≈ 2.1032 × 10⁻⁵.
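(The counts in Example 3.6 are easy to verify by machine; the sketch below, in Python using the standard library's math.comb and not part of the original text, reproduces the probabilities in parts (a), (b), and (c). The variable names are ours.)

    from math import comb

    hands = comb(40, 4)                    # #(S) for a single four-card hand

    p_four_even = comb(20, 4) / hands      # part (a): about 0.0530
    p_straight = 37 / hands                # part (b): 37 straights, about 0.0004

    # Part (c): each straight uses 2 even cards, leaving 18 even cards for Mike.
    p_both = 37 * comb(18, 4) / (hands * comb(36, 4))

    print(p_four_even, p_straight, p_both) # about 0.0530, 0.0004, 2.1e-05
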
Example 3.7 Five fair dice are tossed simultaneously.

Let S denote the possible outcomes of this experiment. Each die has 6 possible outcomes, so #(S) = 6 · 6 · 6 · 6 · 6 = 6^5.

(a) What is the probability that the top faces of the dice all show the same number of dots?

Let A denote the specified event; then A comprises the following outcomes:

1-1-1-1-1
2-2-2-2-2
3-3-3-3-3
4-4-4-4-4
5-5-5-5-5
6-6-6-6-6

By simple enumeration, #(A) = 6. (Another way to obtain #(A) is to observe that the first die might result in any of six numbers, after which only one number is possible for each of the four remaining dice. Hence, #(A) = 6 · 1 · 1 · 1 · 1 = 6.) It follows that

P(A) = #(A)/#(S) = 6/6^5 = 1/1296 ≈ 0.0008.

(b) What is the probability that the top faces of the dice show exactly four different numbers?

Let A denote the specified event. If there are exactly 4 different numbers, then exactly 1 number must appear twice. There are 6 ways to choose the number that appears twice and \binom{5}{2} ways to choose the two dice on which this number appears. There are 5 · 4 · 3 ways to choose the 3 different numbers on the remaining dice. Hence,

P(A) = #(A)/#(S) = 6 · \binom{5}{2} · 5 · 4 · 3 / 6^5 = 25/54 ≈ 0.4630.

(c) What is the probability that the top faces of the dice show exactly three 6's or exactly two 5's?
Let A denote the event that exactly three 6's are observed and let B denote the event that exactly two 5's are observed. We must calculate

P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = [ #(A) + #(B) − #(A ∩ B) ] / #(S).

There are \binom{5}{3} ways of choosing the three dice on which a 6 appears and 5 · 5 ways of choosing a different number for each of the two remaining dice. Hence,

#(A) = \binom{5}{3} · 5^2.

There are \binom{5}{2} ways of choosing the two dice on which a 5 appears and 5 · 5 · 5 ways of choosing a different number for each of the three remaining dice. Hence,

#(B) = \binom{5}{2} · 5^3.

There are \binom{5}{3} ways of choosing the three dice on which a 6 appears and only 1 way in which a 5 can then appear on the two remaining dice. Hence,

#(A ∩ B) = \binom{5}{3} · 1.

Thus,

P(A ∪ B) = [ \binom{5}{3} · 5^2 + \binom{5}{2} · 5^3 − \binom{5}{3} ] / 6^5 = 1490/6^5 ≈ 0.1916.

Example 3.8 (The Birthday Problem) In a class of k students, what is the probability that at least two students share a common birthday?

As is inevitably the case with constructing mathematical models of actual phenomena, some simplifying assumptions are required to make this problem tractable. We begin by assuming that there are 365 possible birthdays, i.e., we ignore February 29. Then the sample space, S, of possible birthdays for k students comprises 365^k outcomes.

Next we assume that each of the 365^k outcomes is equally likely. This is not literally correct, as slightly more babies are born in some seasons than
in others. Furthermore, if the class contains twins, then only certain pairs of birthdays are possible outcomes for those two students! In most situations, however, the assumption of equally likely outcomes is reasonably plausible.

Let A denote the event that at least two students in the class share a birthday. We might attempt to calculate

P(A) = #(A)/#(S),

but a moment's reflection should convince the reader that counting the number of outcomes in A is an extremely difficult undertaking. Instead, we invoke Theorem 3.1 and calculate

P(A) = 1 − P(A^c) = 1 − #(A^c)/#(S).

This is considerably easier, because we count the number of outcomes in which each student has a different birthday by observing that 365 possible birthdays are available for the oldest student, after which 364 possible birthdays remain for the next oldest student, after which 363 possible birthdays remain for the next, etc. The formula is

#(A^c) = 365 · 364 · · · (366 − k)

and so

P(A) = 1 − [ 365 · 364 · · · (366 − k) ] / [ 365 · 365 · · · 365 ].

The reader who computes P(A) for several choices of k may be astonished to discover that a class of just k = 23 students is required to obtain P(A) > 0.5!
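Readers who wish to compute P(A) for several choices of k can do so with a short loop. The sketch below is an illustrative Python fragment of our own, not part of the text's development; it implements the displayed formula and confirms that the probability first exceeds 0.5 at k = 23.

def p_shared_birthday(k):
    """P(at least two of k students share a birthday), ignoring February 29."""
    p_all_different = 1.0
    for i in range(k):
        p_all_different *= (365 - i) / 365    # 365 · 364 · · · (366 − k) over 365^k
    return 1 - p_all_different

for k in (10, 22, 23, 30, 40):
    print(k, round(p_shared_birthday(k), 4))
# k = 23 is the smallest class size for which the probability exceeds 0.5 (≈ 0.5073)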
3.4 Conditional Probability

Consider a sample space with 10 equally likely outcomes, together with the events indicated in the Venn diagram that appears in Figure 3.4. Applying the methods of Section 3.3, we find that the (unconditional) probability of A is

P(A) = #(A)/#(S) = 3/10 = 0.3.

Suppose, however, that we know that we can restrict attention to the experimental outcomes that lie in B. Then the conditional probability of the event A given the occurrence of the event B is

P(A|B) = #(A ∩ B)/#(S ∩ B) = 1/5 = 0.2.

Notice that (for this example) the conditional probability, P(A|B), differs from the unconditional probability, P(A).

Figure 3.4: A Venn diagram that illustrates conditional probability. Each ⋆ represents an individual outcome.

To develop a definition of conditional probability that is not specific to finite sample spaces with equally likely outcomes, we now write

P(A|B) = #(A ∩ B)/#(S ∩ B) = [ #(A ∩ B)/#(S) ] / [ #(B)/#(S) ] = P(A ∩ B)/P(B).

We take this as a definition:

Definition 3.1 If A and B are events, and P(B) > 0, then

P(A|B) = P(A ∩ B)/P(B).   (3.5)
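For finite sample spaces with equally likely outcomes, Definition 3.1 reduces to the counting argument that motivated it. A brief Python illustration (ours, not the text's; the counts below are read off Figure 3.4 as described above) makes the point.

from fractions import Fraction

# The situation of Figure 3.4: 10 equally likely outcomes,
# 3 of them in A, 5 of them in B, and 1 of them in both.
n_S, n_A, n_B, n_AB = 10, 3, 5, 1

P_A = Fraction(n_A, n_S)              # unconditional probability of A: 3/10
P_A_given_B = Fraction(n_AB, n_B)     # #(A ∩ B) / #(B): 1/5
print(P_A, P_A_given_B)               # 3/10 1/5

# Equivalently, via Definition 3.1: P(A|B) = P(A ∩ B) / P(B)
print(Fraction(n_AB, n_S) / Fraction(n_B, n_S))   # 1/5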
  • 64. 62 CHAPTER 3. PROBABILITY The following consequence of Definition 3.1 is extremely useful. Upon multiplication of equation (3.5) by P(B), we obtain P(A ∩ B) = P(B)P(A|B) when P(B) > 0. Furthermore, upon interchanging the roles of A and B, we obtain P(A ∩ B) = P(B ∩ A) = P(A)P(B|A) when P(A) > 0. We will refer to these equations as the multiplication rule for conditional probability. Used in conjunction with tree diagrams, the multiplication rule provides a powerful tool for analyzing situations that involve conditional probabilities. Example 3.9 Consider three fair coins, identical except that one coin (HH) is Heads on both sides, one coin (HT) is Heads on one side and Tails on the other, and one coin (TT) is Tails on both sides. A coin is selected at random and tossed. The face-up side of the coin is Heads. What is the probability that the face-down side of the coin is Heads? This problem was once considered by Marilyn vos Savant in her syndi- cated column, Ask Marilyn. As have many of the probability problems that she has considered, it generated a good deal of controversy. Many readers reasoned as follows: 1. The observation that the face-up side of the tossed coin is Heads means that the selected coin was not TT. Hence the selected coin was either HH or HT. 2. If HH was selected, then the face-down side is Heads; if HT was selected, then the face-down side is Tails. 3. Hence, there is a 1 in 2, or 50 percent, chance that the face-down side is Heads. At first glance, this reasoning seems perfectly plausible and readers who advanced it were dismayed that Marilyn insisted that .5 is not the correct probability. How did these readers err? A tree diagram of this experiment is depicted in Figure 3.5. The branches represent possible outcomes and the numbers associated with the branches are the respective probabilities of those outcomes. The initial triple of branches represents the initial selection of a coin—we have interpreted “at
Figure 3.5: A tree diagram for Example 3.9. The first level of branches selects a coin (HH, HT, or TT, each with probability 1/3), the second level gives the up-side of the toss, and the third level gives the down-side; the four paths have probabilities 1/3, 1/6, 1/6, and 1/3.
random” to mean that each coin is equally likely to be selected. The second level of branches represents the toss of the coin by identifying its resulting up-side. For HH and TT, only one outcome is possible; for HT, there are two equally likely outcomes. Finally, the third level of branches represents the down-side of the tossed coin. In each case, this outcome is determined by the up-side. The multiplication rule for conditional probability makes it easy to calcu- late the probabilities of the various paths through the tree. The probability that HT is selected and the up-side is Heads and the down-side is Tails is P(HT ∩ up=H ∩ down=T) = P(HT ∩ up=H) · P(down=T|HT ∩ up=H) = P(HT) · P(up=H|HT) · 1 = (1/3) · (1/2) · 1 = 1/6 and the probability that HH is selected and the up-side is Heads and the down-side is Heads is P(HH ∩ up=H ∩ down=H) = P(HH ∩ up=H) · P(down=H|HH ∩ up=H) = P(HH) · P(up=H|HH) · 1 = (1/3) · 1 · 1 = 1/3.
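The tree probabilities just computed can also be checked by simulation. The following Python sketch is merely illustrative (the seed and the number of repetitions are arbitrary choices of ours): it repeats the select-and-toss experiment many times and records how often the down-side is Heads among those trials in which the up-side is Heads. The simulated relative frequency settles near 2/3 rather than 1/2, anticipating the calculation below.

import random

random.seed(1)
coins = [("H", "H"), ("H", "T"), ("T", "T")]    # HH, HT, TT

n_up_heads = n_down_heads_too = 0
for _ in range(100_000):
    coin = random.choice(coins)        # select a coin at random
    i = random.randrange(2)            # toss it: either face is equally likely to land up
    up, down = coin[i], coin[1 - i]
    if up == "H":
        n_up_heads += 1
        if down == "H":
            n_down_heads_too += 1

print(n_down_heads_too / n_up_heads)   # ≈ 2/3, not 1/2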
Once these probabilities have been computed, it is easy to answer the original question:

P(down=H|up=H) = P(down=H ∩ up=H) / P(up=H) = (1/3) / [ (1/3) + (1/6) ] = 2/3,

which was Marilyn's answer. From the tree diagram, we can discern the fallacy in our first line of reasoning. Having narrowed the possible coins to HH and HT, we claimed that HH and HT were equally likely candidates to have produced the observed Head. In fact, HH was twice as likely as HT. Once this fact is noted it seems completely intuitive (HH has twice as many Heads as HT), but it is easily overlooked. This is an excellent example of how the use of tree diagrams may prevent subtle errors in reasoning.

Example 3.10 (Bayes Theorem) An important application of conditional probability can be illustrated by considering a population of patients at risk for contracting the HIV virus. The population can be partitioned into two sets: those who have contracted the virus and developed antibodies to it, and those who have not contracted the virus and lack antibodies to it. We denote the first set by D and the second set by D^c.

An ELISA test was designed to detect the presence of HIV antibodies in human blood. This test also partitions the population into two sets: those who test positive for HIV antibodies and those who test negative for HIV antibodies. We denote the first set by + and the second set by −. Together, the partitions induced by the true disease state and by the observed test outcome partition the population into four sets, as in the following Venn diagram:

D ∩ +     D ∩ −
D^c ∩ +   D^c ∩ −     (3.6)

In two of these cases, D ∩ + and D^c ∩ −, the test provides the correct diagnosis; in the other two cases, D^c ∩ + and D ∩ −, the test results in a diagnostic error. We call D^c ∩ + a false positive and D ∩ − a false negative.

In such situations, several quantities are likely to be known, at least approximately. The medical establishment is likely to have some notion of P(D), the probability that a patient selected at random from the population is infected with HIV. This is the proportion of the population that is
infected—it is called the prevalence of the disease. For the calculations that follow, we will assume that P(D) = .001.

Because diagnostic procedures undergo extensive evaluation before they are approved for general use, the medical establishment is likely to have a fairly precise notion of the probabilities of false positive and false negative test results. These probabilities are conditional: a false positive is a positive test result within the set of patients who are not infected and a false negative is a negative test result within the set of patients who are infected. Thus, the probability of a false positive is P(+|D^c) and the probability of a false negative is P(−|D). For the calculations that follow, we will assume that P(+|D^c) = .015 and P(−|D) = .003.³

³ See E.M. Sloan et al. (1991), "HIV Testing: State of the Art," Journal of the American Medical Association, 266:2861–2866.

Now suppose that a randomly selected patient has a positive ELISA test result. Obviously, the patient has an extreme interest in properly assessing the chances that a diagnosis of HIV is correct. This can be expressed as P(D|+), the conditional probability that a patient has HIV given a positive ELISA test. This quantity is called the predictive value of the test.

Figure 3.6: A tree diagram for Example 3.10. The first level of branches gives the disease state (D with probability 0.001, D^c with probability 0.999); the second level gives the test result, with branch probabilities 0.997 and 0.003 from D, and 0.015 and 0.985 from D^c.

To motivate our calculation of P(D|+), it is again helpful to construct a tree diagram, as in Figure 3.6. This diagram was constructed so that the branches depicted in the tree have known probabilities, i.e., we first branch on the basis of disease state because P(D) and P(D^c) are known, then on the basis of test result because P(+|D), P(−|D), P(+|D^c), and P(−|D^c) are known. Notice that each of the four paths in the tree corresponds to exactly one of the four sets in (3.6). Furthermore, we can calculate the probability of
each set by multiplying the probabilities that occur along its corresponding path:

P(D ∩ +) = P(D) · P(+|D) = 0.001 · 0.997,
P(D ∩ −) = P(D) · P(−|D) = 0.001 · 0.003,
P(D^c ∩ +) = P(D^c) · P(+|D^c) = 0.999 · 0.015,
P(D^c ∩ −) = P(D^c) · P(−|D^c) = 0.999 · 0.985.

The predictive value of the test is now obtained by computing

P(D|+) = P(D ∩ +) / P(+)
       = P(D ∩ +) / [ P(D ∩ +) + P(D^c ∩ +) ]
       = (0.001 · 0.997) / (0.001 · 0.997 + 0.999 · 0.015)
       ≈ 0.0624.

This probability may seem quite small, but consider that a positive test result can be obtained in two ways. If the person has the HIV virus, then a positive result is obtained with high probability, but very few people actually have the virus. If the person does not have the HIV virus, then a positive result is obtained with low probability, but so many people do not have the virus that the combined number of false positives is quite large relative to the number of true positives. This is a common phenomenon when screening for diseases.

The preceding calculations can be generalized and formalized in a formula known as Bayes Theorem; however, because such calculations will not play an important role in this book, we prefer to emphasize the use of tree diagrams to derive the appropriate calculations on a case-by-case basis.

Independence We now introduce a concept that is of fundamental importance in probability and statistics. The intuitive notion that we wish to formalize is the following:

Two events are independent if the occurrence of either is unaffected by the occurrence of the other.

This notion can be expressed mathematically using the concept of conditional probability. Let A and B denote events and assume for the moment that the probability of each is strictly positive. If A and B are to be regarded as independent, then the occurrence of A is not affected by the occurrence of B. This can be expressed by writing

P(A|B) = P(A).   (3.7)
  • 69. 3.4. CONDITIONAL PROBABILITY 67 Similarly, the occurrence of B is not affected by the occurrence of A. This can be expressed by writing P(B|A) = P(B). (3.8) Substituting the definition of conditional probability into (3.7) and mul- tiplying by P(B) leads to the equation P(A ∩ B) = P(A) · P(B). Substituting the definition of conditional probability into (3.8) and multi- plying by P(A) leads to the same equation. We take this equation, called the multiplication rule for independence, as a definition: Definition 3.2 Two events A and B are independent if and only if P(A ∩ B) = P(A) · P(B). We proceed to explore some consequences of this definition. Example 3.11 Notice that we did not require P(A) > 0 or P(B) > 0 in Definition 3.2. Suppose that P(A) = 0 or P(B) = 0, so that P(A)·P(B) = 0. Because A ∩ B ⊂ A, P(A ∩ B) ≤ P(A); similarly, P(A ∩ B) ≤ P(B). It follows that 0 ≤ P(A ∩ B) ≤ min(P(A), P(B)) = 0 and therefore that P(A ∩ B) = 0 = P(A) · P(B). Thus, if either of two events has probability zero, then the events are neces- sarily independent. Example 3.12 Consider the disjoint events depicted in Figure 3.7 and suppose that P(A) > 0 and P(B) > 0. Are A and B independent? Many students instinctively answer that they are, but independence is very dif- ferent from mutual exclusivity. In fact, if A occurs then B does not (and vice versa), so Figure 3.7 is actually a fairly extreme example of dependent events. This can also be deduced from Definition 3.2: P(A) · P(B) > 0, but P(A ∩ B) = P(∅) = 0 so A and B are not independent.
Figure 3.7: A Venn diagram for Example 3.12.

Example 3.13 For each of the following, explain why the events A and B are or are not independent.

(a) P(A) = 0.4, P(B) = 0.5, P([A ∪ B]^c) = 0.3.

It follows that

P(A ∪ B) = 1 − P([A ∪ B]^c) = 1 − 0.3 = 0.7

and, because P(A ∪ B) = P(A) + P(B) − P(A ∩ B), that

P(A ∩ B) = P(A) + P(B) − P(A ∪ B) = 0.4 + 0.5 − 0.7 = 0.2.

Then, since

P(A) · P(B) = 0.5 · 0.4 = 0.2 = P(A ∩ B),

it follows that A and B are independent events.

(b) P(A ∩ B^c) = 0.3, P(A^c ∩ B) = 0.2, P(A^c ∩ B^c) = 0.1.

Refer to the Venn diagram in Figure 3.8 to see that

P(A) · P(B) = 0.7 · 0.6 = 0.42 ≠ 0.40 = P(A ∩ B)

and hence that A and B are dependent events.
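Both parts of Example 3.13 amount to reconstructing the four pieces of the Venn diagram and then testing the multiplication rule. A small Python helper (an illustration of ours, using exact fractions to avoid rounding) carries out that test.

from fractions import Fraction as F

def independent(p_A_and_B, p_A_only, p_B_only, p_neither):
    """Check the multiplication rule P(A ∩ B) = P(A)·P(B), given the four
    pieces of the Venn diagram: A ∩ B, A ∩ Bᶜ, Aᶜ ∩ B, and Aᶜ ∩ Bᶜ."""
    assert p_A_and_B + p_A_only + p_B_only + p_neither == 1
    p_A = p_A_and_B + p_A_only
    p_B = p_A_and_B + p_B_only
    return p_A * p_B == p_A_and_B

# (a) From P([A ∪ B]ᶜ) = 0.3 we found P(A ∩ B) = 0.2, so the pieces are 0.2, 0.2, 0.3, 0.3.
print(independent(F(2, 10), F(2, 10), F(3, 10), F(3, 10)))   # True:  0.4 · 0.5 = 0.2
# (b) Here P(A ∩ B) = 1 − 0.3 − 0.2 − 0.1 = 0.4, with P(A) = 0.7 and P(B) = 0.6.
print(independent(F(4, 10), F(3, 10), F(2, 10), F(1, 10)))   # False: 0.42 ≠ 0.40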
Figure 3.8: A Venn diagram for Example 3.13(b), with regions P(A ∩ B^c) = 0.3, P(A^c ∩ B) = 0.2, and P(A^c ∩ B^c) = 0.1.

Thus far we have verified that two events are independent by verifying that the multiplication rule for independence holds. In applications, however, we usually reason somewhat differently. Using our intuitive notion of independence, we appeal to common sense, our knowledge of science, etc., to decide if independence is a property that we wish to incorporate into our mathematical model of the experiment in question. If it is, then we assume that two events are independent and the multiplication rule for independence becomes available to us for use as a computational formula.

Example 3.14 Consider an experiment in which a typical penny is first tossed, then spun. Let A denote the event that the toss results in Heads and let B denote the event that the spin results in Heads. What is the probability of observing two Heads?

We assume that, for a typical penny, P(A) = 0.5 and P(B) = 0.3 (see Section 1.1.1). Common sense tells us that the occurrence of either event is unaffected by the occurrence of the other. (Time is not reversible, so obviously the occurrence of A is not affected by the occurrence of B. One
  • 72. 70 CHAPTER 3. PROBABILITY might argue that tossing the penny so that A occurs results in wear that is slightly different than the wear that results if Ac occurs, thereby slightly affecting the subsequent probability that B occurs. However, this argument strikes most students as completely preposterous. Even if it has a modicum of validity, the effect is undoubtedly so slight that we can safely neglect it in constructing our mathematical model of the experiment.) Therefore, we assume that A and B are independent and calculate that P(A ∩ B) = P(A) · P(B) = 0.5 · 0.3 = 0.15. Example 3.15 For each of the following, explain why the events A and B are or are not independent. (a) Consider the population of William & Mary undergraduate students, from which one student is selected at random. Let A denote the event that the student is female and let B denote the event that the student is concentrating in elementary education. I’m told that P(A) is roughly 60 percent, while it appears to me that P(A|B) exceeds 90 percent. Whatever the exact probabilities, it is evident that the probability that a random elementary education con- centrator is female is considerably greater than the probability that a random student is female. Hence, A and B are dependent events. (b) Consider the population of registered voters, from which one voter is selected at random. Let A denote the event that the voter belongs to a country club and let B denote the event that the voter is a Republican. It is generally conceded that one finds a greater proportion of Repub- licans among the wealthy than in the general population. Since one tends to find a greater proportion of wealthy persons at country clubs than in the general population, it follows that the probability that a random country club member is a Republican is greater than the prob- ability that a randomly selected voter is a Republican. Hence, A and B are dependent events.4 4 This phenomenon may seem obvious, but it was overlooked by the respected Literary Digest poll. Their embarrassingly awful prediction of the 1936 presidential election resulted in the previously popular magazine going out of business. George Gallup’s relatively accurate prediction of the outcome (and his uncannily accurate prediction of what the Literary Digest poll would predict) revolutionized polling practices.
  • 73. 3.4. CONDITIONAL PROBABILITY 71 Before progressing further, we ask what it should mean for A, B, and C to be three mutually independent events. Certainly each pair should comprise two independent events, but we would also like to write P(A ∩ B ∩ C) = P(A) · P(B) · P(C). It turns out that this equation cannot be deduced from the pairwise inde- pendence of A, B, and C, so we have to include it in our definition of mutual independence. Similar equations must be included when defining the mutual independence of more than three events. Here is a general definition: Definition 3.3 Let {Aα} be an arbitrary collection of events. These events are mutually independent if and only if, for every finite choice of events Aα1 , . . . , Aαk , P (Aα1 ∩ · · · ∩ Aαk ) = P (Aα1 ) · · · P (Aαk ) . Example 3.16 In the preliminary hearing for the criminal trial of O.J. Simpson, the prosecution presented conventional blood-typing evidence that blood found at the murder scene possessed three characteristics also pos- sessed by Simpson’s blood. The prosecution also presented estimates of the prevalence of each characteristic in the general population, i.e., of the proba- bilities that a person selected at random from the general population would possess these characteristics. Then, to obtain the estimated probability that a randomly selected person would possess all three characteristics, the pros- ecution multiplied the three individual probabilities, resulting in an estimate of 0.005. In response to this evidence, defense counsel Gerald Uehlman objected that the prosecution had not established that the three events in question were independent and therefore had not justified their use of the multipli- cation rule. The prosecution responded that it was standard practice to multiply such probabilities and Judge Kennedy-Powell admitted the 0.005 estimate on that basis. No attempt was made to assess whether or not the standard practice was proper; it was inferred from the fact that the practice was standard that it must be proper. In this example, science and law di- verge. From a scientific perspective, Gerald Uehlman was absolutely correct in maintaining that an assumption of independence must be justified.
  • 74. 72 CHAPTER 3. PROBABILITY 3.5 Random Variables Informally, a random variable is a rule for assigning real numbers to exper- imental outcomes. By convention, random variables are usually denoted by upper case Roman letters near the end of the alphabet, e.g., X, Y , Z. Example 3.17 A coin is tossed once and Heads (H) or Tails (T) is observed. The sample space for this experiment is S = {H, T}. For reasons that will become apparent, it is often convenient to assign the real number 1 to Heads and the real number 0 to Tails. This assignment, which we denote by the random variable X, can be depicted as follows: H T X −→ 1 0 In functional notation, X : S → ℜ and the rule of assignment is defined by X(H) = 1, X(T) = 0. Example 3.18 A coin is tossed twice and the number of Heads is counted. The sample space for this experiment is S = {HH, HT, TH, TT}. We want to assign the real number 2 to the outcome HH, the real number 1 to the outcomes HT and TH, and the real number 0 to the outcome TT. Several representations of this assignment are possible: (a) Direct assignment, which we denote by the random variable Y , can be depicted as follows: HH HT TH TT Y −→ 2 1 1 0 In functional notation, Y : S → ℜ and the rule of assignment is defined by Y (HH) = 2, Y (HT) = Y (TH) = 1, Y (TT) = 0. (b) Instead of directly assigning the counts, we might take the intermediate step of assigning an ordered pair of numbers to each outcome. As in
  • 75. 3.5. RANDOM VARIABLES 73 Example 3.17, we assign 1 to each occurence of Heads and 0 to each occurence of Tails. We denote this assignment by X : S → ℜ2. In this context, X = (X1, X2) is called a random vector. Each component of the random vector X is a random variable. Next, we define a function g : ℜ2 → ℜ by g(x1, x2) = x1 + x2. The composition g(X) is equivalent to the random variable Y , as re- vealed by the following depiction: HH HT TH TT X −→ (1, 1) (1, 0) (0, 1) (0, 0) g −→ 2 1 1 0 (c) The preceding representation suggests defining two random variables, X1 and X2, as in the following depiction: 1 1 0 0 X1 ←− HH HT TH TT X2 −→ 1 0 1 0 As in the preceding representation, the random variable X1 counts the number of Heads observed on the first toss and the random variable X2 counts the number of Heads observed on the second toss. The sum of these random variables, X1 +X2, is evidently equivalent to the random variable Y . The primary reason that we construct a random variable, X, is to replace the probability space that is naturally suggested by the experiment in ques- tion with a familiar probability space in which the possible outcomes are real numbers. Thus, we replace the original sample space, S, with the familiar number line, ℜ. To complete the transference, we must decide which subsets of ℜ will be designated as events and we must specify how the probabilities of these events are to be calculated. It is an interesting fact that it is impossible to construct a probability space in which the set of outcomes is ℜ and every subset of ℜ is an event. For this reason, we define the collection of events to be the smallest collection of subsets that satisfies the assumptions of the Kolmogorov probability model and that contains every interval of the form (−∞, y]. This collection is called the Borel sets and it is a very large collection of subsets of ℜ. In particular, it contains every interval of real numbers and every set that can
be constructed by applying a countable number of set operations (union, intersection, complementation) to intervals. Most students will never see a set that is not a Borel set!

Finally, we must define a probability measure that assigns probabilities to Borel sets. Of course, we want to do so in a way that preserves the probability structure of the experiment in question. The only way to do so is to define the probability of each Borel set B to be the probability of the set of outcomes to which X assigns a value in B. This set of outcomes is denoted by

X^{-1}(B) = {s ∈ S : X(s) ∈ B}

and is depicted in Figure 3.9.

Figure 3.9: The inverse image of a Borel set. The random variable X maps the subset X^{-1}(B) of the sample space S to the Borel set B ⊂ ℜ.
How do we know that the set of outcomes to which X assigns a value in B is an event and therefore has a probability? We don’t, so we guarantee that it is by including this requirement in our formal definition of a random variable.
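For a finite experiment these inverse images, and the probabilities they induce, are easy to exhibit concretely. The following Python sketch is purely illustrative (the function names are our own choices); it revisits Example 3.18.

from itertools import product

# Example 3.18 again: toss a fair coin twice and let X(s) be the number of Heads.
S = list(product("HT", repeat=2))        # {HH, HT, TH, TT}, each with probability 1/4
P = {s: 0.25 for s in S}

def X(s):
    return s.count("H")

def inverse_image(B):
    """X⁻¹(B) = {s ∈ S : X(s) ∈ B}."""
    return [s for s in S if X(s) in B]

def P_X(B):
    """The induced probability P_X(B) = P(X⁻¹(B))."""
    return sum(P[s] for s in inverse_image(B))

print(inverse_image({1}))                # [('H', 'T'), ('T', 'H')]
print(P_X({1}), P_X({0, 1}))             # 0.5 0.75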
Definition 3.4 A function X : S → ℜ is a random variable if and only if P({s ∈ S : X(s) ≤ y}) exists for all choices of y ∈ ℜ.

We will denote the probability measure induced by the random variable X by P_X. The following equation defines various representations of P_X:

P_X((−∞, y]) = P( X^{-1}((−∞, y]) ) = P({s ∈ S : X(s) ∈ (−∞, y]}) = P(−∞ < X ≤ y) = P(X ≤ y).

A probability measure on the Borel sets is called a probability distribution and P_X is called the distribution of the random variable X. A hallmark feature of probability theory is that we study the distributions of random variables rather than arbitrary probability measures. One important reason for this emphasis is that many different experiments may result in identical distributions. For example, the random variable in Example 3.17 might have the same distribution as a random variable that assigns 1 to male newborns and 0 to female newborns.

Cumulative Distribution Functions Our construction of the probability measure induced by a random variable suggests that the following function will be useful in describing the properties of random variables.

Definition 3.5 The cumulative distribution function (cdf) of a random variable X is the function F : ℜ → ℜ defined by

F(y) = P(X ≤ y).

Example 3.17 (continued) We consider two probability structures that might obtain in the case of a typical penny.

(a) A typical penny is tossed.

For this experiment, P(H) = P(T) = 0.5, and the following values of the cdf are easily determined:
– If y < 0, e.g., y = −9.1185 or y = −0.3018, then F(y) = P(X ≤ y) = P(∅) = 0.
– F(0) = P(X ≤ 0) = P({T}) = 0.5.
– If y ∈ (0, 1), e.g., y = 0.6241 or y = 0.9365, then F(y) = P(X ≤ y) = P({T}) = 0.5.
– F(1) = P(X ≤ 1) = P({T, H}) = 1.
– If y > 1, e.g., y = 1.5248 or y = 7.7397, then F(y) = P(X ≤ y) = P({T, H}) = 1.

The entire cdf is plotted in Figure 3.10.

Figure 3.10: The cumulative distribution function for tossing a penny with P(Heads) = 0.5.

(b) A typical penny is spun.

For this experiment, we assume that P(H) = 0.3 and P(T) = 0.7 (see Section 1.1.1). Then the following values of the cdf are easily determined:
– If y < 0, e.g., y = −1.6633 or y = −0.5485, then F(y) = P(X ≤ y) = P(∅) = 0.
– F(0) = P(X ≤ 0) = P({T}) = 0.7.
– If y ∈ (0, 1), e.g., y = 0.0685 or y = 0.4569, then F(y) = P(X ≤ y) = P({T}) = 0.7.
– F(1) = P(X ≤ 1) = P({T, H}) = 1.
– If y > 1, e.g., y = 1.4789 or y = 2.6117, then F(y) = P(X ≤ y) = P({T, H}) = 1.

The entire cdf is plotted in Figure 3.11.

Figure 3.11: The cumulative distribution function for spinning a penny with P(Heads) = 0.3.
Example 3.18 (continued) Suppose that the coin is fair, so that each of the four possible outcomes in S is equally likely, i.e., has probability 0.25. Then the following values of the cdf are easily determined:

• If y < 0, e.g., y = −4.2132 or y = −0.5615, then F(y) = P(X ≤ y) = P(∅) = 0.
• F(0) = P(X ≤ 0) = P({TT}) = 0.25.
• If y ∈ (0, 1), e.g., y = 0.3074 or y = 0.6924, then F(y) = P(X ≤ y) = P({TT}) = 0.25.
• F(1) = P(X ≤ 1) = P({TT, HT, TH}) = 0.75.
• If y ∈ (1, 2), e.g., y = 1.4629 or y = 1.5159, then F(y) = P(X ≤ y) = P({TT, HT, TH}) = 0.75.
• F(2) = P(X ≤ 2) = P({TT, HT, TH, HH}) = 1.
• If y > 2, e.g., y = 2.1252 or y = 3.7790, then F(y) = P(X ≤ y) = P({TT, HT, TH, HH}) = 1.

The entire cdf is plotted in Figure 3.12.

Let us make some observations about the cdfs that we have plotted. First, each cdf assumes its values in the unit interval, [0, 1]. This is a general property of cdfs: each F(y) = P(X ≤ y), and probabilities necessarily assume values in [0, 1].

Second, each cdf is nondecreasing; i.e., if y_2 > y_1, then F(y_2) ≥ F(y_1). This is also a general property of cdfs, for suppose that we observe an outcome s such that X(s) ≤ y_1. Because y_1 < y_2, it follows that X(s) ≤ y_2. Thus, {X ≤ y_1} ⊂ {X ≤ y_2} and therefore

F(y_1) = P(X ≤ y_1) ≤ P(X ≤ y_2) = F(y_2).

Finally, each cdf equals 1 for sufficiently large y and 0 for sufficiently small y. This is not a general property of cdfs—it occurs in our examples because X(S) is a bounded set, i.e., there exist finite real numbers a and b such that every x ∈ X(S) satisfies a ≤ x ≤ b. However, all cdfs do satisfy the following properties:

\lim_{y → ∞} F(y) = 1 and \lim_{y → −∞} F(y) = 0.
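The step-function behavior just described is easy to tabulate. Here is a small illustrative Python fragment (ours, not the text's) that evaluates F for the two-toss experiment at a few values of y; it reproduces the values listed above and the shape plotted in Figure 3.12.

probs = {0: 0.25, 1: 0.50, 2: 0.25}      # distribution of X = number of Heads in two tosses

def F(y):
    """F(y) = P(X ≤ y)."""
    return sum(p for x, p in probs.items() if x <= y)

for y in (-0.5, 0, 0.3, 1, 1.5, 2, 3.8):
    print(y, F(y))                        # 0, 0.25, 0.25, 0.75, 0.75, 1.0, 1.0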
Figure 3.12: The cumulative distribution function for tossing two pennies with P(Heads) = 0.5 and counting the number of Heads.

Independence We say that two random variables, X_1 and X_2, are independent if each event defined by X_1 is independent of each event defined by X_2. More precisely,

Definition 3.6 Let X_1 : S → ℜ and X_2 : S → ℜ be random variables. X_1 and X_2 are independent if and only if, for each y_1 ∈ ℜ and each y_2 ∈ ℜ,

P(X_1 ≤ y_1, X_2 ≤ y_2) = P(X_1 ≤ y_1) · P(X_2 ≤ y_2).

This definition can be extended to mutually independent collections of random variables in precisely the same way that we extended Definition 3.2 to Definition 3.3.

Intuitively, two random variables are independent if the distribution of either does not depend on the value of the other. As we discussed in Section 3.4, in most applications we will appeal to common sense, our knowledge of science, etc., to decide if independence is a property that we wish to incorporate into our mathematical model of the experiment in question. If it is, then we will assume that the appropriate random variables are independent. This
assumption will allow us to apply many powerful theorems from probability and statistics that are only true of independent random variables.

3.6 Case Study: Padrolling in Milton Murayama's All I asking for is my body

The American dice game Craps evolved from the English dice game Hazard:

"According to tradition, blacks living around New Orleans tried their hand at Hazard. . . In the course of time they modified the rules and playing procedures so greatly that they ended up inventing the game of Craps (in the U.S. idiom known as Crapshooting or Shooting Craps and here identified as Private Craps to distinguish it from Open Craps and the more formalized variants offered in gambling casinos). . . . The popularity of the private game of Craps with the U.S. military personnel during World Wars I and II helped to spread that game to many parts of the world."⁵

⁵ "Dice and dice games," The New Encyclopædia Britannica in 30 Volumes, Macropædia, Volume 5, 1974, pp. 702–706.

Craps is played with two fair dice, each marked in a specific way. According to Hoyle,

"Each face of [each] die is marked with one to six dots, opposite faces representing. . . numbers adding to seven; if the vertical face toward you is 5, and the horizontal face on top of the die is 6, [then] the 3 should be on the vertical face to your right."⁶

⁶ Richard L. Frey, According to Hoyle, Fawcett Publications, 1970, p. 266.

The shooter rolls the pair of dice, resulting in one of 6 × 6 = 36 possible outcomes. Of interest is the combined number of dots on the horizontal faces atop the two dice, a number that we denote by the random variable X. The possible values of X are displayed in Figure 3.13.

Let x denote the value of X produced by the first roll. The game ends immediately if x ∈ {2, 3, 7, 11, 12}. If x ∈ {7, 11}, then x is a natural and the shooter wins; if x ∈ {2, 3, 12}, then x is craps and the shooter loses; otherwise, x becomes the shooter's point. If the first roll is not decisive, then the shooter continues to roll until he either (a) again rolls x (makes
his point), in which case he wins, or (b) rolls 7 (craps out), in which case he loses.

      1   2   3   4   5   6
  1   2   3   4   5   6   7
  2   3   4   5   6   7   8
  3   4   5   6   7   8   9
  4   5   6   7   8   9  10
  5   6   7   8   9  10  11
  6   7   8   9  10  11  12

Figure 3.13: The possible outcomes of rolling two standard dice.

A game of craps is fair when each of the 36 outcomes in Figure 3.13 is equally likely. Fairness is usually ensured by tossing the dice from a cup, or, more crudely, by tossing them against a wall. In a fair game of craps, we have the following probabilities:

P(X = 7) = 6/36
P(X = 6) = P(X = 8) = 5/36
P(X = 5) = P(X = 9) = 4/36
P(X = 4) = P(X = 10) = 3/36
P(X = 3) = P(X = 11) = 2/36
P(X = 2) = P(X = 12) = 1/36

Let us begin by calculating the probability that the shooter wins a fair game of craps. There are several ways for the shooter to win. We will calculate the probability of each, then sum these probabilities.

• Roll a natural.

P(X ∈ {7, 11}) = (6 + 2)/36 = 2/3^2.

• Roll x = 6 or x = 8, then make point.

First,

P(X ∈ {6, 8}) = (5 + 5)/36 = 5/18.
Then, the shooter must roll x before rolling 7. Other outcomes are ignored. There are 5 ways to roll x versus 6 ways to roll 7, so the conditional probability of making point is 5/11. Hence, the probability of the shooter winning in this way is

(5/18) · (5/11) = 25/(2 · 3^2 · 11).

• Roll x = 5 or x = 9, then make point.

First,

P(X ∈ {5, 9}) = (4 + 4)/36 = 2/9.

Then, the shooter must roll x before rolling 7. Other outcomes are ignored. There are 4 ways to roll x versus 6 ways to roll 7, so the conditional probability of making point is 4/10. Hence, the probability of the shooter winning in this way is

(2/9) · (4/10) = 4/(3^2 · 5).

• Roll x = 4 or x = 10, then make point.

First,

P(X ∈ {4, 10}) = (3 + 3)/36 = 1/6.

Then, the shooter must roll x before rolling 7. Other outcomes are ignored. There are 3 ways to roll x versus 6 ways to roll 7, so the conditional probability of making point is 3/9. Hence, the probability of the shooter winning in this way is

(1/6) · (3/9) = 1/(2 · 3^2).

The probability that the shooter wins is

2/3^2 + 25/(2 · 3^2 · 11) + 4/(3^2 · 5) + 1/(2 · 3^2) = 244/495 ≈ 0.4929.

Thus, the shooter is slightly more likely to lose than to win a fair game of craps.

Milton Murayama's 1959 novel, All I asking for is my body, is a brilliant evocation of nisei (second-generation Japanese American) life on Hawaiian
sugar plantations in the 1930s.⁷ One of its central concerns is the concept of Japanese honor and its implications for the young protagonist/narrator, Kiyoshi, and his siblings. Years earlier, Kiyoshi's parents had sacrificed their future to pay Kiyoshi's grandfather's debts; now they owe the impossible sum of $6000 and they expect their children to do likewise. Toward the novel's end, Japan attacks Pearl Harbor and Kiyoshi subsequently volunteers for an all-nisei regiment that will fight in Europe. In the final chapter, he contrives to win $6000 by playing Craps.

⁷ I am indebted to M. Lynn Weiss for bringing this novel to my attention.

Kiyoshi had watched a former classmate, Hiroshi Sakai, play Craps at the Citizens' Quarters in Kahana.

"It was weird the way he kept winning. Whenever he rolled, the dice rolled in unison like the wheels of a cart, and even when one die rolled ahead of the other, neither flipped on its side. The Kahana players finally refused to fade [bet against] him, and he stopped coming."

We subsequently learn that Hiroshi's technique is called padrolling. In the Army,

"Everybody had money and every third guy was a crapshooter. The sight of all that money drove me mad. There was $25,000 at least floating around in the crap games. . . . Most of the games were played on blankets on barrack floors, the dice rolled by hand. There were a few guys who rolled the dice the way Hiroshi did at the Citizens' Quarters in Kahana. The dice didn't bounce but rolled out in unison like the wheels of a cart. There had to be an advantage to that."

Kiyoshi buys a pair of dice and examines them carefully. He realizes that, by rolling the dice "like the wheels of a cart," he can keep the sides of the dice that form the axis of the wheels from appearing. Then, by combining certain numbers to form the axis, he can improve his chance of winning. Kiyoshi teaches himself to padroll and develops the following system for choosing the axis:

1. For the initial roll, use the 1-6 axis for each die.

Padrolling this axis has the effect of eliminating the first and sixth rows and columns in Figure 3.13, resulting in the following set of possible
outcomes:

        2   3   4   5
    2   4   5   6   7
    3   5   6   7   8
    4   6   7   8   9
    5   7   8   9  10

Notice that this choice eliminates the possibility of crapping out! Furthermore, assuming that the 16 remaining outcomes are equally likely, it also improves the chance of rolling a natural from 4/18 to 4/16.

2. If x ∈ {6, 8}, then use the 1-6 axis on one die and the 2-5 axis on the other. Padrolling this axis results in the following set of possible outcomes:

        1   3   4   6
    2   3   5   6   8
    3   4   6   7   9
    4   5   7   8  10
    5   6   8   9  11

With this choice, there are 3 ways to roll x versus 2 ways to roll 7. Again assuming that the 16 remaining outcomes are equally likely, this choice improves the conditional probability of making point from 5/11 to 3/5.

3. If x ∈ {4, 5, 9, 10}, then use the 1-6 axis on one die and the 3-4 axis on the other. Padrolling this axis results in the following set of possible outcomes:

        1   2   5   6
    2   3   4   7   8
    3   4   5   8   9
    4   5   6   9  10
    5   6   7  10  11

With this choice, there are 2 ways to roll x versus 2 ways to roll 7. Again, assume that the 16 remaining outcomes are equally likely. If x ∈ {5, 9}, then this choice improves the conditional probability of making point from 4/10 to 2/4. If x ∈ {4, 10}, then this choice improves the conditional probability of making point from 3/9 to 2/4.
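The improvement that this system buys can be assembled numerically. The following R fragment is an illustrative sketch of my own (it does not appear in the text); it rebuilds the padrolled outcome grids described above, assumes—as the text does—that the 16 outcomes of each padrolled axis are equally likely, and compares the resulting winning probability with that of a fair game. The variable names are assumptions of the sketch.

    # Fair game: P(win) = 244/495, assembled from the pmf of the sum of two dice.
    fair.ways <- c(1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1)   # ways to roll totals 2..12
    names(fair.ways) <- 2:12
    p.fair <- fair.ways / 36
    pts <- as.character(c(4, 5, 6, 8, 9, 10))
    win.fair <- p.fair["7"] + p.fair["11"] +
      sum(p.fair[pts] * fair.ways[pts] / (fair.ways[pts] + fair.ways["7"]))
    unname(win.fair)                                  # 244/495, about 0.4929

    # Kiyoshi's system: 1-6/1-6 axis on the come-out roll, then the axes above.
    come.out  <- outer(2:5, 2:5, "+")                 # crapping out is impossible
    axis.68   <- outer(2:5, c(1, 3, 4, 6), "+")       # for points 6 and 8
    axis.rest <- outer(2:5, c(1, 2, 5, 6), "+")       # for points 4, 5, 9, 10
    make <- function(grid, x) mean(grid == x) / (mean(grid == x) + mean(grid == 7))
    win.padroll <- mean(come.out == 7) +
      sum(sapply(c(6, 8),        function(x) mean(come.out == x) * make(axis.68,  x))) +
      sum(sapply(c(4, 5, 9, 10), function(x) mean(come.out == x) * make(axis.rest, x)))
    win.padroll                                       # 53/80 = 0.6625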
If a shooter padrolls successfully, then the probability that he will win using Kiyoshi's system is

4/16 + 6/16 · 3/5 + 4/16 · 2/4 + 2/16 · 2/4 = 53/80 = 0.6625,

a substantial improvement on his chance of winning a fair game. "And," Kiyoshi rationalizes, "it wasn't really cheating. The others had the option of stopping any of your rolls, or they could play with a cup, or have the roller bang the dice against the wall, or use a canvas or the bare floor instead of a blanket." So, Kiyoshi padrolls. I leave to my readers the pleasure of discovering whether or not he succeeds in winning the $6000 his family needs.

3.7 Exercises

1. Consider three events that might occur when a new mine is dug in the Cleveland National Forest in San Diego County, California:

A = { quartz specimens are found }
B = { tourmaline specimens are found }
C = { aquamarine specimens are found }

Assume the following probabilities: P(A) = 0.80, P(B) = 0.36, P(C) = 0.28, P(A ∩ B) = 0.29, P(A ∩ C) = 0.24, P(B ∩ C) = 0.16, and P(A ∩ B ∩ C) = 0.13.

(a) Draw a suitable Venn diagram for this situation.
(b) Calculate the probability that both quartz and tourmaline will be found, but not aquamarine.
(c) Calculate the probability that quartz will be found, but not tourmaline or aquamarine.
(d) Calculate the probability that none of these types of specimens will be found.
(e) Calculate the probability of Aᶜ ∩ (B ∪ C).

2. Consider two urns, one containing four tickets labelled {1, 3, 4, 6}; the other containing ten tickets, labelled {1, 3, 3, 3, 3, 4, 4, 4, 4, 6}.

(a) What is the probability of drawing a 3 from the first urn?
  • 88. 86 CHAPTER 3. PROBABILITY (b) What is the probability of drawing a 3 from the second urn? (c) Which urn is a better model for throwing an astragalus? Why? 3. Suppose that five cards are dealt from a standard deck of playing cards. (a) What is the probability of drawing a straight flush? (b) What is the probability of drawing 4 of a kind? Hint: Use the results of Exercise 2.5.6. 4. Suppose that four fair dice are thrown simultaneously. (a) How many outcomes are possible? (b) What is the probability that each top face shows a different num- ber? (c) What is the probability that the top faces show four numbers that sum to five? (d) What is the probability that at least one of the top faces shows an odd number? (e) What is the probability that three of the top faces show the same odd number and the other top face shows an even number? 5. A dreidl is a four-sided top that contains a Hebrew letter on each side: nun, gimmel, heh, shin. These letters are an acronym for the Hebrew phrase nes gadol hayah sham (a great miracle happened there), which refers to the miracle of the temple light that burned for eight days with only one day’s supply of oil—the miracle celebrated at Chanukah. Here we suppose that a fair dreidl (one that is equally likely to fall on each of its four sides) is to be spun ten times. Compute the probability of each of the following events: (a) Five gimmels and five hehs; (b) No nuns or shins; (c) Two letters are absent and two letters are present; (d) At least two letters are absent. 6. Suppose that P(A) = 0.7, P(B) = 0.6, and P(Ac ∩ B) = 0.2. (a) Draw a Venn diagram that describes this experiment.
  • 89. 3.7. EXERCISES 87 (b) Is it possible for A and B to be disjoint events? Why or why not? (c) What is the probability of A ∪ Bc? (d) Is it possible for A and B to be independent events? Why or why not? (e) What is the conditional probability of A given B? 7. Suppose that 20 percent of the adult population is hypertensive. Sup- pose that an automated blood-pressure machine diagnoses 84 percent of hypertensive adults as hypertensive and 23 percent of nonhyperten- sive adults as hypertensive. A person is selected at random from the adult population. (a) Construct a tree diagram that describes this experiment. (b) What is the probability that the automated blood-pressure ma- chine will diagnose the selected person as hypertensive? (c) Suppose that the automated blood-pressure machine does diag- nose the selected person as hypertensive. What then is the prob- ability that this person actually is hypertensive? (d) The following passage appeared in a recent article (Bruce Bower, Roots of reason, Science News, 145:72–75, January 29, 1994) about how human beings think. Please comment on it in whatever way seems appropriate to you. And in a study slated to appear in COGNITION, Cosmides and Tooby confront a cognitive bias known as the “base-rate fallacy.” As an illustration, they cite a 1978 study in which 60 staff and students at Harvard Medical School attempted to solve this problem: “If a test to detect a disease whose prevalence is 1/1,000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?” Nearly half the sample estimated this probability as 95 percent; only 11 gave the correct response of 2 percent. Most participants neglected the base rate of the disease (it strikes 1 in 1,000 people) and formed a judgment solely from the characteristics of the test.
  • 90. 88 CHAPTER 3. PROBABILITY 8. Mike owns a box that contains 6 pairs of 14-carat gold, cubic zirconia earrings. The earrings are of three sizes: 3mm, 4mm, and 5mm. There are 2 pairs of each size. Each time that Mike needs an inexpensive gift for a female friend, he randomly selects a pair of earrings from the box. If the selected pair is 4mm, then he buys an identical pair to replace it. If the selected pair is 3mm, then he does not replace it. If the selected pair is 5mm, then he tosses a fair coin. If he observes Heads, then he buys two identical pairs of earrings to replace the selected pair; if he observes Tails, then he does not replace the selected pair. (a) What is the probability that the second pair selected will be 4mm? (b) If the second pair was not 4mm, then what is the probability that the first pair was 5mm? 9. The following puzzle was presented on National Public Radio’s Car Talk: RAY: Three different numbers are chosen at random, and one is written on each of three slips of paper. The slips are then placed face down on the table. The objective is to choose the slip upon which is written the largest number. Here are the rules: You can turn over any slip of paper and look at the amount written on it. If for any reason you think this is the largest, you’re done; you keep it. Otherwise you discard it and turn over a second slip. Again, if you think this is the one with the biggest number, you keep that one and the game is over. If you don’t, you discard that one too. TOM: And you’re stuck with the third. I get it. RAY: The chance of getting the highest number is one in three. Or is it? Is there a strategy by which you can improve the odds? Solve the puzzle, i.e., determine an optimal strategy for finding the highest number. What is the probability that your strategy will find the highest number? Explain your answer. 10. It is a curious fact that approximately 85% of all U.S. residents who are struck by lightning are men. Consider the population of U.S. residents,
  • 91. 3.7. EXERCISES 89 from which a person is randomly selected. Let A denote the event that the person is male and let B denote the event that the person will be struck by lightning. (a) Estimate P(A|B) and P(Ac|B). (b) Compare P(A|B) and P(A). Are A and B independent events? (c) Suggest reasons why P(A|B) is so much larger than P(Ac|B). It is tempting to joke that men don’t know enough to come in out of the rain! Why might there be some truth to this possibility, i.e., why might men be more reluctant to take precautions than women? Can you suggest other explanations? 11. For each of the following pairs of events, explain why A and B are dependent or independent. (a) Consider the population of U.S. citizens, from which a person is randomly selected. Let A denote the event that the person is a member of a chess club and let B denote the event that the person is a woman. (b) Consider the population of male U.S. citizens who are 30 years of age. A man is selected at random from this population. Let A denote the event that he will be bald before reaching 40 years of age and let B denote the event that his father went bald before reaching 40 years of age. (c) Consider the population of students who attend high school in the U.S. A student is selected at random from this population. Let A denote the event that the student speaks Spanish and let B denote the event that the student lives in Texas. (d) Consider the population of months in the 20th century. A month is selected at random from this population. Let A denote the event that a hurricane crossed the North Carolina coastline during this month and let B denote the event that it snowed in Denver, Colorado, during this month. (e) Consider the population of Hollywood feature films produced dur- ing the 20th century. A movie is selected at random from this population. Let A denote the event that the movie was filmed in color and let B denote the event that the movie is a western.
  • 92. 90 CHAPTER 3. PROBABILITY (f) Consider the population of U.S. college freshmen, from which a student is randomly selected. Let A denote the event that the student attends the College of William & Mary, and let B denote the event that the student graduated from high school in Virginia. (g) Consider the population of all persons (living or dead) who have earned a Ph.D. from an American university, from which one is randomly selected. Let A denote the event that the person’s Ph.D. was earned before 1950 and let B denote the event that the person is female. (h) Consider the population of persons who resided in New Orleans before Hurricane Katrina. A person is selected at random from this population. Let A denote the event that the person left New Orleans before Katrina arrived, and let B denote the event that the person belonged to a household whose 2004 income was below the federal poverty line. (i) Consider the population of all couples who married in the United States in 1995. A couple is selected at random from this popu- lation. Let A denote the event that the couple cohabited (lived together) before marrying, and let B denote the event that the couple had divorced by 2005. 12. Two graduate students are renting a house. Before leaving town for winter break, each writes a check for her share of the rent. Emilie writes her check on December 16. By chance, it happens that the number of her check ends with the digits 16. Anne writes her check on December 18. By chance, it happens that the number of her check ends with the digits 18. What is the probability of such a coincidence, i.e., that both students would use checks with numbers that end in the same two digits as the date? 13. Suppose that X is a random variable with cdf F(y) =              0 y ≤ 0 y/3 y ∈ [0, 1) 2/3 y ∈ [1, 2] y/3 y ∈ [2, 3] 1 y ≥ 3              . Graph F and compute the following probabilities:
  • 93. 3.7. EXERCISES 91 (a) P(X > 0.5) (b) P(2 < X ≤ 3) (c) P(0.5 < X ≤ 2.5) (d) P(X = 1) 14. In Section 3.6, we calculated the probability that the shooter will win a fair game of craps. In so doing, we glossed a subtle point. Suppose that the shooter’s first roll results in x = 8. Now the shooter must roll until he rolls another 8, in which cases he makes his point and wins, or until he rolls a 7, in which case he craps out and loses. We argued that “there are 5 ways to roll 8 versus 6 ways to roll 7, so the conditional probability of making point is 5/11.” This argument appears to ignore the possibility that the shooter might roll indefi- nitely, never rolling 8 or 7. The following calculations eliminate that possibility. For i = 1, 2, 3, . . ., let Xi denote the result of roll i in a fair game of craps. Assume that we have observed X1 = x = 8. (a) Calculate the probability that X2 ∈ {7, 8}. (b) Calculate the probability that X2 ∈ {7, 8} and that X3 ∈ {7, 8}. (c) Calculate the probability that X2 ∈ {7, 8} and that X3 ∈ {7, 8} and that X4 ∈ {7, 8}. (d) What is the probability that the shooter will never roll another 7 or 8? 15. In the final chapter of All I asking for is my body, Kiyoshi places an initial, double-or-nothing bet of $200. If he wins, he will have $400. If he then wins a second double-or-nothing bet of $400, he will have $800. And so on. If he wins five consecutive times, he will have $6400, enough to pay his family’s debt. (a) Calculate the probability that the shooter will win five consecutive games of Craps if each of the games is fair. (b) Calculate the probability that the shooter will win five consecutive games of Craps if the shooter is allowed to use Kiyoshi’s padrolling system. (c) Kiyoshi recalls that “Hiroshi never lost.” Does this seem plausi- ble?
  • 95. Chapter 4 Discrete Random Variables 4.1 Basic Concepts Our introduction of random variables in Section 3.5 was completely general, i.e., the principles that we discussed apply to all random variables. In this chapter, we will study an important special class of random variables, the discrete random variables. One of the advantages of restricting attention to discrete random variables is that the mathematics required to define various fundamental concepts for this class is fairly minimal. We begin with a formal definition. Definition 4.1 A random variable X is discrete if X(S), the set of possible values of X, is countable. Our primary interest will be in random variables for which X(S) is finite; however, there are many important random variables for which X(S) is denu- merable. The methods described in this chapter apply to both possibilities. In contrast to the cumulative distribution function (cdf) defined in Sec- tion 3.5, we now introduce the probability mass function (pmf). Definition 4.2 Let X be a discrete random variable. The probability mass function (pmf) of X is the function f : ℜ → ℜ defined by f(x) = P(X = x). If f is the pmf of X, then f necessarily possesses several properties worth noting: 1. f(x) ≥ 0 for every x ∈ ℜ. 93
2. If x ∉ X(S), then f(x) = 0.

3. By the definition of X(S),

\sum_{x ∈ X(S)} f(x) = \sum_{x ∈ X(S)} P(X = x) = P( \bigcup_{x ∈ X(S)} {x} ) = P(X ∈ X(S)) = 1.

There is an important relation between the pmf and the cdf. For each y ∈ ℜ, let L(y) = {x ∈ X(S) : x ≤ y} denote the values of X that are less than or equal to y. Then

F(y) = P(X ≤ y) = P(X ∈ L(y)) = \sum_{x ∈ L(y)} P(X = x) = \sum_{x ∈ L(y)} f(x).    (4.1)

Thus, the value of the cdf at y can be obtained by summing the values of the pmf at all values x ≤ y. More generally, we can compute the probability that X assumes its value in any set B ⊂ ℜ by summing the values of the pmf over all values of X that lie in B. Here is the formula:

P(X ∈ B) = \sum_{x ∈ X(S) ∩ B} P(X = x) = \sum_{x ∈ X(S) ∩ B} f(x).    (4.2)

We now turn to some elementary examples of discrete random variables and their pmfs.

4.2 Examples

Example 4.1 A fair coin is tossed and the outcome is Heads or Tails. Define a random variable X by X(Heads) = 1 and X(Tails) = 0. The pmf of X is the function f defined by f(0) = P(X = 0) = 0.5, f(1) = P(X = 1) = 0.5, and f(x) = 0 for all x ∉ X(S) = {0, 1}.
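As a small illustration of how (4.1) recovers a cdf from a pmf, the following R fragment (a sketch of my own, not from the text) does the summation for the coin of Example 4.1:

    # Sketch: recover the cdf of Example 4.1 from its pmf by summing, as in (4.1).
    x <- c(0, 1)                      # X(S), the possible values of X
    f <- c(0.5, 0.5)                  # pmf values f(0) and f(1)
    F <- function(y) sum(f[x <= y])   # cdf via equation (4.1)
    sapply(c(-1, 0, 0.5, 1, 2), F)    # 0.0 0.5 0.5 1.0 1.0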
Example 4.2 A typical penny is spun and the outcome is Heads or Tails. Define a random variable X by X(Heads) = 1 and X(Tails) = 0. Assuming that P(Heads) = 0.3 (see Section 1.1.1), the pmf of X is the function f defined by f(0) = P(X = 0) = 0.7, f(1) = P(X = 1) = 0.3, and f(x) = 0 for all x ∉ X(S) = {0, 1}.

Example 4.3 A fair die is tossed and the number of dots on the upper face is observed. The sample space is S = {1, 2, 3, 4, 5, 6}. Define a random variable X by X(s) = 1 if s is 1 or a prime number and X(s) = 0 if s is a composite number. The pmf of X is the function f defined by f(0) = P(X = 0) = P({4, 6}) = 1/3, f(1) = P(X = 1) = P({1, 2, 3, 5}) = 2/3, and f(x) = 0 for all x ∉ X(S) = {0, 1}.

Examples 4.1–4.3 have a common structure that we proceed to generalize.

Definition 4.3 A random variable X is a Bernoulli trial if X(S) = {0, 1}.

Traditionally, we call X = 1 a "success" and X = 0 a "failure". The family of probability distributions of Bernoulli trials is parametrized (indexed) by a real number p ∈ [0, 1], usually by setting p = P(X = 1). We communicate that X is a Bernoulli trial with success probability p by writing X ∼ Bernoulli(p). The pmf of such a random variable is the function f defined by f(0) = P(X = 0) = 1 − p, f(1) = P(X = 1) = p, and f(x) = 0 for all x ∉ X(S) = {0, 1}.

Several important families of random variables can be derived from Bernoulli trials. Consider, for example, the familiar experiment of tossing a fair coin twice and counting the number of Heads. In Section 4.4, we will generalize this experiment and count the number of successes in n Bernoulli trials. This will lead to the family of binomial probability distributions.
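For readers following along in R (introduced formally at the end of this section), a Bernoulli(p) trial can be simulated as a binomial experiment with a single trial. This is a sketch of my own, not from the text; the seed is arbitrary and only makes the draws reproducible.

    # Sketch: ten simulated Bernoulli(0.3) trials, as in the spun penny of Example 4.2.
    set.seed(1)
    rbinom(10, size = 1, prob = 0.3)    # each entry is 1 (success) or 0 (failure)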
Bernoulli trials are also a fundamental ingredient of the St. Petersburg Paradox, described in Example 4.13. In this experiment, a fair coin is tossed until Heads is observed and the number of Tails is counted. More generally, consider an experiment in which a sequence of independent Bernoulli trials, each with success probability p, is performed until the first success is observed. Let X1, X2, X3, . . . denote the individual Bernoulli trials and let Y denote the number of failures that precede the first success. Then the possible values of Y are Y(S) = {0, 1, 2, . . .} and the pmf of Y is

f(j) = P(Y = j) = P(X1 = 0, . . . , Xj = 0, Xj+1 = 1)
     = P(X1 = 0) · · · P(Xj = 0) · P(Xj+1 = 1)
     = (1 − p)^j · p

if j ∈ Y(S) and f(j) = 0 if j ∉ Y(S). This family of probability distributions is also parametrized by a real number p ∈ [0, 1]. It is called the geometric family and a random variable with a geometric distribution is said to be a geometric random variable, written Y ∼ Geometric(p).

If Y ∼ Geometric(p) and k ∈ Y(S), then

F(k) = P(Y ≤ k) = 1 − P(Y > k) = 1 − P(Y ≥ k + 1).

Because the event {Y ≥ k + 1} occurs if and only if X1 = · · · = Xk+1 = 0, we conclude that

F(k) = 1 − (1 − p)^(k+1).

Example 4.4 Gary is a college student who is determined to have a date for an approaching formal. He believes that each woman he asks is twice as likely to decline his invitation as to accept it, but he resolves to extend invitations until one is accepted. However, each of his first ten invitations is declined. Assuming that Gary's assumptions about his own desirability are correct, what is the probability that he would encounter such a run of bad luck?

Gary evidently believes that he can model his invitations as a sequence of independent Bernoulli trials, each with success probability p = 1/3. If so, then the number of unsuccessful invitations that he extends is a random variable Y ∼ Geometric(1/3) and

P(Y ≥ 10) = 1 − P(Y ≤ 9) = 1 − F(9) = 1 − [1 − (2/3)^10] ≈ 0.0173.
Either Gary is very unlucky or his assumptions are flawed. Perhaps his probability model is correct, but p < 1/3. Perhaps, as seems likely, the probability of success depends on who he asks. Or perhaps the trials were not really independent.1 If Gary's invitations cannot be modelled as independent and identically distributed Bernoulli trials, then the geometric distribution cannot be used.

Another important family of random variables is often derived by considering an urn model. Imagine an urn that contains m red balls and n black balls. The experiment of present interest involves selecting k balls from the urn in such a way that each of the \binom{m+n}{k} possible outcomes that might be obtained is equally likely. Let X denote the number of red balls selected in this manner. If we observe X = x, then x red balls were selected from a total of m red balls and k − x black balls were selected from a total of n black balls. Evidently, x ∈ X(S) if and only if x is an integer that satisfies x ≤ min(m, k) and k − x ≤ min(n, k). Furthermore, if x ∈ X(S), then the pmf of X is

f(x) = P(X = x) = #{X = x} / #S = \binom{m}{x} \binom{n}{k−x} / \binom{m+n}{k}.    (4.3)

This family of probability distributions is parametrized by a triple of integers, (m, n, k), for which m, n ≥ 0, m + n ≥ 1, and 0 ≤ k ≤ m + n. It is called the hypergeometric family and a random variable with a hypergeometric distribution is said to be a hypergeometric random variable, written Y ∼ Hypergeometric(m, n, k). The trick to using the hypergeometric distribution in applications is to recognize a correspondence between the actual experiment and an idealized urn model, as in. . .

Example 4.5 Consider the hypothetical example described in Section 1.2, in which 30 freshmen and 10 non-freshmen are randomly assigned exam A or B. What is the probability that exactly 15 freshmen (and therefore exactly 5 non-freshmen) receive exam A? In Example 2.5 we calculated that the probability in question is

\binom{30}{15} \binom{10}{5} / \binom{40}{20} = 39,089,615,040 / 137,846,528,820 ≈ 0.28.    (4.4)

1 In the actual incident on which this example is based, the women all lived in the same residential college. It seems doubtful that each woman was completely unaware of the invitation that preceded hers.
Let us re-examine this calculation. Suppose that we write each student's name on a slip of paper, mix the slips in a jar, then draw 20 slips without replacement. These 20 students receive exam A; the remaining 20 students receive exam B. Now drawing slips of paper from a jar is exactly like drawing balls from an urn. There are m = 30 slips with freshman names (red balls) and n = 10 slips with non-freshman names (black balls), of which we are drawing k = 20 without replacement. Using the hypergeometric pmf defined by (4.3), the probability of drawing exactly x = 15 freshman names is

\binom{m}{x} \binom{n}{k−x} / \binom{m+n}{k} = \binom{30}{15} \binom{10}{5} / \binom{40}{20},

the left-hand side of (4.4).

Example 4.6 (Adapted from an example analyzed by R.R. Sokal and F.J. Rohlf (1969), Biometry: The Principles and Practice of Statistics in Biological Research, W.H. Freeman and Company, San Francisco.) All but 28 acacia trees (of the same species) were cleared from a study area in Central America. The 28 remaining trees were freed from ants by one of two types of insecticide. The standard insecticide (A) was administered to 15 trees; an experimental insecticide (B) was administered to the other 13 trees. The assignment of insecticides to trees was completely random. At issue was whether or not the experimental insecticide was more effective than the standard insecticide in inhibiting future ant infestations.

Next, 16 separate ant colonies were situated roughly equidistant from the acacia trees and permitted to invade them. Unless food is scarce, different colonies will not compete for the same resources; hence, it could be presumed that each colony would invade a different tree. In fact, the ants invaded 13 of the 15 trees treated with the standard insecticide and only 3 of the 13 trees treated with the experimental insecticide. If the two insecticides were equally effective in inhibiting future infestations, then what is the probability that no more than 3 ant colonies would have invaded trees treated with the experimental insecticide?

This is a potentially confusing problem that is simplified by constructing an urn model for the experiment. There are m = 13 trees with the experimental insecticide (red balls) and n = 15 trees with the standard insecticide (black balls). The ants choose k = 16 trees (balls). Let X denote the number of experimental trees (red balls) invaded by the ants; then
X ∼ Hypergeometric(13, 15, 16) and its pmf is

f(x) = P(X = x) = \binom{13}{x} \binom{15}{16−x} / \binom{28}{16}.

Notice that there are not enough standard trees for each ant colony to invade one; hence, at least one ant colony must invade an experimental tree and X = 0 is impossible. Thus,

P(X ≤ 3) = f(1) + f(2) + f(3)
         = \binom{13}{1} \binom{15}{15} / \binom{28}{16} + \binom{13}{2} \binom{15}{14} / \binom{28}{16} + \binom{13}{3} \binom{15}{13} / \binom{28}{16}
         ≈ 0.0010.

This reasoning illustrates the use of a statistical procedure called Fisher's exact test. The probability that we have calculated is an example of what we will later call a significance probability. In the present example, the fact that the significance probability is so small would lead us to challenge an assertion that the experimental insecticide is no better than the standard insecticide.

It is evident that calculations with the hypergeometric distribution can become rather tedious. Accordingly, this is a convenient moment to introduce computer software for the purpose of evaluating certain pmfs and cdfs. The statistical programming language R includes functions that evaluate pmfs and cdfs for a variety of distributions, including the geometric and hypergeometric.2 For the geometric, these functions are dgeom and pgeom; for the hypergeometric, these functions are dhyper and phyper. We can calculate the probability in Example 4.4 as follows:

> 1-pgeom(q=9,prob=1/3)
[1] 0.01734153

Similarly, we can calculate the probability in Example 4.6 as follows:

> phyper(q=3,m=13,n=15,k=16)
[1] 0.001026009

2 R is a free, open-source implementation of S, developed at AT&T Bell Laboratories. See Appendix R for information about obtaining, installing, and using R.
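The built-in functions can also be checked against the expressions derived earlier. The following fragment (a sketch of my own, not from the text) recomputes both probabilities from first principles:

    # Sketch: Example 4.4 via the closed-form geometric cdf F(k) = 1 - (1-p)^(k+1),
    # and Example 4.6 via binomial coefficients, as checks on pgeom and phyper.
    p <- 1/3
    (1 - p)^10                                                  # P(Y >= 10), about 0.01734
    x <- 1:3
    sum(choose(13, x) * choose(15, 16 - x) / choose(28, 16))    # about 0.001026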
4.3 Expectation

Sometime in the early 1650s, the eminent theologian and amateur mathematician Blaise Pascal found himself in the company of the Chevalier de Méré.3 De Méré posed to Pascal a famous problem: how to divide the pot of an interrupted dice game. Pascal communicated the problem to Pierre de Fermat in 1654, beginning a celebrated correspondence that established a foundation for the mathematics of probability.

Pascal and Fermat began by agreeing that the pot should be divided according to each player's chances of winning it. For example, suppose that each of two players has selected a number from the set S = {1, 2, 3, 4, 5, 6}. For each roll of a fair die that produces one of their respective numbers, the corresponding player receives a token. The first player to accumulate five tokens wins a pot of $100. Suppose that the game is interrupted with Player A having accumulated four tokens and Player B having accumulated only one. The probability that Player B would have won the pot had the game been completed is the probability that B's number would have appeared four more times before A's number appeared one more time. Because we can ignore rolls that produce neither number, this is equivalent to the probability that a fair coin will have a run of four consecutive Heads, i.e., 0.5 · 0.5 · 0.5 · 0.5 = 0.0625. Hence, according to Pascal and Fermat, Player B is entitled to 0.0625 · $100 = $6.25 from the pot and Player A is entitled to the remaining $93.75.

The crucial concept in Pascal's and Fermat's analysis is the notion that each prospect should be weighted by the chance of realizing that prospect. This notion motivates

Definition 4.4 The expected value of a discrete random variable X, which we will denote E(X) or simply EX, is the probability-weighted average of the possible values of X, i.e.,

EX = \sum_{x ∈ X(S)} x P(X = x) = \sum_{x ∈ X(S)} x f(x).

Remark The expected value of X, EX, is often called the population mean and denoted µ.

3 This account of the origins of modern probability can be found in Chapter 6 of David Bergamini's Mathematics, Life Science Library, Time Inc., New York, 1963.
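Once a pmf is in hand, the probability-weighted average of Definition 4.4 is a one-line computation. The fragment below is a sketch of my own (not from the text), applied to the spun penny of Example 4.2:

    # Sketch: EX as the probability-weighted average of X(S), per Definition 4.4.
    x <- c(0, 1)
    f <- c(0.7, 0.3)    # pmf of the spun penny, P(Heads) = 0.3
    sum(x * f)          # EX = 0.3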
  • 103. 4.3. EXPECTATION 101 Example 4.7 If X ∼ Bernoulli(p), then µ = EX = X x∈{0,1} xP(X = x) = 0·P(X = 0)+1·P(X = 1) = P(X = 1) = p. Notice that, in general, the expected value of X is not the average of its possible values. In this example, the possible values are X(S) = {0, 1} and the average of these values is (always) 0.5. In contrast, the expected value depends on the probabilities of the values. Fair Value The expected payoff of a game of chance is sometimes called the fair value of the game. For example, suppose that you own a slot ma- chine that pays a jackpot of $1000 with probability p = 0.0005 and $0 with probability 1−p = 0.9995. How much should you charge a customer to play this machine? Letting X denote the payoff (in dollars), the expected payoff per play is EX = 1000 · 0.0005 + 0 · 0.9995 = 0.5; hence, if you want to make a profit, then you should charge more than $0.50 per play. Suppose, however, that a rival owner of an identical slot machine attempted to compete for the same customers. According to the theory of microeconomics, competition would cause each of you to try to undercut the other, eventually resulting in an equilibrium price of exactly $0.50 per play, the fair value of the game. We proceed to illustrate both the mathematics and the psychology of fair value by considering several lotteries. A lottery is a choice between receiving a certain payoff and playing a game of chance. In each of the following examples, we emphasize that the value accorded the game of chance by a rational person may be very different from the game’s expected value. In this sense, the phrase “fair value” is often a misnomer. Example 4.8a You are offered the choice between receiving a certain $5 and playing the following game: a fair coin is tossed and you receive $10 or $0 according to whether Heads or Tails is observed. The expected payoff from the game (in dollars) is EX = 10 · 0.5 + 0 · 0.5 = 5, so your options are equivalent with respect to expected earnings. One might therefore suppose that a rational person would be indifferent to which option
  • 104. 102 CHAPTER 4. DISCRETE RANDOM VARIABLES he or she selects. Indeed, in my experience, some students prefer to take the certain $5 and some students prefer to gamble on perhaps winning $10. For this example, the phrase “fair value” seems apt. Example 4.8b You are offered the choice between receiving a certain $5000 and playing the following game: a fair coin is tossed and you receive $10,000 or $0 according to whether Heads or Tails is observed. The mathematical structure of this lottery is identical to that of the preceding lottery, except that the stakes are higher. Again, the options are equivalent with respect to expected earnings; again, one might suppose that a rational person would be indifferent to which option he or she selects. However, many students who opt to gamble on perhaps winning $10 in Example 4.8a opt to take the certain $5000 in Example 4.8b. Example 4.8c You are offered the choice between receiving a certain $1 million and playing the following game: a fair coin is tossed and you receive $2 million or $0 according to whether Heads or Tails is observed. The mathematical structure of this lottery is identical to that of the preceding two lotteries, except that the stakes are now much higher. Again, the options are equivalent with respect to expected earnings; however, almost every student to whom I have presented this lottery has expressed a strong preference for taking the certain $1 million. Example 4.9 You are offered the choice between receiving a certain $1 million and playing the following game: a fair coin is tossed and you receive $5 million or $0 according to whether Heads or Tails is observed. The expected payoff from this game (in millions of dollars) is EX = 5 · 0.5 + 0 · 0.5 = 2.5, so playing the game is the more attractive option with respect to expected earnings. Nevertheless, most students opt to take the certain $1 million. This should not be construed as an irrational decision. For example, the addition of $1 million to my own modest estate would secure my eventual retirement. The addition of an extra $4 million would be very pleasant indeed, allowing me to increase my current standard of living. However, I do not value the additional $4 million nearly as much as I value the initial $1 million. As Aesop observed, “A little thing in hand is worth more than a great thing in prospect.” For this example, the phrase “fair value” introduces normative connotations that are not appropriate.
  • 105. 4.3. EXPECTATION 103 Example 4.10 Consider the following passage from a recent article about investing: “. . . it’s human nature to overweight low probabilities that offer high returns. In one study, subjects were given a choice between a 1-in-1000 chance to win $5000 or a sure thing to win $5; or a 1- in-1000 chance of losing $5000 versus a sure loss of $5. In the first case, the expected value (mathematically speaking) is making $5. In the second case, it’s losing $5. Yet in the first situation, which mimics a lottery, more than 70% of people asked chose to go for the $5000. In the second situation, more than 80% would take the $5 hit.”4 The author evidently considered the reported preferences paradoxical, but are they really surprising? Plus or minus $5 will not appreciably alter the financial situations of most subjects, but plus or minus $5000 will. It is perfectly rational to risk a negligible amount on the chance of winning $5000 while declining to risk a negligible amount on the chance of losing $5000. The following examples further explicate this point. Example 4.11 The same article advises, “To limit completely irra- tional risks, such as lottery tickets, try speculating only with money you would otherwise use for simple pleasures, such as your morning coffee.” Consider a hypothetical state lottery, in which 6 numbers are drawn (without replacement) from the set {1, 2, . . . , 39, 40}. For $2, you can pur- chase a ticket that specifies 6 such numbers. If the numbers on your ticket match the numbers selected by the state, then you win $1 million; otherwise, you win nothing. (For the sake of simplicity, we ignore the possibility that you might have to split the jackpot with other winners and the possibility that you might win a lesser prize.) Is buying a lottery ticket “completely irrational”? The probability of winning the lottery in question is p = 1 ¡40 6 ¢ = 1 3, 838, 380 . = 2.6053 × 10−7 , so your expected prize (in dollars) is approximately 106 · 2.6053 × 10−7 . = 0.26, 4 Robert Frick, “The 7 Deadly Sins of Investing,” Kiplinger’s Personal Finance Maga- zine, March 1998, p. 138.
  • 106. 104 CHAPTER 4. DISCRETE RANDOM VARIABLES which is considerably less than the cost of a ticket. Evidently, it is completely irrational to buy tickets for this lottery as an investment strategy. Suppose, however, that I buy one ticket per week and reason as follows: I will almost certainly lose $2 per week, but that loss will have virtually no impact on my standard of living; however, if by some miracle I win, then gaining $1 million will revolutionize my standard of living. This can hardly be construed as irrational behavior, although Robert Frick’s advice to speculate only with funds earmarked for entertainment is well-taken. In most state lotteries, the fair value of the game is less than the cost of a lottery ticket. This is only natural—lotteries exist because they generate revenue for the state that runs them! (By the same reasoning, gambling must favor the house because casinos make money for their owners.) However, on very rare occasions a jackpot is so large that the typical situation is reversed. Several years ago, an Australian syndicate noticed that the fair value of a Florida state lottery exceeded the price of a ticket and purchased a large number of tickets as an (ultimately successful) investment strategy. And Voltaire once purchased every ticket in a raffle upon noting that the prize was worth more than the total cost of the tickets being sold! Example 4.12 If the first case described in Example 4.10 mimics a lottery, then the second case mimics insurance. Mindful that insurance companies (like casinos) make money, Ambrose Bierce offered the follow- ing definition: “INSURANCE, n. An ingenious modern game of chance in which the player is permitted to enjoy the comfortable conviction that he is beating the man who keeps the table.”5 However, while it is certainly true that the fair value of an insurance policy is less than the premiums required to purchase it, it does not follow that buying insurance is irrational. I can easily afford to pay $200 per year for homeowners insurance, but I would be ruined if all of my possessions were destroyed by fire and I received no compensation for them. My decision that a certain but affordable loss is preferable to an unlikely but catastrophic loss is an example of risk-averse behavior. Before presenting our concluding example of fair value, we derive a useful formula. Suppose that X : S → ℜ is a discrete random variable and φ : ℜ → 5 Ambrose Bierce, The Devil’s Dictionary, 1881–1906. In The Collected Writings of Ambrose Bierce, Citadel Press, Secaucus, NJ, 1946.
ℜ is a function. Let Y = φ(X). Then Y : ℜ → ℜ is a random variable and

Eφ(X) = EY = \sum_{y ∈ Y(S)} y P(Y = y)
           = \sum_{y ∈ Y(S)} y P(φ(X) = y)
           = \sum_{y ∈ Y(S)} y P(X ∈ φ⁻¹(y))
           = \sum_{y ∈ Y(S)} y \sum_{x ∈ φ⁻¹(y)} P(X = x)
           = \sum_{y ∈ Y(S)} \sum_{x ∈ φ⁻¹(y)} y P(X = x)
           = \sum_{y ∈ Y(S)} \sum_{x ∈ φ⁻¹(y)} φ(x) P(X = x)
           = \sum_{x ∈ X(S)} φ(x) P(X = x)
           = \sum_{x ∈ X(S)} φ(x) f(x).    (4.5)

Example 4.13 Consider a game in which the jackpot starts at $1 and doubles each time that Tails is observed when a fair coin is tossed. The game terminates when Heads is observed for the first time. How much would you pay for the privilege of playing this game? How much would you charge if you were responsible for making the payoff?

This is a curious game. With high probability, the payoff will be rather small; however, there is a small chance of a very large payoff. In response to the first question, most students discount the latter possibility and respond that they would only pay a small amount, rarely more than $4. In response to the second question, most students recognize the possibility of a large payoff and demand payment of a considerably greater amount. Let us consider if the notion of fair value provides guidance in reconciling these perspectives.

Let X denote the number of Tails that are observed before the game terminates. Then X(S) = {0, 1, 2, . . .} and the geometric random variable X has pmf

f(x) = P(x consecutive Tails, followed by a Head) = 0.5^x · 0.5 = 0.5^(x+1).

The payoff from this game (in dollars) is Y = 2^X; hence, the expected
payoff is

E2^X = \sum_{x=0}^{∞} 2^x · 0.5^(x+1) = \sum_{x=0}^{∞} 1/2 = ∞.

This is quite startling! The "fair value" of this game provides very little insight into the value that a rational person would place on playing it. This remarkable example is quite famous—it is known as the St. Petersburg Paradox.

Properties of Expectation We now state (and sometimes prove) some useful consequences of Definition 4.4 and Equation 4.5.

Theorem 4.1 Let X denote a discrete random variable and suppose that P(X = c) = 1. Then EX = c.

Theorem 4.1 states that, if a random variable always assumes the same value c, then the probability-weighted average of the values that it assumes is c. This should be obvious.

Theorem 4.2 Let X denote a discrete random variable and suppose that c ∈ ℜ is constant. Then

E[cφ(X)] = \sum_{x ∈ X(S)} c φ(x) f(x) = c \sum_{x ∈ X(S)} φ(x) f(x) = c E[φ(X)].

Theorem 4.2 states that we can interchange the order of multiplying by a constant and computing the expected value. Notice that this property of expectation follows directly from the analogous property for summation.

Theorem 4.3 Let X denote a discrete random variable. Then

E[φ1(X) + φ2(X)] = \sum_{x ∈ X(S)} [φ1(x) + φ2(x)] f(x)
                 = \sum_{x ∈ X(S)} [φ1(x) f(x) + φ2(x) f(x)]
                 = \sum_{x ∈ X(S)} φ1(x) f(x) + \sum_{x ∈ X(S)} φ2(x) f(x)
                 = E[φ1(X)] + E[φ2(X)].
Theorem 4.3 states that we can interchange the order of adding functions of a random variable and computing the expected value. Again, this property of expectation follows directly from the analogous property for summation.

Theorem 4.4 Let X1 and X2 denote discrete random variables. Then E[X1 + X2] = EX1 + EX2.

Theorem 4.4 states that the expected value of a sum equals the sum of the expected values.

Variance Now suppose that X is a discrete random variable, let µ = EX denote its expected value, or population mean, and define a function φ : ℜ → ℜ by φ(x) = (x − µ)². For any x ∈ ℜ, φ(x) is the squared deviation of x from the expected value of X. If X always assumes the value µ, then φ(X) always assumes the value 0; if X tends to assume values near µ, then φ(X) will tend to assume small values; if X often assumes values far from µ, then φ(X) will often assume large values. Thus, Eφ(X), the expected squared deviation of X from its expected value, is a measure of the variability of the population X(S). We summarize this observation in

Definition 4.5 The variance of a discrete random variable X, which we will denote Var(X) or simply Var X, is the probability-weighted average of the squared deviations of X from EX = µ, i.e.,

Var X = E(X − µ)² = \sum_{x ∈ X(S)} (x − µ)² f(x).

Remark The variance of X, Var X, is often called the population variance and denoted σ².

Denoting the population variance by σ² may strike the reader as awkward notation, but there is an excellent reason for it. Because the variance measures squared deviations from the population mean, it is measured in different units than either the random variable itself or its expected value. For example, if X measures length in meters, then so does EX, but Var X is measured in meters squared. To recover a measure of population variability in the original units of measurement, we take the square root of the variance and obtain σ.
Definition 4.6 The standard deviation of a random variable is the square root of its variance.

Remark The standard deviation of X, often denoted σ, is often called the population standard deviation.

Example 4.1 (continued) If X ∼ Bernoulli(p), then

σ² = Var X = E(X − µ)²
   = (0 − µ)² · P(X = 0) + (1 − µ)² · P(X = 1)
   = (0 − p)² · (1 − p) + (1 − p)² · p
   = p(1 − p)(p + 1 − p)
   = p(1 − p).

Before turning to a more complicated example, we establish a useful fact.

Theorem 4.5 If X is a discrete random variable, then

Var X = E(X − µ)² = E(X² − 2µX + µ²)
      = EX² + E(−2µX) + Eµ²
      = EX² − 2µEX + µ²
      = EX² − 2µ² + µ²
      = EX² − (EX)².

A straightforward way to calculate the variance of a discrete random variable that assumes a fairly small number of values is to exploit Theorem 4.5 and organize one's calculations in the form of a table.

Example 4.14 Suppose that X is a random variable whose possible values are X(S) = {2, 3, 5, 10}. Suppose that the probability of each of these values is given by the formula f(x) = P(X = x) = x/20.

(a) Calculate the expected value of X.
(b) Calculate the variance of X.
(c) Calculate the standard deviation of X.
Solution

     x    f(x)   x·f(x)     x²    x²·f(x)
     2    0.10    0.20       4     0.40
     3    0.15    0.45       9     1.35
     5    0.25    1.25      25     6.25
    10    0.50    5.00     100    50.00
                  6.90            58.00

(a) µ = EX = 0.2 + 0.45 + 1.25 + 5 = 6.9.

(b) σ² = Var X = EX² − (EX)² = (0.4 + 1.35 + 6.25 + 50) − 6.9² = 58 − 47.61 = 10.39.

(c) σ = √10.39 ≈ 3.2234.

Now suppose that X : S → ℜ is a discrete random variable and φ : ℜ → ℜ is a function. Let Y = φ(X). Then Y is a discrete random variable and

Var φ(X) = Var Y = E[Y − EY]² = E[φ(X) − Eφ(X)]².    (4.6)

We conclude this section by stating (and sometimes proving) some useful consequences of Definition 4.5 and Equation 4.6.

Theorem 4.6 Let X denote a discrete random variable and suppose that c ∈ ℜ is constant. Then Var(X + c) = Var X.

Although possibly startling at first glance, this result is actually quite intuitive. The variance depends on the squared deviations of the values of X from the expected value of X. If we add a constant to each value of X, then we shift both the individual values of X and the expected value of X by the same amount, preserving the squared deviations. The variability of a population is not affected by shifting each of the values in the population by the same amount.
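A quick R check of my own (not from the text) reproduces the table for Example 4.14 and, while we are at it, illustrates Theorem 4.6 numerically:

    # Sketch: Example 4.14 via Theorem 4.5, plus a numeric illustration of Theorem 4.6.
    x <- c(2, 3, 5, 10)
    f <- x / 20                          # pmf f(x) = x/20
    mu <- sum(x * f)                     # 6.9
    sigma2 <- sum(x^2 * f) - mu^2        # 58 - 47.61 = 10.39
    sqrt(sigma2)                         # about 3.2234
    # Theorem 4.6: shifting every value by a constant leaves the variance unchanged.
    xc <- x + 100
    sum(xc^2 * f) - sum(xc * f)^2        # still 10.39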
Theorem 4.7 Let X denote a discrete random variable and suppose that c ∈ ℜ is constant. Then

Var(cX) = E[cX − E(cX)]²
        = E[cX − cEX]²
        = E[c(X − EX)]²
        = E[c²(X − EX)²]
        = c² E(X − EX)²
        = c² Var X.

To understand this result, recall that the variance is measured in the original units of measurement squared. If we take the square root of each expression in Theorem 4.7, then we see that one can interchange multiplying a random variable by a nonnegative constant with computing its standard deviation.

Theorem 4.8 If the discrete random variables X1 and X2 are independent, then Var(X1 + X2) = Var X1 + Var X2.

Theorem 4.8 is analogous to Theorem 4.4. However, in order to ensure that the variance of a sum equals the sum of the variances, the random variables must be independent.

4.4 Binomial Distributions

Suppose that a fair coin is tossed twice and the number of Heads is counted. Let Y denote the total number of Heads. Because the sample space has four equally likely outcomes, viz., S = {HH, HT, TH, TT}, the pmf of Y is easily determined:

f(0) = P(Y = 0) = P({TT}) = 0.25,
f(1) = P(Y = 1) = P({HT, TH}) = 0.5,
f(2) = P(Y = 2) = P({HH}) = 0.25,

and f(y) = 0 if y ∉ Y(S) = {0, 1, 2}.
Referring to representation (c) of Example 3.18, the above experiment has the following characteristics:

• Let X1 denote the number of Heads observed on the first toss and let X2 denote the number of Heads observed on the second toss. Then the random variable of interest is Y = X1 + X2.

• The random variables X1 and X2 are independent.

• The random variables X1 and X2 have the same distribution, viz. X1, X2 ∼ Bernoulli(0.5).

We proceed to generalize this example in two ways:

1. We allow any finite number of trials.

2. We allow any success probability p ∈ [0, 1].

Definition 4.7 Let X1, . . . , Xn be mutually independent Bernoulli trials, each with success probability p. Then

Y = \sum_{i=1}^{n} Xi

is a binomial random variable, denoted Y ∼ Binomial(n; p).

Applying Theorem 4.4, we see that the expected value of a binomial random variable is the product of the number of trials and the probability of success:

EY = E( \sum_{i=1}^{n} Xi ) = \sum_{i=1}^{n} EXi = \sum_{i=1}^{n} p = np.

Furthermore, because the trials are independent, we can apply Theorem 4.8 to calculate the variance:

Var Y = Var( \sum_{i=1}^{n} Xi ) = \sum_{i=1}^{n} Var Xi = \sum_{i=1}^{n} p(1 − p) = np(1 − p).
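These two formulas are easy to verify numerically for particular n and p. The sketch below is my own (not from the text); it uses R's dbinom, the pmf counterpart of the pbinom function used later in this section, with the assumed values n = 10 and p = 0.3:

    # Sketch: check EY = np and Var Y = np(1-p) directly from the binomial pmf.
    n <- 10; p <- 0.3
    y <- 0:n
    f <- dbinom(y, size = n, prob = p)                 # pmf values f(0), ..., f(n)
    c(sum(y * f), n * p)                               # both equal 3
    c(sum(y^2 * f) - sum(y * f)^2, n * p * (1 - p))    # both equal 2.1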
Because Y counts the total number of successes in n Bernoulli trials, it should be apparent that Y(S) = {0, 1, . . . , n}. Let f denote the pmf of Y. For fixed n, p, and j ∈ Y(S), we wish to determine f(j) = P(Y = j).

To illustrate the reasoning required to make this determination, suppose that there are n = 6 trials, each with success probability p = 0.3, and that we wish to determine the probability of observing exactly j = 2 successes. Some examples of experimental outcomes for which Y = 2 include the following:

110000    000011    010010

Because the trials are mutually independent, we see that

P(110000) = 0.3 · 0.3 · 0.7 · 0.7 · 0.7 · 0.7 = 0.3² · 0.7⁴,
P(000011) = 0.7 · 0.7 · 0.7 · 0.7 · 0.3 · 0.3 = 0.3² · 0.7⁴,
P(010010) = 0.7 · 0.3 · 0.7 · 0.7 · 0.3 · 0.7 = 0.3² · 0.7⁴.

It should be apparent that the probability of each outcome for which Y = 2 is the product of j = 2 factors of p = 0.3 and n − j = 4 factors of 1 − p = 0.7. Furthermore, the number of such outcomes is the number of ways of choosing j = 2 successes from a total of n = 6 trials. Thus,

f(2) = P(Y = 2) = \binom{6}{2} 0.3² 0.7⁴

for the specific example in question and the general formula for the binomial pmf is

f(j) = P(Y = j) = \binom{n}{j} p^j (1 − p)^(n−j).

It follows, of course, that the general formula for the binomial cdf is

F(k) = P(Y ≤ k) = \sum_{j=0}^{k} P(Y = j) = \sum_{j=0}^{k} f(j) = \sum_{j=0}^{k} \binom{n}{j} p^j (1 − p)^(n−j).    (4.7)

Except for very small numbers of trials, direct calculation of (4.7) is rather tedious. Fortunately, tables of the binomial cdf for selected values of
  • 115. 4.4. BINOMIAL DISTRIBUTIONS 113 n and p are widely available, as is computer software for evaluating (4.7). In the examples that follow, we will evaluate (4.7) using the R function pbinom. As the following examples should make clear, the trick to evaluating bi- nomial probabilities is to write them in expressions that only involve prob- abilities of the form P(Y ≤ k). Example 4.15 In 10 trials with success probability 0.5, what is the probability that no more than 4 successes will be observed? Here, n = 10, p = 0.5, and we want to calculate P(Y ≤ 4) = F(4). We do so in R as follows: > pbinom(4,size=10,prob=.5) [1] 0.3769531 Example 4.16 In 12 trials with success probability 0.3, what is the probability that more than 6 successes will be observed? Here, n = 12, p = 0.3, and we want to calculate P(Y > 6) = 1 − P(Y ≤ 6) = 1 − F(6). We do so in R as follows: > 1-pbinom(6,12,.3) [1] 0.03860084 Example 4.17 In 15 trials with success probability 0.6, what is the probability that at least 5 but no more than 10 successes will be observed? Here, n = 15, p = 0.6, and we want to calculate P(5 ≤ Y ≤ 10) = P(Y ≤ 10) − P(Y ≤ 4) = F(10) − F(4). We do so in R as follows: > pbinom(10,15,.6)-pbinom(4,15,.6) [1] 0.7733746
  • 116. 114 CHAPTER 4. DISCRETE RANDOM VARIABLES Example 4.18 In 20 trials with success probability 0.9, what is the probability that exactly 16 successes will be observed? Here, n = 20, p = 0.9, and we want to calculate P(Y = 16) = P(Y ≤ 16) − P(Y ≤ 15) = F(16) − F(15). We do so in R as follows: > pbinom(16,20,.9)-pbinom(15,20,.9) [1] 0.08977883 Example 4.19 In 81 trials with success probability 0.64, what is the probability that the proportion of observed successes will be between 60 and 70 percent? Here, n = 81, p = 0.64, and we want to calculate P(0.6 < Y/81 < 0.7) = P(0.6 · 81 < Y < 0.7 · 81) = P(48.6 < Y < 56.7) = P(49 ≤ Y ≤ 56) = P(Y ≤ 56) − P(Y ≤ 48) = F(56) − F(48). We do so in R as follows: > pbinom(56,81,.64)-pbinom(48,81,.64) [1] 0.6416193 Many practical situations can be modelled using a binomial distribution. Doing so typically requires one to perform the following steps. 1. Identify what constitutes a Bernoulli trial and what constitutes a suc- cess. Verify or assume that the trials are mutually independent with a common probability of success. 2. Identify the number of trials (n) and the common probability of success (p). 3. Identify the event whose probability is to be calculated. 4. Calculate the probability of the event in question, e.g., by using the pbinom function in R.
  • 117. 4.5. EXERCISES 115 Example 4.20 RD Airlines flies planes that seat 58 passengers. Years of experience have revealed that 20 percent of the persons who purchase tickets fail to claim their seat. (Such persons are called “no-shows”.) Because of this phenomenon, RD routinely overbooks its flights, i.e., RD typically sells more than 58 tickets per flight. If more than 58 passengers show, then the “extra” passengers are “bumped” to another flight. Suppose that RD sells 64 tickets for a certain flight from Washington to New York. How might RD estimate the probability that at least one passenger will have to be bumped? 1. Each person who purchased a ticket must decide whether or not to claim his or her seat. This decision represents a Bernoulli trial, for which we will declare a decision to claim the seat a success. Strictly speaking, the Bernoulli trials in question are neither mutually inde- pendent nor identically distributed. Some individuals, e.g., families, travel together and make a common decision as to whether or not to claim their seats. Furthermore, some travellers are more likely to change their plans than others. Nevertheless, absent more detailed in- formation, we should be able to compute an approximate answer by assuming that the total number of persons who claim their seats has a binomial distribution. 2. The problem specifies that n = 64 persons have purchased tickets. Appealing to past experience, we assume that the probability that each person will show is p = 1 − 0.2 = 0.8. 3. At least one passenger will have to be bumped if more than 58 passen- gers show, so the desired probability is P(Y > 58) = 1 − P(Y ≤ 58) = 1 − F(58). 4. The necessary calculation can be performed in R as follows: > 1-pbinom(58,64,.8) [1] 0.006730152 4.5 Exercises 1. Suppose that a weighted die is tossed. Let X denote the number of dots that appear on the upper face of the die, and suppose that P(X = x) = (7 − x)/20 for x = 1, 2, 3, 4, 5 and P(X = 6) = 0. Determine each of the following:
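A natural follow-up question, posed here as my own extension rather than by the text, is how many tickets RD could sell while keeping the probability of bumping anyone acceptably small. Under the same binomial model, a short search answers it; the 5% threshold is an assumption of the sketch.

    # Sketch: under the same Binomial(n, 0.8) model, find the largest number of
    # tickets n for which the probability of bumping any passenger stays below 5%.
    n <- 59:80
    p.bump <- 1 - pbinom(58, size = n, prob = 0.8)
    max(n[p.bump < 0.05])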
  • 118. 116 CHAPTER 4. DISCRETE RANDOM VARIABLES (a) The probability mass function of X. (b) The cumulative distribution function of X. (c) The expected value of X. (d) The variance of X. (e) The standard deviation of X. 2. Suppose that a jury of 12 persons is to be selected from a pool of 25 persons who were called for jury duty. The pool comprises 12 retired persons, 6 employed persons, 5 unemployed persons, and 2 students. Assuming that each person is equally likely to be selected, answer the following: (a) What is the probability that both students will be selected? (b) What is the probability that the jury will contain exactly twice as many retired persons as employed persons? 3. When casting four astragali, a throw that results in four different uppermost sides is called a venus. (See Section 1.4.) Suppose that four astragali, {A, B, C, D} each have the following probabilities of producing the four possible uppermost faces: P(1) = P(6) = 0.1, P(3) = P(4) = 0.4. (a) Suppose that we write A = 1 to indicate the event that A produces side 1, etc. Compute P(A = 1, B = 3, C = 4, D = 6). (b) Compute P(A = 1, B = 6, C = 3, D = 4). (c) What is the probability that one throw of these four astragali will produce a venus? Hint: See Exercise 2.5.3. (d) For k = 2, k = 3, and k = 100, what is the probability that k throws of these four astragali will produce a run of k venuses? 4. Suppose that each of five astragali have the probabilities specified in the previous exercise. When throwing these five astagali, (a) What is the probability of obtaining the throw of child-eating Cronos, i.e., of obtaining three fours and two sixes? (b) What is the probability of obtaining the throw of Saviour Zeus, i.e., of obtaining one one, two threes, and two fours?
  • 119. 4.5. EXERCISES 117 Hint: See Exercise 2.5.4. 5. Koko (a cat) is trying to catch a mouse who lives under Susan’s house. The mouse has two exits, one outside and one inside, and randomly selects the outside exit 60% of the time. Each midnight, the mouse emerges for a constitutional. If Koko waits outside and the mouse chooses the outside exit, then Koko has a 20% chance of catching the mouse. If Koko waits inside, then there is a 30% chance that he will fall asleep. However, if he stays awake and the mouse chooses the inside exit, then Koko has a 40% chance of catching the mouse. (a) Is Koko more likely to catch the mouse if he waits inside or out- side? Why? (b) If Koko decides to wait outside each midnight, then what is the probability that he will catch the mouse within a week (no more than 7 nights)? 6. Three urns each contain ten gems: • Urn 1 contains 6 rubies and 4 emeralds. • Urn 2r contains 8 rubies and 2 emeralds. • Urn 2e contains 4 rubies and 6 emeralds. The following procedure is used to select two gems. First, one gem is drawn at random from urn 1. If this first gem is a ruby, then a second gem is drawn at random from urn 2r; however, if the first gem is an emerald, then the second gem is drawn at random from urn 2e. (a) Construct a tree diagram that describes this procedure. (b) What is the probability that a ruby is obtained on the second draw? (c) Suppose that the second gem is a ruby. What then is the proba- bility that the first gem was also a ruby? (d) Suppose that this procedure is independently replicated three times. What is the probability that a ruby is obtained on the second draw exactly once? (e) Suppose that this procedure is independently replicated three times and that a ruby is obtained on the second draw each time. What then is the probability that the first gem was a ruby each time?
  • 120. 118 CHAPTER 4. DISCRETE RANDOM VARIABLES 7. Arlen is planning a dinner party at which he will be able to accommo- date seven guests. From past experience, he knows that each person invited to the party will accept his invitation with probability 0.5. He also knows that each person who accepts will actually attend with probability 0.8. Suppose that Arlen invites twelve people. Assuming that they behave independently of one another, what is the probability that he will end up with more guests than he can accommodate? 8. Hotels that host conferences routinely overbook their rooms because some people who plan to attend conferences fail to arrive. A common assumption is that 10 percent of the hotel rooms reserved by conference attendees will not be claimed. In contrast, only 4 percent of the per- sons who reserve hotel rooms for the annual Joint Statistical Meetings (JSM) fail to claim them. Suppose that a certain hotel has 100 rooms. Incorrectly believing that statisticians behave like normal people, the hotel accepts 110 room reservations for JSM. What is the probability that the hotel will have to turn away statisticians who have reserved rooms? 9. A small liberal arts college receives applications for admission from 1000 high school seniors. The college has dormitory space for a fresh- man class of 95 students and will have to arrange for off-campus hous- ing for any additional freshmen. In previous years, an average of 64 percent of the students that the college has accepted have elected to attend another school. Clearly the college should accept more than 95 students, but its administration does not want to take too big a chance that it will have to accommodate more than 95 students. Af- ter some delibration, the administrators decide to accept 225 students. Answer the following questions as well as you can with the information provided. (a) How many freshmen do you expect that the college will have to accommodate? (b) What is the the probability that the college will have to arrange for some freshmen to live off-campus? 10. In NCAA tennis matches, line calls are made by the players. If an um- pire is observing the match, then a player can challenge an opponent’s call. The umpire will either affirm or overrule the challenged call. In one of their recent team matches, the William & Mary women’s tennis
  • 121. 4.5. EXERCISES 119 team challenged 38 calls by their opponents. The umpires overruled 12 of the challenged calls. This struck Nina and Delphine as significant, as it is their impression that approximately 20 percent of all challenged calls in NCAA tennis matches are overruled. Let us assume that their impression is correct. (a) What is the probability that chance variation would result in at least 12 of 38 challenged calls being overruled? (b) Suppose that the William & Mary women’s tennis team plays 25 team matches next year and challenges exactly 38 calls in each match. (In fact, the number of challenged calls varies from match to match.) What is the probability that they will play at least one team match in which at least 12 challenged calls are overruled? 11. The Association for Research and Enlightenment (ARE) in Virginia Beach, VA, offers daily demonstrations of a standard technique for testing extrasensory perception (ESP). A “sender” is seated before a box on which one of five symbols (plus, square, star, circle, wave) can be illuminated. A random mechanism selects symbols in such a way that each symbol is equally likely to be illuminated. When a symbol is illuminated, the sender concentrates on it and a “receiver” attempts to identify which symbol has been selected. The receiver indicates a symbol on the receiver’s box, which sends a signal to the sender’s box that cues it to select and illuminate another symbol. This process of illuminating, sending, and receiving a symbol is repeated 25 times. Each selection of a symbol to be illuminated is independent of the others. The receiver’s score (for a set of 25 trials) is the number of symbols that s/he correctly identifies. For the purpose of this exercise, please suppose that ESP does not exist. (a) How many symbols should we expect the receiver to identify cor- rectly? (b) The ARE considers a score of more than 7 matches to be indica- tive of ESP. What is the probability that the receiver will provide such an indication? (c) The ARE provides all audience members with scoring sheets and invites them to act as receivers. Suppose that, as on August 31, 2002, there are 21 people in attendance: 1 volunteer sender, 1 volunteer receiver, and 19 additional receivers in the audience.
  • 122. 120 CHAPTER 4. DISCRETE RANDOM VARIABLES What is the probability that at least one of the 20 receivers will attain a score indicative of ESP? 12. Mike teaches two sections of Applied Statistics each year for thirty years, for a total of 1500 students. Each of his students spins a penny 89 times and counts the number of Heads. Assuming that each of these 1500 pennies has P(Heads) = 0.3 for a single spin, what is the probability that Mike will encounter at least one student who observes no more than two Heads?
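Several of the preceding exercises can be checked numerically in R. The sketch below is illustrative rather than definitive: the replication count and variable names are arbitrary choices, the reading of Exercise 7 (each invitee attends independently with probability 0.5 × 0.8 = 0.4) is one natural interpretation, and simulation only approximates exact answers. The first block mimics the two-stage drawing procedure of Exercise 6; the second uses pbinom for the binomial calculation in Exercise 7 and notes the "at least one occurrence in k replications" pattern that appears in Exercises 10–12.

> # Exercise 6: simulate the two-stage gem-drawing procedure
> n <- 100000
> first <- sample(c("ruby", "emerald"), n, replace = TRUE, prob = c(0.6, 0.4))
> p2 <- ifelse(first == "ruby", 0.8, 0.4)     # chance of a ruby on the second draw
> second <- rbinom(n, 1, p2)                  # 1 indicates a ruby on the second draw
> mean(second)                                # estimates the answer to 6(b)
> mean(first[second == 1] == "ruby")          # estimates the answer to 6(c)
>
> # Exercise 7: a binomial calculation
> p <- 0.5 * 0.8                              # probability that an invited person attends
> 1 - pbinom(7, size = 12, prob = p)          # P(more than 7 of the 12 invitees attend)
> # Exercises 10-12 use the pattern: if one replication succeeds with
> # probability p1, then 1 - (1 - p1)^k is the probability of at least
> # one success in k independent replications.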
Chapter 5

Continuous Random Variables

5.1 A Motivating Example

Some of the concepts that were introduced in Chapter 4 pose technical difficulties when the random variable is not discrete. In this section, we illustrate some of these difficulties by considering a random variable X whose set of possible values is the unit interval, i.e., X(S) = [0, 1]. Specifically, we ask the following question:

What probability distribution formalizes the notion of "equally likely" outcomes in the unit interval [0, 1]?

When studying finite sample spaces in Section 3.3, we formalized the notion of "equally likely" by assigning the same probability to each individual outcome in the sample space. Thus, if S = {s1, . . . , sN}, then P({si}) = 1/N. This construction sufficed to define probabilities of events: if E ⊂ S, then E = {si1, . . . , sik}; and consequently

P(E) = P(∪_{j=1}^{k} {s_{i_j}}) = ∑_{j=1}^{k} P({s_{i_j}}) = ∑_{j=1}^{k} 1/N = k/N.

Unfortunately, the present example does not work out quite so neatly.
How should we assign P(X = 0.5)? Of course, we must have 0 ≤ P(X = 0.5) ≤ 1. If we try P(X = 0.5) = ε for any real number ε > 0, then a difficulty arises. Because we are assuming that every value in the unit interval is equally likely, it must be that P(X = x) = ε for every x ∈ [0, 1]. Consider the event

E = {1/2, 1/3, 1/4, . . .}.   (5.1)

Then we must have

P(E) = P(∪_{j=2}^{∞} {1/j}) = ∑_{j=2}^{∞} P({1/j}) = ∑_{j=2}^{∞} ε = ∞,   (5.2)

which we cannot allow. Hence, we must assign a probability of zero to the outcome x = 0.5 and, because all outcomes are equally likely, P(X = x) = 0 for every x ∈ [0, 1].

Because every x ∈ [0, 1] is a possible outcome, our conclusion that P(X = x) = 0 is initially somewhat startling. However, it is a mistake to identify impossibility with zero probability. In Section 3.2, we established that the impossible event (empty set) has probability zero, but we did not say that it is the only such event. To avoid confusion, we now emphasize:

If an event is impossible, then it necessarily has probability zero; however, having probability zero does not necessarily mean that an event is impossible.

If P(X = x) = ε = 0, then the calculation in (5.2) reveals that the event defined by (5.1) has probability zero. Furthermore, there is nothing special about this particular event—the probability of any countable event must be zero! Hence, to obtain positive probabilities, e.g., P(X ∈ [0, 1]) = 1, we must consider events whose cardinality is more than countable.

Consider the events [0, 0.5] and [0.5, 1]. Because all outcomes are equally likely, these events must have the same probability, i.e., P(X ∈ [0, 0.5]) = P(X ∈ [0.5, 1]). Because [0, 0.5] ∪ [0.5, 1] = [0, 1] and P(X = 0.5) = 0, we have

1 = P(X ∈ [0, 1]) = P(X ∈ [0, 0.5]) + P(X ∈ [0.5, 1]) − P(X = 0.5) = P(X ∈ [0, 0.5]) + P(X ∈ [0.5, 1]).

Combining these equations, we deduce that each event has probability 1/2. This is an intuitively pleasing conclusion: it says that, if outcomes are equally
  • 125. 5.1. A MOTIVATING EXAMPLE 123 likely, then the probability of each subinterval equals the proportion of the entire interval occupied by the subinterval. In mathematical notation, our conclusion can be expressed as follows: Suppose that X(S) = [0, 1] and each x ∈ [0, 1] is equally likely. If 0 ≤ a ≤ b ≤ 1, then P (X ∈ [a, b]) = b − a. Notice that statements like P(X ∈ [0, 0.5]) = 0.5 cannot be deduced from knowledge that each P(X = x) = 0. To construct a probability distribution for this situation, it is necessary to assign probabilities to intervals, not just to individual points. This fact reveals the reason that, in Section 3.2, we introduced the concept of an event and insisted that probabilities be assigned to events rather than to outcomes. The probability distribution that we have constructed is called the con- tinuous uniform distribution on the interval [0, 1], denoted Uniform[0, 1]. If X ∼ Uniform[0, 1], then the cdf of X is easily computed: • If y < 0, then F(y) = P(X ≤ y) = P (X ∈ (−∞, y]) = 0. • If y ∈ [0, 1], then F(y) = P(X ≤ y) = P (X ∈ (−∞, 0)) + P (X ∈ [0, y]) = 0 + (y − 0) = y. • If y > 1, then F(y) = P(X ≤ y) = P (X ∈ (−∞, 0)) + P (X ∈ [0, 1]) + P (X ∈ (1, y)) = 0 + (1 − 0) + 0 = 1. This function is plotted in Figure 5.1.
Figure 5.1: The cumulative distribution function of X ∼ Uniform(0, 1).

What about the pmf of X? In Section 4.1, we defined the pmf of a discrete random variable by f(x) = P(X = x); we then used the pmf to calculate the probabilities of arbitrary events. In the present situation, P(X = x) = 0 for every x, so the pmf is not very useful. Instead of representing the probabilities of individual points, we need to represent the probabilities of intervals. Consider the function

f(x) = { 0  if x ∈ (−∞, 0);  1  if x ∈ [0, 1];  0  if x ∈ (1, ∞) },   (5.3)

which is plotted in Figure 5.2. Notice that f is constant on X(S) = [0, 1], the set of equally likely possible values, and vanishes elsewhere. If 0 ≤ a ≤ b ≤ 1, then the area under the graph of f between a and b is the area of a rectangle with base b − a (horizontal direction) and height 1 (vertical direction). Hence, the area in question is (b − a) · 1 = b − a = P(X ∈ [a, b]), so that the probabilities of intervals can be determined from f. In the next section, we will base our definition of continuous random variables on this observation.
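For readers following along in R, the cdf and pdf constructed above are available as punif and dunif, and the area interpretation can be verified numerically with integrate. A brief sketch (the evaluation points are arbitrary choices):

> punif(c(-0.5, 0.3, 1.7))          # F(-0.5), F(0.3), F(1.7) for X ~ Uniform[0, 1]
[1] 0.0 0.3 1.0
> integrate(dunif, 0.2, 0.7)$value  # area under the pdf between 0.2 and 0.7
[1] 0.5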
Figure 5.2: The probability density function of X ∼ Uniform(0, 1).

5.2 Basic Concepts

Consider the graph of a function f : ℜ → ℜ, as depicted in Figure 5.3. Our interest is in the area of the shaded region. This region is bounded by the graph of f, the horizontal axis, and vertical lines at the specified endpoints a and b. We denote this area by Area[a,b](f). Our intent is to identify such areas with the probabilities that random variables assume certain values.

For a very few functions, such as the one defined in (5.3), it is possible to determine Area[a,b](f) by elementary geometric calculations. For most functions, some knowledge of calculus is required to determine Area[a,b](f). Because we assume no previous knowledge of calculus, we will not be concerned with such calculations. Nevertheless, for the benefit of those readers who know some calculus, we find it helpful to borrow some notation and write

Area[a,b](f) = ∫_a^b f(x) dx.   (5.4)

Readers who have no knowledge of calculus should interpret (5.4) as a definition of its right-hand side, which is pronounced "the integral of f from a to b". Readers who are familiar with the Riemann (or Lebesgue) integral should interpret this notation in its conventional sense.

Figure 5.3: A continuous probability density function.

We now introduce an alternative to the probability mass function.

Definition 5.1 A probability density function (pdf) is a function f : ℜ → ℜ such that

1. f(x) ≥ 0 for every x ∈ ℜ.

2. Area(−∞,∞)(f) = ∫_{−∞}^{∞} f(x) dx = 1.

Notice that the definition of a pdf is analogous to the definition of a pmf. Each is nonnegative and assigns unit probability to the set of possible values. The only difference is that summation in the definition of a pmf is replaced with integration in the case of a pdf.

Definition 5.1 was made without reference to a random variable—we now use it to define a new class of random variables.

Definition 5.2 A random variable X is continuous if there exists a probability density function f such that

P(X ∈ [a, b]) = ∫_a^b f(x) dx.

It is immediately apparent from this definition that the cdf of a continuous random variable X is

F(y) = P(X ≤ y) = P(X ∈ (−∞, y]) = ∫_{−∞}^{y} f(x) dx.   (5.5)
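Definitions 5.1 and 5.2 can be explored numerically in R. In the sketch below, the candidate pdf f(x) = 3x² on [0, 1] is simply an illustrative choice; integrate supplies the areas that calculus would otherwise provide:

> f <- function(x) ifelse(x >= 0 & x <= 1, 3 * x^2, 0)  # a candidate pdf
> integrate(f, 0, 1)$value      # condition 2 of Definition 5.1: total area 1
[1] 1
> integrate(f, 0.2, 0.5)$value  # P(X in [0.2, 0.5]), as in Definition 5.2
[1] 0.117
> integrate(f, 0, 0.5)$value    # F(0.5), as in equation (5.5)
[1] 0.125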
  • 129. 5.2. BASIC CONCEPTS 127 Equation (5.5) should be compared to equation (4.1). In both cases, the value of the cdf at y is represented as the accumulation of values of the pmf/pdf at x ≤ y. The difference lies in the nature of the accumulating pro- cess: summation for the discrete case (pmf), integration for the continuous case (pdf). Remark for Calculus Students: By applying the Fundamen- tal Theorem of Calculus to (5.5), we deduce that the pdf of a continuous random variable is the derivative of its cdf: d dy F(y) = d dy Z y −∞ f(x)dx = f(y). Remark on Notation: It may strike the reader as curious that we have used f to denote both the pmf of a discrete random variable and the pdf of a continuous random variable. However, as our discussion of their relation to the cdf is intended to sug- gest, they play analogous roles. In advanced, measure-theoretic courses on probability, one learns that our pmf and pdf are ac- tually two special cases of one general construction. Likewise, the concept of expectation for continuous random variables is analogous to the concept of expectation for discrete random variables. Because P(X = x) = 0 if X is a continuous random variable, the notion of a probability-weighted average is not very useful in the continuous setting. However, if X is a discrete random variable, then P(X = x) = f(x) and a probability-weighted average is identical to a pmf-weighted average. The notion of a pmf-weighted average is easily extended to the continuous setting: if X is a continuous random variable, then we introduce a pdf-weighted average of the possible values of X, where averaging is accomplished by replacing summation with integration. Definition 5.3 Suppose that X is a continuous random variable with prob- ability density function f. Then the expected value of X is µ = EX = Z ∞ −∞ xf(x)dx, assuming that this quantity exists.
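Remark for R Users: Although no calculus is assumed in this book, pdf-weighted averages can be approximated numerically. A minimal sketch, using the same illustrative pdf f(x) = 3x² on [0, 1] (for which the exact expected value is 3/4):

> f <- function(x) 3 * x^2                     # illustrative pdf on [0, 1]
> integrate(function(x) x * f(x), 0, 1)$value  # EX, the pdf-weighted average
[1] 0.75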
  • 130. 128 CHAPTER 5. CONTINUOUS RANDOM VARIABLES If the function g : ℜ → ℜ is such that Y = g(X) is a random variable, then it can be shown that EY = Eg(X) = Z ∞ −∞ g(x)f(x)dx, assuming that this quantity exists. In particular, Definition 5.4 If µ = EX exists and is finite, then the variance of X is σ2 = VarX = E(X − µ)2 = Z ∞ −∞ (x − µ)2 f(x)dx. Thus, for discrete and continuous random variables, the expected value is the pmf/pdf-weighted average of the possible values and the variance is the pmf/pdf-weighted average of the squared deviations of the possible values from the expected value. Because calculus is required to compute the expected value and variance of most continuous random variables, our interest in these concepts lies not in computing them but in understanding what information they convey. We will return to this subject in Chapter 6. 5.3 Elementary Examples In this section we consider some examples of continuous random variables for which probabilities can be calculated without recourse to calculus. Example 5.1 What is the probability that a battery-powered wristwatch will stop with its minute hand positioned between 10 and 20 minutes past the hour? To answer this question, let X denote the number of minutes past the hour to which the minute hand points when the watch stops. Then the possible values of X are X(S) = [0, 60) and it is reasonable to assume that each value is equally likely. We must compute P(X ∈ (10, 20)). Because these values occupy one sixth of the possible values, it should be obvious that the answer is going to be 1/6. To obtain the answer using the formal methods of probability, we require a generalization of the Uniform[0, 1] distribution that we studied in Section 5.1. The pdf that describes the notion of equally likely values in the interval
[0, 60) is

f(x) = { 0  if x ∈ (−∞, 0);  1/60  if x ∈ [0, 60);  0  if x ∈ [60, ∞) }.   (5.6)

To check that f is really a pdf, observe that f(x) ≥ 0 for every x ∈ ℜ and that Area[0,60)(f) = (60 − 0) · (1/60) = 1. Notice the analogy between the pdfs (5.6) and (5.3). The present pdf defines the continuous uniform distribution on the interval [0, 60); thus, we describe the present situation by writing X ∼ Uniform[0, 60). To calculate the specified probability, we must determine the area of the shaded region in Figure 5.4, i.e.,

P(X ∈ (10, 20)) = Area(10,20)(f) = (20 − 10) · (1/60) = 1/6.

Figure 5.4: The probability density function of X ∼ Uniform[0, 60).

Example 5.2 Consider two battery-powered watches. Let X1 denote the number of minutes past the hour at which the first watch stops and let X2 denote the number of minutes past the hour at which the second watch stops. What is the probability that the larger of X1 and X2 will be between 30 and 50?
Here we have two independent random variables, each distributed as Uniform[0, 60), and a third random variable, Y = max(X1, X2). Let F denote the cdf of Y. We want to calculate P(30 < Y < 50) = F(50) − F(30).

We proceed to derive the cdf of Y. It is evident that Y(S) = [0, 60), so F(y) = 0 if y < 0 and F(y) = 1 if y ≥ 60. If y ∈ [0, 60), then (by the independence of X1 and X2)

F(y) = P(Y ≤ y) = P(max(X1, X2) ≤ y) = P(X1 ≤ y, X2 ≤ y) = P(X1 ≤ y) · P(X2 ≤ y) = [(y − 0)/(60 − 0)] · [(y − 0)/(60 − 0)] = y²/3600.

Thus, the desired probability is

P(30 < Y < 50) = F(50) − F(30) = 50²/3600 − 30²/3600 = 4/9.

Figure 5.5: The probability density function for Example 5.2.
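A quick Monte Carlo check of this answer can be run in R; the simulation size is an arbitrary choice, and the estimate should fall near 4/9 ≈ 0.444:

> x1 <- runif(100000, min = 0, max = 60)
> x2 <- runif(100000, min = 0, max = 60)
> y <- pmax(x1, x2)        # the larger of the two stopping positions
> mean(y > 30 & y < 50)    # estimates P(30 < Y < 50); compare with 4/9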
  • 133. 5.3. ELEMENTARY EXAMPLES 131 In preparation for Example 5.3, we claim that the pdf of Y is f(y) =      0 y ∈ (−∞, 0) y/1800 y ∈ [0, 60) 0 y ∈ [60, ∞)      , which is graphed in Figure 5.5. To check that f is really a pdf, observe that f(y) ≥ 0 for every y ∈ ℜ and that Area[0,60)(f) = 1 2 (60 − 0) 60 1800 = 1. To check that f is really the pdf of Y , observe that f(y) = 0 if y 6∈ [0, 60) and that, if y ∈ [0, 60), then P (Y ∈ [0, y)) = P(Y ≤ y) = F(y) = y2 3600 = 1 2 (y − 0) y 1800 = Area[0,y)(f). If the pdf had been specified, then instead of deriving the cdf we would have simply calculated P(30 < Y < 50) = Area(30,50)(f) by any of several convenient geometric arguments. Example 5.3 Consider two battery-powered watches. Let X1 denote the number of minutes past the hour at which the first watch stops and let X2 denote the number of minutes past the hour at which the second watch stops. What is the probability that the sum of X1 and X2 will be between 45 and 75? Again we have two independent random variables, each distributed as Uniform[0, 60), and a third random variable, Z = X1 + X2. We want to calculate P(45 < Z < 75) = P (Z ∈ (45, 75)) . It is apparent that Z(S) = [0, 120). Although we omit the derivation, it can be determined mathematically that the pdf of Z is f(z) =          0 z ∈ (−∞, 0) z/3600 z ∈ [0, 60) (120 − z)/3600 z ∈ [60, 120) 0 z ∈ [120, ∞)          .
Figure 5.6: The probability density function for Example 5.3.

This pdf is graphed in Figure 5.6, in which it is apparent that the area of the shaded region is

P(45 < Z < 75) = P(Z ∈ (45, 75)) = Area(45,75)(f) = 1 − (1/2)(45 − 0)(45/3600) − (1/2)(120 − 75)[(120 − 75)/3600] = 1 − 45²/60² = 7/16.

5.4 Normal Distributions

We now introduce the most important family of distributions in probability or statistics, the familiar bell-shaped curve.

Definition 5.5 A continuous random variable X is normally distributed with mean µ and variance σ² > 0, denoted X ∼ Normal(µ, σ²), if the pdf of X is

f(x) = [1/(√(2π) σ)] exp[ −(1/2) ((x − µ)/σ)² ].   (5.7)

Although we will not make extensive use of (5.7), a great many useful properties of normal distributions can be deduced directly from it. Most of the following properties can be discerned in Figure 5.7.
Figure 5.7: The probability density function of X ∼ Normal(µ, σ²).

1. f(x) > 0. It follows that, for any nonempty interval (a, b), P(X ∈ (a, b)) = Area(a,b)(f) > 0, and hence that X(S) = (−∞, +∞).

2. f is symmetric about µ, i.e., f(µ + x) = f(µ − x).

3. f(x) decreases as |x − µ| increases. In fact, the decrease is very rapid. We express this by saying that f has very light tails.

4. P(µ − σ < X < µ + σ) ≈ 0.683.

5. P(µ − 2σ < X < µ + 2σ) ≈ 0.954.

6. P(µ − 3σ < X < µ + 3σ) ≈ 0.997.

Notice that there is no one normal distribution, but a 2-parameter family of uncountably many normal distributions. In fact, if we plot µ on a horizontal axis and σ > 0 on a vertical axis, then there is a distinct normal distribution for each point in the upper half-plane. However, Properties 4–6
  • 136. 134 CHAPTER 5. CONTINUOUS RANDOM VARIABLES above, which hold for all choices of µ and σ, suggest that there is a funda- mental equivalence between different normal distributions. It turns out that, if one can compute probabilities for any one normal distribution, then one can compute probabilities for any other normal distribution. In anticipation of this fact, we distinguish one normal distribution to serve as a reference distribution: Definition 5.6 The standard normal distribution is Normal(0, 1). The following result is of enormous practical value: Theorem 5.1 If X ∼ Normal(µ, σ2), then Z = X − µ σ ∼ Normal(0, 1). The transformation Z = (X − µ)/σ is called conversion to standard units. Detailed tables of the standard normal cdf are widely available, as is computer software for calculating specified values. Combined with Theorem 5.1, this availability allows us to easily compute probabilities for arbitrary normal distributions. In the following examples, we let Φ denote the cdf of Z ∼ Normal(0, 1) and we make use of the R function pnorm. Example 5.4a If X ∼ Normal(1, 4), then what is the probability that X assumes a value no more than 3? Here, µ = 1, σ = 2, and we want to calculate P(X ≤ 3) = P µ X − µ σ ≤ 3 − µ σ ¶ = P µ Z ≤ 3 − 1 2 = 1 ¶ = Φ(1). We do so in R as follows: > pnorm(1) [1] 0.8413447 Remark The R function pnorm accepts optional arguments that specify a mean and standard deviation. Thus, in Example 5.4a, we could directly evaluate P(X ≤ 3) as follows: > pnorm(3,mean=1,sd=2) [1] 0.8413447 This option, of course, is not available if one is using a table of the standard normal cdf. Because the transformation to standard units plays such a fundamental role in probability and statistics, we will emphasize computing normal probabilities via the standard normal distribution.
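Properties 4–6 of Section 5.4 can be checked directly with pnorm: after conversion to standard units, each statement is a standard normal probability.

> pnorm(1) - pnorm(-1)   # P(mu - sigma < X < mu + sigma)
[1] 0.6826895
> pnorm(2) - pnorm(-2)   # P(mu - 2*sigma < X < mu + 2*sigma)
[1] 0.9544997
> pnorm(3) - pnorm(-3)   # P(mu - 3*sigma < X < mu + 3*sigma)
[1] 0.9973002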
  • 137. 5.4. NORMAL DISTRIBUTIONS 135 Example 5.4b If X ∼ Normal(−1, 9), then what is the probability that X assumes a value of at least −7? Here, µ = −1, σ = 3, and we want to calculate P(X ≥ −7) = P µ X − µ σ ≥ −7 − µ σ ¶ = P µ Z ≥ −7 + 1 3 = −2 ¶ = 1 − P(Z < −2) = 1 − Φ(−2). We do so in R as follows: > 1-pnorm(-2) [1] 0.9772499 Example 5.4c If X ∼ Normal(2, 16), then what is the probability that X assumes a value between 0 and 10? Here, µ = 2, σ = 4, and we want to calculate P(0 < X < 10) = P µ 0 − µ σ < X − µ σ < 10 − µ σ ¶ = P µ −0.5 = 0 − 2 4 < Z < 10 − 2 4 = 2 ¶ = P(Z < 2) − P(Z < −0.5) = Φ(2) − Φ(−0.5). We do so in R as follows: > pnorm(2)-pnorm(-.5) [1] 0.6687123 Example 5.4d If X ∼ Normal(−3, 25), then what is the probability that |X| assumes a value greater than 10? Here, µ = −3, σ = 5, and we want to calculate P(|X| > 10) = P(X > 10 or X < −10) = P(X > 10) + P(X < −10) = P µ X − µ σ > 10 − µ σ ¶ + P µ X − µ σ < −10 − µ σ ¶ = P µ Z > 10 + 3 5 = 2.6 ¶ + P µ Z < −10 + 3 5 = −1.2 ¶ = 1 − Φ(2.6) + Φ(−1.2).
  • 138. 136 CHAPTER 5. CONTINUOUS RANDOM VARIABLES We do so in R as follows: > 1-pnorm(2.6)+pnorm(-1.2) [1] 0.1197309 Example 5.4e If X ∼ Normal(4, 16), then what is the probability that X2 assumes a value less than 36? Here, µ = 4, σ = 4, and we want to calculate P(X2 < 36) = P(−6 < X < 6) = P µ −6 − µ σ < X − µ σ < 6 − µ σ ¶ = P µ −2.5 = −6 − 4 4 < Z < 6 − 4 4 = 0.5 ¶ = P(Z < 0.5) − P(Z < −2.5) = Φ(0.5) − Φ(−2.5). We do so in R as follows: > pnorm(.5)-pnorm(-2.5) [1] 0.6852528 We defer an explanation of why the family of normal distributions is so important until Section 8.3, concluding the present section with the following useful result: Theorem 5.2 If X1 ∼ Normal(µ1, σ2 1) and X2 ∼ Normal(µ2, σ2 2) are inde- pendent, then X1 + X2 ∼ Normal(µ1 + µ2, σ2 1 + σ2 2). 5.5 Normal Sampling Distributions A number of important probability distributions can be derived by consider- ing various functions of normal random variables. These distributions play important roles in statistical inference. They are rarely used to describe data; rather, they arise when analyzing data that is sampled from a normal distribution. For this reason, they are sometimes called sampling distribu- tions. This section collects some definitions of and facts about several important sampling distributions. It is not important to read this section until you encounter these distributions in later chapters; however, it is convenient to collect this material in one easy-to-find place.
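Theorem 5.2 can be illustrated by simulation. In the following sketch the particular means, variances, and simulation size are arbitrary choices; the theorem says that X1 + X2 should behave like a Normal(4, 5) random variable here:

> x1 <- rnorm(100000, mean = 1, sd = 2)   # X1 ~ Normal(1, 4)
> x2 <- rnorm(100000, mean = 3, sd = 1)   # X2 ~ Normal(3, 1)
> s <- x1 + x2
> mean(s); var(s)                         # should be close to 4 and 5
> mean(s <= 6)                            # compare with the theoretical value below
> pnorm(6, mean = 4, sd = sqrt(5))        # P(X1 + X2 <= 6) according to Theorem 5.2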
  • 139. 5.5. NORMAL SAMPLING DISTRIBUTIONS 137 Chi-Squared Distributions Suppose that Z1, . . . , Zn ∼ Normal(0, 1) and consider the continuous random variable Y = Z2 1 + · · · + Z2 n. Because each Z2 i ≥ 0, the set of possible values of Y is Y (S) = [0, ∞). We are interested in the distribution of Y . The distribution of Y belongs to a family of probability distributions called the chi-squared family. This family is indexed by a single real-valued parameter, ν ∈ [1, ∞), called the degrees of freedom parameter. We will denote a chi-squared distribution with ν degrees of freedom by χ2(ν). Figure 5.8 displays the pdfs of several chi-squared distributions. y f(y) 0 2 4 6 8 10 0.0 0.5 1.0 1.5 Figure 5.8: The probability density functions of Y ∼ χ2(ν) for ν = 1, 3, 5. The following fact is quite useful: Theorem 5.3 If Z1, . . . , Zn ∼ Normal(0, 1) and Y = Z2 1 + · · · + Z2 n, then Y ∼ χ2(n). In theory, this fact allows one to compute the probabilities of events defined by values of Y , e.g., P(Y > 4.5). In practice, this requires evaluating the
  • 140. 138 CHAPTER 5. CONTINUOUS RANDOM VARIABLES cdf of χ2(ν), a function for which there is no simple formula. Fortunately, there exist efficient algorithms for numerically evaluating these cdfs. The R function pchisq returns values of the cdf of any specified chi-squared distribution. For example, if Y ∼ χ2(2), then P(Y > 4.5) is > 1-pchisq(4.5,df=2) [1] 0.1053992 Finally, if Zi ∼ Normal(0, 1), then EZ2 i = Var Zi + (EZi)2 = 1. It follows that EY = E Ã n X i=1 Z2 i ! = n X i=1 EZ2 i = n X i=1 1 = n; thus, Corollary 5.1 If Y ∼ χ2(n), then EY = n. Student’s t Distributions Now let Z ∼ Normal(0, 1) and Y ∼ χ2(ν) be independent random variables and consider the continuous random variable T = Z p Y/ν . The set of possible values of T is T(S) = (−∞, ∞). We are interested in the distribution of T. Definition 5.7 The distribution of T is called a t distribution with ν degrees of freedom. We will denote this distribution by t(ν). The standard normal distribution is symmetric about the origin; i.e., if Z ∼ Normal(0, 1), then −Z ∼ Normal(0, 1). It follows that T = Z/ p Y/ν and −T = −Z/ p Y/ν have the same distribution. Hence, if p is the pdf of T, then it must be that p(t) = p(−t). Thus, t pdfs are symmetric about the origin, just like the standard normal pdf. Figure 5.9 displays the pdfs of two t distributions. They can be dis- tinguished by virtue of the fact that the variance of t(ν) decreases as ν increases. It may strike you that t pdfs closely resemble normal pdfs. In fact, the standard normal pdf is a limiting case of the t pdfs:
  • 141. 5.5. NORMAL SAMPLING DISTRIBUTIONS 139 y f(y) -4 -2 0 2 4 0.0 0.1 0.2 0.3 0.4 Figure 5.9: The probability density functions of T ∼ t(ν) for ν = 5, 30. Theorem 5.4 Let Fν denote the cdf of t(ν) and let Φ denote the cdf of Normal(0, 1). Then lim ν→∞ Fν(t) = Φ(t) for every t ∈ (−∞, ∞). Thus, when ν is sufficiently large (ν > 40 is a reasonable rule of thumb), t(ν) is approximately Normal(0, 1) and probabilities involving the former can be approximated by probabilities involving the latter. In R, it is just as easy to calculate t(ν) probabilities as it is to calculate Normal(0, 1) probabilities. The R function pt returns values of the cdf of any specified t distribution. For example, if T ∼ t(14), then P(T ≤ −1.5) is > pt(-1.5,df=14) [1] 0.07791266
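Theorem 5.4 is easy to explore numerically: as ν increases, t(ν) probabilities approach the corresponding standard normal probabilities. The evaluation point −1.5 below simply matches the preceding example:

> pt(-1.5, df = 5)      # heavier tails: larger than the normal probability
> pt(-1.5, df = 40)
> pt(-1.5, df = 1000)   # nearly indistinguishable from the normal probability
> pnorm(-1.5)
[1] 0.0668072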
  • 142. 140 CHAPTER 5. CONTINUOUS RANDOM VARIABLES Fisher’s F Distributions Finally, let Y1 ∼ χ2(ν1) and Y2 ∼ χ2(ν2) be independent random variables and consider the continuous random variable F = Y1/ν1 Y2/ν2 . Because Yi ≥ 0, the set of possible values of F is F(S) = [0, ∞). We are interested in the distribution of F. Definition 5.8 The distribution of F is called an F distribution with ν1 and ν2 degrees of freedom. We will denote this distribution by F(ν1, ν2). It is customary to call ν1 the “numerator” degrees of freedom and ν2 the “denominator” degrees of freedom. Figure 5.10 displays the pdfs of several F distributions. y f(y) 0 2 4 6 8 0.0 0.2 0.4 0.6 0.8 Figure 5.10: The probability density functions of F ∼ F(ν1, ν2) for (ν1, ν2) = (2, 12), (4, 20), (9, 10). There is an important relation between t and F distributions. To antic- ipate it, suppose that Z ∼ Normal(0, 1) and Y2 ∼ χ2(ν2) are independent
  • 143. 5.6. EXERCISES 141 random variables. Then Y1 = Z2 ∼ χ2(1), so T = Z p Y2/ν2 ∼ t (ν2) and T2 = Z2 Y2/ν2 = Y1/1 Y2/ν2 ∼ F (1, ν2) . More generally, Theorem 5.5 If T ∼ t(ν), then T2 ∼ F(1, ν). The R function pf returns values of the cdf of any specified F distribution. For example, if F ∼ F(2, 27), then P(F > 2.5) is > 1-pf(2.5,df1=2,df2=27) [1] 0.1008988 5.6 Exercises 1. In this problem you will be asked to examine two equations. Several symbols from each equation will be identified. Your task will be to decide which symbols represent real numbers and which symbols rep- resent functions. If a symbol represents a function, then you should state the domain and the range of that function. Recall: A function is a rule of assignment. The set of labels that the function might possibly assign is called the range of the function; the set of objects to which labels are assigned is called the domain. For example, when I grade your test, I assign a numeric value to your name. Grading is a function that assigns real numbers (the range) to students (the domain). (a) In the equation p = P (Z > 1.96), please identify each of the following symbols as a real number or a function: i. p ii. P iii. Z (b) In the equation σ2 = E (X − µ)2 , please identify each of the fol- lowing symbols as a real number or a function:
  • 144. 142 CHAPTER 5. CONTINUOUS RANDOM VARIABLES i. σ ii. E iii. X iv. µ 2. Suppose that X is a continuous random variable with probability den- sity function (pdf) f defined as follows: f(x) =      0 if x < 1 2(x − 1) if 1 ≤ x ≤ 2 0 if x > 2      . (a) Graph f. (b) Verify that f is a pdf. (c) Compute P(1.50 < X < 1.75). 3. Consider the function f : ℜ → ℜ defined by f(x) =          0 x < 0 cx 0 < x < 1.5 c(3 − x) 1.5 < x < 3 0 x > 3          , where c is an undetermined constant. (a) For what value of c is f a probability density function? (b) Suppose that a continuous random variable X has probability density function f. Compute EX. (Hint: Draw a picture of the pdf.) (c) Compute P(X > 2). (d) Suppose that Y ∼ Uniform(0, 3). Which random variable has the larger variance, X or Y ? (Hint: Draw a picture of the two pdfs.) (e) Determine and graph the cumulative distribution function of X. 4. Imagine throwing darts at a circular dart board, B. Let us measure the dart board in units for which the radius of B is 1, so that the area of B is π. Suppose that the darts are thrown in such a way that they are certain to hit a point in B, and that each point in B is equally
  • 145. 5.6. EXERCISES 143 likely to be hit. Thus, if A ⊂ B, then the probability of hitting a point in A is P(A) = area(A) area(B) = area(A) π . Define the random variable X to be the distance from the center of B to the point that is hit. (a) What are the possible values of X? (b) Compute P(X ≤ 0.5). (c) Compute P(0.5 < X ≤ 0.7). (d) Determine and graph the cumulative distribution function of X. (e) [Optional—for those who know a little calculus.] Determine and graph the probability density function of X. 5. Imagine throwing darts at a triangular dart board, B = {(x, y) : 0 ≤ y ≤ x ≤ 1} . Suppose that the darts are thrown in such a way that they are certain to hit a point in B, and that each point in B is equally likely to be hit. Define the random variable X to be the value of the x-coordinate of the point that is hit, and define the random variable Y to be the value of the y-coordinate of the point that is hit. (a) Draw a picture of B. (b) Compute P(X ≤ 0.5). (c) Determine and graph the cumulative distribution function of X. (d) Are X and Y independent? 6. Let X be a normal random variable with mean µ = −5 and standard deviation σ = 10. Compute the following: (a) P(X < 0) (b) P(X > 5) (c) P(−3 < X < 7) (d) P(|X + 5| < 10) (e) P(|X − 3| > 2)
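As in the examples of Section 5.4, the probabilities in Exercise 6 can be checked numerically, either by converting to standard units and using pnorm or by supplying the mean and standard deviation directly. A sketch for parts (a) and (b); the remaining parts follow the same pattern:

> pnorm(0, mean = -5, sd = 10)      # part (a): P(X < 0)
> pnorm(0.5)                        # the same probability, after conversion to standard units
> 1 - pnorm(5, mean = -5, sd = 10)  # part (b): P(X > 5)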
  • 147. Chapter 6 Quantifying Population Attributes The distribution of a random variable is a mathematical abstraction of the possible outcomes of an experiment. Indeed, having identified a random variable of interest, we will often refer to its distribution as the population. If one’s goal is to represent an entire population, then one can hardly do better than to display its entire probability mass or density function. Usually, however, one is interested in specific attributes of a population. This is true if only because it is through specific attributes that one comprehends the entire population, but it is also easier to draw inferences about a specific population attribute than about the entire population. Accordingly, this chapter examines several population attributes that are useful in statistics. We will be especially concerned with measures of centrality and mea- sures of dispersion. The former provide quantitative characterizations of where the “middle” of a population is located; the latter provide quanti- tative characterizations of how widely the population is spread. We have already introduced one important measure of centrality, the expected value of a random variable (the population mean, µ), and one important measure of dispersion, the standard deviation of a random variable (the population standard deviation, σ). This chapter discusses these measures in greater depth and introduces other, complementary measures. 6.1 Symmetry We begin by considering the following question: 145
Where is the "middle" of a normal distribution?

It is quite evident from Figure 5.7 that there is only one plausible answer to this question: if X ∼ Normal(µ, σ²), then the "middle" of the distribution of X is µ. Let f denote the pdf of X. To understand why µ is the only plausible middle of f, recall a property of f that we noted in Section 5.4: for any x, f(µ + x) = f(µ − x). This property states that f is symmetric about µ. It is the property of symmetry that restricts the plausible locations of "middle" to the central value µ.

To generalize the above example of a measure of centrality, we introduce an important qualitative property that a population may or may not possess:

Definition 6.1 Let X be a continuous random variable with probability density function f. If there exists a value θ ∈ ℜ such that f(θ + x) = f(θ − x) for every x ∈ ℜ, then X is a symmetric random variable and θ is its center of symmetry.

We have already noted that X ∼ Normal(µ, σ²) has center of symmetry µ. Another example of symmetry is illustrated in Figure 6.1: X ∼ Uniform[a, b] has center of symmetry (a + b)/2.

Figure 6.1: X ∼ Uniform[a, b] has center of symmetry (a + b)/2.

For symmetric random variables, the center of symmetry is the only plausible measure of centrality—of where the "middle" of the distribution is located. Symmetry will play an important role in our study of statistical
inference. Our primary concern will be with continuous random variables, but the concept of symmetry can be used with other random variables as well. Here is a general definition:

Definition 6.2 Let X be a random variable. If there exists a value θ ∈ ℜ such that the random variables X − θ and θ − X have the same distribution, then X is a symmetric random variable and θ is its center of symmetry.

Suppose that we attempt to compute the expected value of a symmetric random variable X with center of symmetry θ. Thinking of the expected value as a weighted average, we see that each θ + x will be weighted precisely as much as the corresponding θ − x. Thus, if the expected value exists (there are a few pathological random variables for which the expected value is undefined), then it must equal the center of symmetry, i.e., EX = θ. Of course, we have already seen that this is the case for X ∼ Normal(µ, σ²) and for X ∼ Uniform[a, b].

6.2 Quantiles

In this section we introduce population quantities that can be used for a variety of purposes. As in Section 6.1, these quantities are most easily understood in the case of continuous random variables:

Definition 6.3 Let X be a continuous random variable and let α ∈ (0, 1). If q = q(X; α) is such that P(X < q) = α and P(X > q) = 1 − α, then q is called an α quantile of X.

If we express the probabilities in Definition 6.3 as percentages, then we see that q is the 100α percentile of the distribution of X.

Example 6.1 Suppose that X ∼ Uniform[a, b] has pdf f, depicted in Figure 6.2. Then q is the value in (a, b) for which

α = P(X < q) = Area[a,q](f) = (q − a) · 1/(b − a),

i.e., q = a + α(b − a). This expression is easily interpreted: to the lower endpoint a, add 100α% of the distance b − a to obtain the 100α percentile.
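The R function qunif computes quantiles of uniform distributions, so the formula in Example 6.1 is easy to check numerically. For instance, the α = 0.95 quantile of Uniform[1, 5] should be 1 + 0.95(5 − 1) = 4.8:

> qunif(.95,min=1,max=5)
[1] 4.8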
[Figure 6.2: A quantile of a Uniform distribution.]

Example 6.2 Suppose that X has pdf

f(x) = x/2 for x ∈ [0, 2], and f(x) = 0 otherwise,

depicted in Figure 6.3. Then q is the value in (0, 2) for which

α = P(X < q) = Area[0,q](f) = (1/2) · (q − 0) · (q/2 − 0) = q²/4,

i.e., q = 2√α.

[Figure 6.3: A quantile of another distribution.]

Example 6.3 Suppose that X ∼ Normal(0, 1) has cdf Φ. Then q is the value in (−∞, ∞) for which α = P(X < q) = Φ(q), i.e., q = Φ⁻¹(α). Unlike the previous examples, we cannot compute q by elementary calculations. Fortunately, the R function qnorm computes quantiles of normal distributions. For example, we compute the α = 0.95 quantile of X as follows:

> qnorm(.95)
[1] 1.644854

Example 6.4 Suppose that X has pdf

f(x) = 1/2 for x ∈ [0, 1] ∪ [2, 3], and f(x) = 0 otherwise,

depicted in Figure 6.4. Notice that P(X ∈ [0, 1]) = 0.5 and P(X ∈ [2, 3]) = 0.5. If α ∈ (0, 0.5), then we can use the same reasoning that we employed in Example 6.1 to deduce that q = 2α. Similarly, if α ∈ (0.5, 1), then q = 2 + 2(α − 0.5) = 2α + 1. However, if α = 0.5, then we encounter an ambiguity: the equalities P(X < q) = 0.5 and P(X > q) = 0.5 hold for any q ∈ [1, 2]. Accordingly, any q ∈ [1, 2] is an α = 0.5 quantile of X. Thus, quantiles are not always unique.

[Figure 6.4: A distribution for which the α = 0.5 quantile is not unique.]

To avoid confusion when a quantile is not unique, it is nice to have a convention for selecting one of the possible quantile values. In the case that α = 0.5, there is a universal convention:

Definition 6.4 The midpoint of the interval of all values of the α = 0.5 quantile is called the population median.
In Example 6.4, the population median is q = 1.5.

Working with the quantiles of a continuous random variable X is straightforward because P(X = q) = 0 for any choice of q. This means that P(X < q) + P(X > q) = 1; hence, if P(X < q) = α, then P(X > q) = 1 − α. Furthermore, it is always possible to find a q for which P(X < q) = α. This is not the case if X is discrete.

Example 6.5 Let X be a discrete random variable that assumes values in the set {1, 2, 3} with probabilities p(1) = 0.4, p(2) = 0.4, and p(3) = 0.2. What is the median of X?

Imagine accumulating probability as we move from −∞ to ∞. At what point do we find that we have acquired half of the total probability? The answer is that we pass from having 40% of the probability to having 80% of the probability as we occupy the point q = 2. It makes sense to declare this value to be the median of X.

Here is another argument that appeals to Definition 6.3. If q < 2, then P(X > q) = 0.6 > 0.5. Hence, it would seem that the population median should not be less than 2. Similarly, if q > 2, then P(X < q) = 0.8 > 0.5. Hence, it would seem that the population median should not be greater than 2. We conclude that the population median should equal 2. But notice that P(X < 2) = 0.4 < 0.5 and P(X > 2) = 0.2 < 0.5! We conclude that Definition 6.3 will not suffice for discrete random variables. However, we can generalize the reasoning that we have just employed as follows:

Definition 6.5 Let X be a random variable and let α ∈ (0, 1). If q = q(X; α) is such that P(X < q) ≤ α and P(X > q) ≤ 1 − α, then q is called an α quantile of X.

The remainder of this section describes how quantiles are often used to measure centrality and dispersion. The following three quantiles will be of particular interest:

Definition 6.6 Let X be a random variable. The first, second, and third quartiles of X, denoted q1(X), q2(X), and q3(X), are the α = 0.25, α = 0.50, and α = 0.75 quantiles of X. The second quartile is also called the median of X.
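For a discrete random variable with known pmf, one can find quantiles satisfying Definition 6.5 by searching the support directly. Here is a minimal sketch of such a search (the function q.discrete is ours, not part of R; when several values satisfy the definition, it returns the smallest), applied to the pmf of Example 6.5:

> q.discrete <- function(vals,probs,alpha) {
+   # return the smallest support value q with P(X < q) <= alpha
+   # and P(X > q) <= 1 - alpha
+   for (q in vals) {
+     if (sum(probs[vals < q]) <= alpha & sum(probs[vals > q]) <= 1-alpha) return(q)
+   }
+ }
> q.discrete(c(1,2,3),c(.4,.4,.2),alpha=.5)
[1] 2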
6.2.1 The Median of a Population

If X is a symmetric random variable with center of symmetry θ, then

P(X < θ) = P(X > θ) = (1 − P(X = θ))/2 ≤ 1/2

and q2(X) = θ. Even if X is not symmetric, the median of X is an excellent way to define the "middle" of the population. Many statistical procedures use the median as a measure of centrality.

Example 6.6 One useful property of the median is that it is rather insensitive to the influence of extreme values that occur with small probability. For example, let Xk denote a discrete random variable that assumes values in {−1, 0, 1, 10^k} for k = 1, 2, 3, . . .. Suppose that Xk has the following pmf:

x      pk(x)
−1     0.19
0      0.60
1      0.19
10^k   0.02

Most of the probability (98%) is concentrated on the values {−1, 0, 1}. This probability is centered at x = 0. A small amount of probability is concentrated at a large value, x = 10, 100, 1000, . . .. If we want to treat these large values as aberrations (perhaps our experiment produces a physically meaningful value x ∈ {−1, 0, 1} with probability 0.98, but our equipment malfunctions and produces a physically meaningless value x = 10^k with probability 0.02), then we might prefer to declare that x = 0 is the central value of Xk. In fact, no matter how large we choose k, the median refuses to be distracted by the aberrant value: P(Xk < 0) = 0.19 and P(Xk > 0) = 0.21, so the median of Xk is q2(Xk) = 0.

6.2.2 The Interquartile Range of a Population

Now we turn our attention from the problem of measuring centrality to the problem of measuring dispersion. Can we use quantiles to quantify how widely spread are the values of a random variable? A natural approach is to choose two values of α and compute the corresponding quantiles. The distance between these quantiles is a measure of dispersion.
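A small simulation makes both ideas concrete using the distribution of Example 6.6 (a sketch only; the exact results vary from run to run). However large the aberrant value 10^k, both the sample median and the distance between the 0.25 and 0.75 sample quantiles ignore it:

> k <- 6     # the aberrant value is 10^6
> x <- sample(c(-1,0,1,10^k),size=10000,replace=TRUE,prob=c(.19,.60,.19,.02))
> median(x)                            # typically 0, whatever the value of k
> quantile(x,.75) - quantile(x,.25)    # typically 0 as well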
To avoid comparing apples and oranges, let us agree on which two values of α we will choose. Statisticians have developed a preference for α = 0.25 and α = 0.75, in which case the corresponding quantiles are the first and third quartiles.

Definition 6.7 Let X be a random variable with first and third quartiles q1 and q3. The interquartile range of X is the quantity iqr(X) = q3 − q1.

If X is a continuous random variable, then P(q1 < X < q3) = 0.5, so the interquartile range is the length of the interval (q1, q3) on which the central 50% of the probability is concentrated.

Like the median, the interquartile range is rather insensitive to the influence of extreme values that occur with small probability. In Example 6.6, the central 50% of the probability is concentrated on the single value x = 0. Hence, the interquartile range is 0 − 0 = 0, regardless of where the aberrant 2% of the probability is located.

6.3 The Method of Least Squares

Let us return to the case of a symmetric random variable X, in which case the "middle" of the distribution is unambiguously the center of symmetry θ. Given this measure of centrality, how might we construct a measure of dispersion? One possibility is to measure how far a "typical" value of X lies from its central value, i.e., to compute E|X − θ|. This possibility leads to several remarkably fertile approaches to describing both dispersion and centrality.

Given a designated central value c and another value x, we say that the absolute deviation of x from c is |x − c| and that the squared deviation of x from c is (x − c)². The magnitude of a typical absolute deviation is E|X − c| and the magnitude of a typical squared deviation is E(X − c)². A natural approach to measuring centrality is to choose a value of c that typically results in small deviations, i.e., to choose c either to minimize E|X − c| or to minimize E(X − c)². The second possibility is a simple example of the method of least squares.

Measuring centrality by minimizing the magnitude of a typical absolute or squared deviation results in two familiar quantities:

Theorem 6.1 Let X be a random variable with population median q2 and population mean µ = EX. Then
1. The value of c that minimizes E|X − c| is c = q2.

2. The value of c that minimizes E(X − c)² is c = µ.

It follows that medians are naturally associated with absolute deviations and that means are naturally associated with squared deviations. Having discussed the former in Section 6.2.1, we now turn to the latter.

6.3.1 The Mean of a Population

Imagine creating a physical model of a probability distribution by distributing weights along the length of a board. The locations of the weights are the values of the random variable and the weights represent the probabilities of those values. After gluing the weights in place, we position the board atop a fulcrum. How must the fulcrum be positioned in order that the board be perfectly balanced? It turns out that one should position the fulcrum at the mean of the probability distribution. For this reason, the expected value of a random variable is sometimes called its center of mass. Thus, like the population median, the population mean has an appealing interpretation that commends its use as a measure of centrality.

If X is a symmetric random variable with center of symmetry θ, then µ = EX = θ and q2 = q2(X) = θ, so the population mean and the population median agree. In general, this is not the case. If X is not symmetric, then one should think carefully about whether one is interested in the population mean or the population median. Of course, computing both measures and examining the discrepancy between them may be highly instructive. In particular, if EX ≠ q2(X), then X is not a symmetric random variable.

In Section 6.2.1 we noted that the median is rather insensitive to the influence of extreme values that occur with small probability. The mean lacks this property. In Example 6.6,

EXk = −0.19 + 0.00 + 0.19 + 10^k · 0.02 = 2 · 10^(k−2),

which equals 0.2 if k = 1, 2 if k = 2, 20 if k = 3, 200 if k = 4, and so on. No matter how reluctantly, the population mean follows the aberrant value toward infinity as k increases.

6.3.2 The Standard Deviation of a Population

Suppose that X is a random variable with EX = µ and Var X = σ². If we adopt the method of least squares, then we obtain c = µ as our measure
of centrality, in which case the magnitude of a typical squared deviation is E(X − µ)² = σ², the population variance. The variance measures dispersion in squared units. For example, if X measures length in meters, then Var X is measured in meters squared. If, as in Section 6.2.2, we prefer to measure dispersion in the original units of measurement, then we must take the square root of the variance. Accordingly, we will emphasize the population standard deviation, σ, as a measure of dispersion.

Just as it is natural to use the median and the interquartile range together, so is it natural to use the mean and the standard deviation together. In the case of a symmetric random variable, the median and the mean agree. However, the interquartile range and the standard deviation measure dispersion in two fundamentally different ways. To gain insight into their relation to each other, suppose that X ∼ Normal(0, 1), in which case the population standard deviation is σ = 1. We use R to compute iqr(X):

> qnorm(.75)-qnorm(.25)
[1] 1.348980

We have derived a useful fact: the interquartile range of a normal random variable is approximately 1.35 standard deviations. If we encounter a random variable for which this is not the case, then that random variable is not normally distributed.

Like the mean, the standard deviation is sensitive to the influence of extreme values that occur with small probability. Consider Example 6.6. The variance of Xk is

σ²k = EX²k − (EXk)²
    = (0.19 + 0.00 + 0.19 + 100^k · 0.02) − (2 · 10^(k−2))²
    = 0.38 + 2 · 100^(k−1) − 4 · 100^(k−2)
    = 0.38 + 196 · 100^(k−2),

so σ1 = √2.34, σ2 = √196.38, σ3 = √19600.38, and so on. The population standard deviation tends toward infinity as the aberrant value tends toward infinity.

6.4 Exercises

1. Refer to the random variable X defined in Exercise 2 of Chapter 5. Compute the following two quantities: q2(X), the population median; and iqr(X), the population interquartile range.
2. Consider the function g : ℜ → ℜ defined by

   g(x) = 0 for x < 0,
   g(x) = x for x ∈ [0, 1],
   g(x) = 1 for x ∈ [1, 2],
   g(x) = 3 − x for x ∈ [2, 3],
   g(x) = 0 for x > 3.

   Let f(x) = cg(x), where c is an undetermined constant.

   (a) For what value of c is f a probability density function?
   (b) Suppose that a continuous random variable X has probability density function f. Compute P(1.5 < X < 2.5).
   (c) Compute EX.
   (d) Let F denote the cumulative distribution function of X. Compute F(1).
   (e) Determine the 0.90 quantile of f.

3. Suppose that X is a continuous random variable with probability density function

   f(x) = 0 for x < 0,
   f(x) = x for x ∈ (0, 1),
   f(x) = (3 − x)/4 for x ∈ (1, 3),
   f(x) = 0 for x > 3.

   (a) Compute q2(X), the population median.
   (b) Which is greater, q2(X) or EX? Explain your reasoning.
   (c) Compute P(0.5 < X < 1.5).
   (d) Compute iqr(X), the population interquartile range.

4. Consider the dart-throwing experiment described in Exercise 5.6.5 and compute the following quantities:

   (a) q2(X)
   (b) q2(Y)
   (c) iqr(X)
   (d) iqr(Y)
5. Lynn claims that Lulu is the cutest dog in the world. Slightly more circumspect, Michael allows that Lulu is "one in a million." Seizing the opportunity to revel in Lulu's charm, Lynn devises a procedure for measuring CCQ (canine cuteness quotient), which she calibrates so that CCQ ∼ Normal(100, 400). Assuming that Michael is correct, what is Lulu's CCQ score?

6. A random variable X ∼ Uniform(5, 15) has population mean µ = EX = 10 and population variance σ² = Var X = (15 − 5)²/12 = 25/3. Let Y denote a normal random variable with the same mean and variance.

   (a) Consider X. What is the ratio of its interquartile range to its standard deviation?
   (b) Consider Y. What is the ratio of its interquartile range to its standard deviation?

7. Identify each of the following statements as True or False. Briefly explain each of your answers.

   (a) For every symmetric random variable X, the median of X equals the average of the first and third quartiles of X.
   (b) For every random variable X, the interquartile range of X is greater than the standard deviation of X.
   (c) For every random variable X, the expected value of X lies between the first and third quartiles of X.
   (d) If the standard deviation of a random variable equals zero, then so does its interquartile range.
   (e) If the median of a random variable equals its expected value, then the random variable is symmetric.

8. For each of the following random variables, discuss whether the median or the mean would be a more useful measure of centrality:

   (a) The annual income of U.S. households.
   (b) The lifetime of 75-watt light bulbs.

9. The R function qbinom returns quantiles of the binomial distribution. For example, quartiles of X ∼ Binomial(n = 3; p = 0.5) can be computed as follows:
   > alpha <- c(.25,.5,.75)
   > qbinom(alpha,size=3,prob=.5)
   [1] 1 1 2

   Notice that X is a symmetric random variable with center of symmetry θ = 1.5, but qbinom computes q2(X) = 1. This reveals that R may produce unexpected results when it computes the quantiles of discrete random variables. By experimenting with various choices of n and p, try to discover a rule according to which qbinom computes quartiles of the binomial distribution.
Chapter 7

Data

Chapters 3–6 developed mathematical tools for studying populations. Experiments are performed for the purpose of obtaining information about a population that is imperfectly understood. Experiments produce data, the raw material from which statistical procedures draw inferences about the population under investigation.

The probability distribution of a random variable X is a mathematical abstraction of an experimental procedure for sampling from a population. When we perform the experiment, we observe one of the possible values of X. To distinguish an observed value of a random variable from the random variable itself, we designate random variables by uppercase letters and observed values by corresponding lowercase letters.

Example 7.1 A coin is tossed and Heads is observed. The mathematical abstraction of this experiment is X ∼ Bernoulli(p) and the observed value of X is x = 1.

We will be concerned with experiments that are replicated a fixed number of times. By replication, we mean that each repetition of the experiment is performed under identical conditions and that the repetitions are mutually independent. Mathematically, we write X1, . . . , Xn ∼ P. Let xi denote the observed value of Xi. The set of observed values, ~x = {x1, . . . , xn}, is called a sample.

This chapter introduces several useful techniques for extracting information from samples. This information will be used to draw inferences about populations (for example, to guess the value of the population mean) and to assess assumptions about populations (for example, to decide whether
or not the population can plausibly be modelled by a normal distribution). Drawing inferences about population attributes (especially means) is the primary subject of subsequent chapters, which will describe specific procedures for drawing specific types of inferences. However, deciding which procedure is appropriate often involves assessing the validity of certain statistical assumptions. The methods described in this chapter will be our primary tools for making such assessments.

To assess whether or not an assumption is plausible, one must be able to investigate what happens when the assumption holds. For example, if a scientist needs to decide whether or not it is plausible that her sample was drawn from a normal distribution, then she needs to be able to recognize normally distributed data. For this reason, the samples studied in this chapter were generated under carefully controlled conditions, by computer simulation. This allows us to investigate how samples drawn from specified distributions should behave, thereby providing a standard against which to compare experimental data for which the true distribution can never be known. Fortunately, R provides several convenient functions for simulating random sampling.

Example 7.2 Consider the experiment of tossing a fair die n = 20 times. We can simulate this experiment as follows:

> SampleSpace <- c(1,2,3,4,5,6)
> sample(x=SampleSpace,size=20,replace=T)
[1] 1 6 3 2 2 3 5 3 6 4 3 2 5 3 2 2 3 2 4 2

Example 7.3 Consider the experiment of drawing a sample of size n = 5 from Normal(2, 3). We can simulate this experiment as follows:

> rnorm(5,mean=2,sd=sqrt(3))
[1] 1.3274812 0.5901923 2.5881013 1.2222812 3.4748139

7.1 The Plug-In Principle

We will employ a general methodology for relating samples to populations. In Chapters 3–6 we developed a formidable apparatus for studying populations (probability distributions). We would like to exploit this apparatus fully. Given a sample, we will pretend that the sample is a finite population (discrete probability distribution) and then we will use methods for studying
finite populations to learn about the sample. This approach is sometimes called the Plug-In Principle.

The Plug-In Principle employs a fundamental construction:

Definition 7.1 Let ~x = (x1, . . . , xn) be a sample. The empirical probability distribution associated with ~x, denoted P̂n, is the discrete probability distribution defined by assigning probability 1/n to each {xi}.

Notice that, if a sample contains several copies of the same numerical value, then each copy is assigned probability 1/n. This is illustrated in the following example.

Example 7.2 (continued) A fair die is rolled n = 20 times, resulting in the sample

~x = {1, 6, 3, 2, 2, 3, 5, 3, 6, 4, 3, 2, 5, 3, 2, 2, 3, 2, 4, 2}.   (7.1)

The empirical distribution P̂20 is the discrete distribution that assigns the following probabilities:

xi   #{xi}   P̂20({xi})
1    1       0.05
2    7       0.35
3    6       0.30
4    2       0.10
5    2       0.10
6    2       0.10

Notice that, although the true probabilities are P({xi}) = 1/6, the empirical probabilities range from 0.05 to 0.35. The fact that P̂20 differs from P is an example of sampling variation. Statistical inference is concerned with determining what the empirical distribution (the sample) tells us about the true distribution (the population).

The empirical distribution, P̂n, is an intuitively appealing approximation of the actual probability distribution, P, from which the sample was drawn. Notice that the empirical probability of any event A is just

P̂n(A) = #{xi ∈ A} · (1/n),
the observed frequency with which A occurs in the sample. Because the empirical distribution is an authentic probability distribution, all of the methods that we developed for studying (discrete) distributions are available for studying samples. For example,

Definition 7.2 The empirical cdf, usually denoted F̂n, is the cdf associated with P̂n, i.e.,

F̂n(y) = P̂n(X ≤ y) = #{xi ≤ y} / n.

The empirical cdf of sample (7.1) is graphed in Figure 7.1.

[Figure 7.1: An empirical cdf.]

In R, one can graph the empirical cdf of a sample x with the following command:

> plot.ecdf(x)
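The empirical probabilities themselves are just as easy to tabulate. If the sample is stored in the vector x, then the following command returns, for each distinct observed value, the proportion of observations equal to it; applied to sample (7.1), it reproduces the probabilities P̂20({xi}) listed above:

> table(x)/length(x)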
7.2 Plug-In Estimates of Mean and Variance

Population quantities defined by expected values are easily estimated by the plug-in principle. For example, suppose that X1, . . . , Xn ∼ P and that we observe a sample ~x = {x1, . . . , xn}. Let µ = EXi denote the population mean. Then

Definition 7.3 The plug-in estimate of µ, denoted µ̂n, is the mean of the empirical distribution:

µ̂n = Σ_{i=1}^n xi · (1/n) = (1/n) Σ_{i=1}^n xi = x̄n.

This quantity is called the sample mean.

The variance can be estimated in the same way. Let σ² = Var Xi denote the population variance; then

Definition 7.4 The plug-in estimate of σ², denoted σ̂²n, is the variance of the empirical distribution:

σ̂²n = Σ_{i=1}^n (xi − µ̂n)² · (1/n) = (1/n) Σ_{i=1}^n (xi − x̄n)² = (1/n) Σ_{i=1}^n xi² − ((1/n) Σ_{i=1}^n xi)².

Notice that we do not refer to σ̂²n as the sample variance. As will be discussed in Section 9.2.2, most authors designate another, equally plausible estimate of the population variance as the sample variance.
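That other estimate is what R's built-in function var computes: it divides by n − 1 rather than by n. If the plug-in estimate is wanted, one can rescale (a one-line sketch, assuming the sample is stored in the vector x):

> n <- length(x)
> plug.var <- var(x)*(n-1)/n   # convert R's sample variance to the plug-in estimate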
Example 7.2 (continued) The population variance is

σ² = EXi² − (EXi)² = (1² + 2² + 3² + 4² + 5² + 6²)/6 − 3.5² = 35/12 ≈ 2.9167.

The plug-in estimate of the variance is

σ̂²20 = (1² × 0.05 + 2² × 0.35 + 3² × 0.30 + 4² × 0.10 + 5² × 0.10 + 6² × 0.10) − 3.15² = 1.9275.

Again, notice that σ̂²20 ≠ σ², yet another example of sampling variation.

There are many ways to compute the preceding plug-in estimates using R. Assuming that x contains the sample, here are two possibilities:

> n <- length(x)
> plug.mean <- sum(x)/n
> plug.var <- sum(x^2)/n - plug.mean^2

> plug.mean <- mean(x)
> plug.var <- mean(x^2) - plug.mean^2

7.3 Plug-In Estimates of Quantiles

Population quantities defined by quantiles can also be estimated by the plug-in principle. Again, suppose that X1, . . . , Xn ∼ P and that we observe a sample ~x = {x1, . . . , xn}. Then

Definition 7.5 The plug-in estimate of a population quantile is the corresponding quantile of the empirical distribution. In particular, the sample median is the median of the empirical distribution. The sample interquartile range is the interquartile range of the empirical distribution.

Example 7.4 Consider the experiment of drawing a sample of size n = 20 from Uniform(1, 5). This probability distribution has a population median of 3 and a population interquartile range of 4 − 2 = 2. I simulated this experiment (and listed the sample in increasing order) with the following R command:

> x <- sort(runif(20,min=1,max=5))
This resulted in the following sample:

1.124600 1.161286 1.445538 1.828181 1.853359
1.934939 1.943951 2.107977 2.372500 2.448152
2.708874 3.297806 3.418913 3.437485 3.474940
3.698471 3.740666 4.039637 4.073617 4.195613

The sample median is

(2.448152 + 2.708874)/2 = 2.578513,

which also can be computed with the following R command:

> median(x)
[1] 2.578513

Notice that the sample median does not exactly equal the population median. This is another example of sampling variation.

To compute the sample interquartile range, we require the first and third sample quartiles, i.e., the α = 0.25 and α = 0.75 sample quantiles. We must now confront the fact that Definition 6.5 may not specify unique quantile values. For the empirical distribution of the sample above, any number in [1.853359, 1.934939] is a sample first quartile and any number in [3.474940, 3.698471] is a sample third quartile.

The statistical community has not agreed on a convention for resolving the ambiguity in the definition of quartiles. One natural and popular possibility is to use the central value in each interval of possible quartiles. If we adopt that convention here, then the sample interquartile range is

(3.474940 + 3.698471)/2 − (1.853359 + 1.934939)/2 = 1.692556.

R adopts a slightly different convention, illustrated below. The following command computes the 0.25 and 0.75 quantiles:

> quantile(x,probs=c(.25,.75))
     25%      75%
1.914544 3.530823

The following command computes several useful sample quantities:

> summary(x)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
1.124600 1.914544 2.578513 2.715325 3.530823 4.195613
If we use the R definition of quantile, then the sample interquartile range is 3.530823 − 1.914544 = 1.616279. Rather than typing the quartiles into R, we can compute the sample interquartile range as follows:

> q <- as.vector(quantile(x,probs=c(.25,.75)))
> q[2]-q[1]
[1] 1.616279

This is sufficiently complicated that we might prefer to create a function that computes the interquartile range of a sample:

> iqr <- function(x) {
+   q <- as.vector(quantile(x,probs=c(.25,.75)))
+   return(q[2]-q[1])
+ }
> iqr(x)
[1] 1.616279

Notice that the sample quantities do not exactly equal the population quantities that they estimate, regardless of which convention we adopt for defining quartiles. This is another example of sampling variation.

Used judiciously, sample quantiles can be extremely useful when trying to discern various features of the population from which the sample was drawn. The remainder of this section describes two graphical techniques for assimilating and displaying sample quantile information.

7.3.1 Box Plots

Information about sample quartiles is often displayed visually, in the form of a box plot. A box plot of a sample consists of a rectangle that extends from the first to the third sample quartile, thereby drawing attention to the central 50% of the data. Thus, the length of the rectangle equals the sample interquartile range. The location of the sample median is also identified, and its location within the rectangle often provides insight into whether or not the population from which the sample was drawn is symmetric. Whiskers extend from the ends of the rectangle, either to the extreme values of the data or to 1.5 times the sample interquartile range, whichever is less. Values that lie beyond the whiskers are called outliers and are individually identified.
[Figure 7.2: A box plot of a sample from χ²(3).]

Example 7.5 The pdf of the asymmetric distribution χ²(3) was graphed in Figure 5.8. The following R commands draw a random sample of n = 100 observed values from this population, then construct a box plot of the sample:

> x <- rchisq(100,df=3)
> boxplot(x)

An example of a box plot produced by these commands is displayed in Figure 7.2. In this box plot, the numerical values in the sample are represented by the vertical axis.

The third quartile of the box plot in Figure 7.2 is farther above the median than the first quartile is below it. The short lower whisker extends
from the first quartile to the minimal value in the sample, whereas the long upper whisker extends 1.5 interquartile ranges beyond the third quartile. Furthermore, there are 4 outliers beyond the upper whisker. Once we learn to discern these key features of the box plot, we can easily recognize that the population from which the sample was drawn is not symmetric.

The frequency of outliers in a sample often provides useful diagnostic information. Recall that, in Section 6.3, we computed that the interquartile range of a normal distribution is 1.34898 standard deviations. A value is an outlier if it lies more than

z = 1.34898/2 + 1.5 · 1.34898 = 2.69796

standard deviations from the mean. Hence, the probability that an observation drawn from a normal distribution is an outlier is

> 2*pnorm(-2.69796)
[1] 0.006976582

and we would expect a sample drawn from a normal distribution to contain approximately 7 outliers per 1000 observations. A sample that contains a dramatically different proportion of outliers, as in Example 7.5, is not likely to have been drawn from a normal distribution.

Box plots are especially useful for comparing several populations.

Example 7.6 We drew samples of 100 observations from three normal populations: Normal(0, 1), Normal(2, 1), and Normal(1, 4). To attempt to discern in the samples the various differences in population mean and standard deviation, we examined side-by-side box plots. This was accomplished by the following R commands:

> z1 <- rnorm(100)
> z2 <- rnorm(100,mean=2,sd=1)
> z3 <- rnorm(100,mean=1,sd=2)
> boxplot(z1,z2,z3)

An example of the output of these commands is displayed in Figure 7.3.

[Figure 7.3: Box plots of samples from three normal distributions.]

7.3.2 Normal Probability Plots

Another powerful graphical technique that relies on quantiles is the quantile-quantile (QQ) plot, which plots the quantiles of one distribution against the quantiles of another. QQ plots are used to compare the shapes of two distributions, most commonly by plotting the observed quantiles of an empirical distribution against the corresponding quantiles of a theoretical normal distribution. In this case, a QQ plot is often called a normal probability plot. If the shape of the empirical distribution resembles a normal distribution, then the points in a normal probability plot should tend to fall on a straight line. If they do not, then we should be skeptical that the sample was drawn from a normal distribution. Extracting useful information from normal probability plots requires some practice, but the patient data analyst will be richly rewarded.

Example 7.5 (continued) A normal probability plot of the sample generated in Example 7.5 against a theoretical normal distribution is displayed in Figure 7.4. This plot was created using the following R command:

> qqnorm(x)

[Figure 7.4: A normal probability plot of a sample from χ²(3).]

Notice the systematic and asymmetric bending away from linearity in this plot. In particular, the smaller quantiles are much closer to the central values
than should be the case for a normal distribution. This suggests that this sample was drawn from a nonnormal distribution that is skewed to the right. Of course, we know that this sample was drawn from χ²(3), which is in fact skewed to the right.

When using normal probability plots, one must guard against overinterpreting slight departures from linearity. Remember: some departures from linearity will result from sampling variation. Consequently, before drawing definitive conclusions, the wise data analyst will generate several random samples from the theoretical distribution of interest in order to learn how much sampling variation is to be expected. Before dismissing the possibility that the sample in Example 7.5 was drawn from a normal distribution, one should generate several normal samples of the same size for comparison. The normal probability plots of four such samples are displayed in Figure 7.5. In none of these plots did the points fall exactly on a straight line. However, upon comparing the normal probability plot in Figure 7.4 to the normal probability plots in Figure 7.5, it is abundantly clear that the sample in Example 7.5 was not drawn from a normal distribution.
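Comparison plots like those in Figure 7.5 are easily produced. One possible sketch (the 2-by-2 layout and the sample size n = 100 mirror the figure; the plots obtained will of course vary from run to run) is:

> par(mfrow=c(2,2))
> for (i in 1:4) qqnorm(rnorm(100))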
[Figure 7.5: Normal probability plots of four samples from Normal(0, 1).]

7.4 Kernel Density Estimates

Suppose that ~x = {x1, . . . , xn} is a sample drawn from an unknown pdf f. Box plots and normal probability plots are extremely useful graphical techniques for discerning in ~x certain important attributes of f, e.g., centrality, dispersion, asymmetry, nonnormality. To discern more subtle features of f, we now ask if it is possible to reconstruct from ~x a pdf f̂n that approximates f. This is a difficult problem, one that remains a vibrant topic of research and about which little is said in introductory courses. However, using the concept of the empirical distribution, one can easily motivate one of the most popular techniques for nonparametric probability density estimation.

The logic of the empirical distribution is this: by assigning probability 1/n to each xi, one accumulates more probability in regions that produced more observed values. However, because the entire amount 1/n is placed exactly on the value xi, the resulting empirical distribution is necessarily discrete. If the population from which the sample was drawn is discrete, then the empirical distribution estimates the probability mass function. However,
if the population from which the sample was drawn is continuous, then all possible values occur with zero probability. In this case, there is nothing special about the precise values that were observed—what is important are the regions in which they occurred. Instead of placing all of the probability 1/n assigned to xi exactly on the value xi, we now imagine distributing it in a neighborhood of xi according to some probability density function. This construction will also result in more probability accumulating in regions that produced more values, but it will produce a pdf instead of a pmf.

Here is a general description of this approach, usually called kernel density estimation:

1. Choose a probability density function K, the kernel. Typically, K is a symmetric pdf centered at the origin. Common choices of K include the Normal(0, 1) and Uniform[−0.5, 0.5] pdfs.

2. At each xi, center a rescaled copy of the kernel. This pdf,

   (1/h) K((x − xi)/h),   (7.2)

   will control the distribution of the 1/n probability assigned to xi. The parameter h is variously called the smoothing parameter, the window width, or the bandwidth.

3. The difficult decision in constructing a kernel density estimate is the choice of h. The technical details of this issue are beyond the scope of this book, but the underlying principles are quite simple:

   • Small values of h mean that the standard deviation of (7.2) will be small, so that the 1/n probability assigned to xi will be distributed close to xi. This is appropriate when n is large and the xi are tightly packed.

   • Large values of h mean that the standard deviation of (7.2) will be large, so that the 1/n probability assigned to xi will be widely distributed in the general vicinity of xi. This is appropriate when n is small and the xi are sparse.

4. After choosing K and h, the kernel density estimate of f is

   f̂n(x) = Σ_{i=1}^n (1/n) (1/h) K((x − xi)/h) = (1/(nh)) Σ_{i=1}^n K((x − xi)/h).

Such estimates are easily computed and graphed using the R functions density and plot.
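To make the construction concrete, here is a minimal sketch of a kernel density estimator written directly from the preceding formula, using the Normal(0, 1) kernel. The function name kde, its fixed bandwidth argument, and the evaluation grid are ours, chosen only for illustration; in practice one simply calls density, which also chooses h automatically.

> kde <- function(x,h,grid=seq(min(x)-3*h,max(x)+3*h,length.out=200)) {
+   # evaluate fhat(t) = (1/(n*h)) * sum_i K((t - x_i)/h) at each grid point t,
+   # where K is the Normal(0,1) pdf
+   n <- length(x)
+   fhat <- sapply(grid,function(t) sum(dnorm((t-x)/h))/(n*h))
+   return(list(x=grid,y=fhat))
+ }
> est <- kde(rchisq(100,df=3),h=0.5)
> plot(est$x,est$y,type="l")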
Example 7.7 Consider the probability density function f displayed in Figure 7.6. The most striking feature of f is that it is bimodal. Can we detect this feature using a sample drawn from f?

[Figure 7.6: A bimodal probability density function.]

We drew a sample of size n = 100 from f. A box plot and a normal probability plot of this sample are displayed in Figure 7.7. It is difficult to discern anything unusual from the box plot. The normal probability plot contains all of the information in the sample, but it is encoded in such a way that the feature of interest is not easily extracted. In contrast, the kernel density estimate displayed in Figure 7.8 clearly reveals that the sample was drawn from a bimodal population. After storing the sample in the vector x, this estimate was computed and plotted using the following R command:

> plot(density(x))
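The bandwidth h can also be specified explicitly if the automatic choice seems too rough or too smooth; for example (the value 0.5 is arbitrary, chosen only for illustration):

> plot(density(x,bw=0.5))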
[Figure 7.7: A box plot and a normal probability plot for Example 7.7.]

[Figure 7.8: A kernel density estimate for Example 7.7.]

7.5 Case Study: Are Forearm Lengths Normally Distributed?

Many of the inferential procedures that statisticians have developed assume that the data to be analyzed were drawn from one (or more) normal distributions. These procedures are often the most elegant and powerful methods available to the data analyst, but they can easily mislead when applied to nonnormal data. Conveniently, the normality assumption is often quite plausible; just as often, however, it is not. It is therefore essential that the data analyst be able to make informed decisions about whether or not to assume normality. One of our primary uses for the methods introduced in the present chapter will be to assist us in making such decisions.

Be warned: because we cannot know the true distributions of (most) random variables encountered in scientific experimentation, we cannot know whether or not they are in fact normally distributed. Such ignorance should humble and can intimidate, but it should not paralyze. To analyze data, one must proceed somehow, and it is best to do so with as much information as possible.

It is often (but not universally) the case that measurements of linear
dimension (height, length, width, depth, breadth, etc.) are normally distributed. To further illustrate the methods introduced in the present chapter, we apply them to a famous data set, measurements of forearm length made on n = 140 adult males, inquiring whether or not it appears plausible to assume that forearm lengths are normally distributed. These data, displayed in Table 7.1, were studied by K. Pearson and A. Lee¹ and subsequently reproduced as Data Set 139 in A Handbook of Small Data Sets.

¹ K. Pearson and A. Lee (1903). On the laws of inheritance in man. I. Inheritance of physical characters. Biometrika, 2:357–462.

Examining the numbers in Table 7.1, we note that the measurements were made with a precision of 0.1 inches and that many values occur several times. For example, 9 of the 140 men had forearms with a measured length of 18.5 inches. Because the probability that any two continuous random variables will be equal is zero, the existence of equal values in the sample should cause one to consider whether or not these measurements should be modelled as observed values of continuous random variables. In this case, it makes sense to proceed. Actual (as opposed to measured) forearm
length is surely continuous, and there are 47 distinct values in Table 7.1. To preserve important numerical relations, e.g., 19.5 − 18.5 = 2(18.5 − 18), we can accomplish far more with continuous random variables than we might with discrete random variables. We proceed to investigate the plausibility of assuming that the continuous random variables are normal random variables.

17.3 18.4 20.9 16.8 18.7 20.5 17.9 20.4 18.3 20.5
19.0 17.5 18.1 17.1 18.8 20.0 19.1 19.1 17.9 18.3
18.2 18.9 19.4 18.9 19.4 20.8 17.3 18.5 18.3 19.4
19.0 19.0 20.5 19.7 18.5 17.7 19.4 18.3 19.6 21.4
19.0 20.5 20.4 19.7 18.6 19.9 18.3 19.8 19.6 19.0
20.4 17.3 16.1 19.2 19.6 18.8 19.3 19.1 21.0 18.6
18.3 18.3 18.7 20.6 18.5 16.4 17.2 17.5 18.0 19.5
19.9 18.4 18.8 20.1 20.0 18.5 17.5 18.5 17.9 17.4
18.7 18.6 17.3 18.8 17.8 19.0 19.6 19.3 18.1 18.5
20.9 19.8 18.1 17.1 19.8 20.6 17.6 19.1 19.5 18.4
17.7 20.2 19.9 18.6 16.6 19.2 20.0 17.4 17.1 18.3
19.1 18.5 19.6 18.0 19.4 17.1 19.9 16.3 18.9 20.7
19.7 18.5 18.4 18.7 19.3 16.3 16.9 18.2 18.5 19.3
18.1 18.0 19.5 20.3 20.1 17.2 19.5 18.8 19.2 17.7

Table 7.1: Forearm lengths (in inches) of 140 adult males, studied by K. Pearson and A. Lee (1903).

Figure 7.9 displays a box plot, a normal probability plot, and a kernel density estimate, constructed from the 140 forearm measurements in Table 7.1 by the following R commands:

> par(mfrow=c(1,3))
> boxplot(forearms,main="Box Plot")
> qqnorm(forearms)
> plot(density(forearms),type="l",main="PDF Estimate")

[Figure 7.9: Three displays of 140 forearm measurements.]

Examining the box plot, we first note that the sample median lies roughly halfway between the first and third sample quartiles, and that the whiskers are of roughly equal length. This is precisely what we would expect to observe if the data were drawn from a symmetric distribution. We also note that these data contain no outliers. These features are consistent with the
possibility that these data were drawn from a normal distribution, but they do not preclude other symmetric distributions.

Both normal probability plots and kernel density estimates reveal far more about the data than do box plots. More information is generally desirable, but seeing too much creates the danger that patterns created by chance variation will be overinterpreted by the too-eager data analyst. Key to the proper use of normal probability plots and kernel density estimates is mature judgment about which features reflect on the population and which features are due to chance variation.

The normal probability plot of the forearm data is generally straight, but should we worry about the kink at the lower end? The kernel density estimate of the forearm data is unimodal and nearly symmetric, but should we be concerned by its apparent lack of inflection points at ±1 standard deviations? The best way to investigate such concerns is to generate pseudorandom normal samples, each of the same size as the observed sample (here n = 140), and consider what—if anything—distinguishes the observed sample from the normal samples. I generated three pseudorandom normal samples using the rnorm function. The four normal probability plots are displayed in Figure 7.10 and the four kernel density estimates are displayed in Figure 7.11. I am unable to advance a credible argument that the forearm sample looks any less normal than the three normal samples.
[Figure 7.10: Normal probability plots of the forearm data and three pseudorandom samples from Normal(0, 1).]

In addition to the admittedly subjective comparison of normal probability plots and kernel density estimates, it may be helpful to compare certain quantitative attributes of the sample to known quantitative attributes of normal distributions. In Section 6.3, for example, we noted that the ratio of population interquartile range to population standard deviation is 1.34898 ≈ 1.35 for a normal distribution. The analogous ratio of sample interquartile range to sample standard deviation can be quite helpful in deciding whether or not the sample was drawn from a normal distribution. It should be noted, however, that not all distributions with this ratio are normal; thus, although a ratio substantially different from 1.35 may suggest that the sample was not drawn from a normal distribution, a ratio close to 1.35 does not prove that it was.
[Figure 7.11: Kernel density estimates constructed from the forearm data and three pseudorandom samples from Normal(0, 1).]

To facilitate the calculation of iqr:sd ratios, we define an R function that performs the necessary operations. Here are the R commands that define our new function, iqrsd:

> iqrsd <- function(x) {
+   x.mean <- mean(x)
+   x.var <- mean(x^2)-x.mean^2
+   q <- as.vector(quantile(x,probs=c(.25,.75)))
+   x.iqr <- q[2]-q[1]
+   return(x.iqr/sqrt(x.var))
+ }
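With iqrsd in hand, the comparison described next can be carried out in a couple of lines (a sketch only; the ratios obtained will differ from run to run):

> ratios <- replicate(10,iqrsd(rnorm(140)))
> range(ratios)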
I generated 10 pseudorandom normal samples, each of size n = 140, using the rnorm function, then applied the new iqrsd function to each sample. The resulting ratios ranged from a minimum of 1.178 to a maximum of 1.545. The ratio for the forearm data is 1.344, so one can hardly object to assuming normality on the basis of this quantity.

Overall, the forearm data look about as normal as one could ever hope to encounter with actual experimental data. If one is hoping to use an inferential procedure that assumes normality, then this is the ideal case. Unfortunately, one rarely encounters situations in which one can so comfortably assume normality.

7.6 Exercises

1. The following independent samples were drawn from four populations:

   Sample 1   Sample 2   Sample 3   Sample 4
   5.098      4.627      3.021      7.390
   2.739      5.061      6.173      5.666
   2.146      2.787      7.602      6.616
   5.006      4.181      6.250      7.868
   4.016      3.617      1.875      2.428
   9.026      3.605      6.996      6.740
   4.965      6.036      4.850      7.605
   5.016      4.745      6.661      10.868
   6.195      2.340      6.360      1.739
   4.523      6.934      7.052      1.996

   (a) Use the boxplot function to create side-by-side box plots of these samples. Does it appear that these samples were all drawn from the same population? Why or why not?
   (b) Use the rnorm function to draw four independent samples, each of size n = 10, from one normal distribution. Examine box plots of these samples. Is it possible that Samples 1–4 were all drawn from the same normal distribution?

2. The following sample, ~x, was collected and sorted:

   0.246 0.327 0.423 0.425 0.434 0.530 0.583 0.613 0.641 1.054
   1.098 1.158 1.163 1.439 1.464 2.063 2.105 2.106 4.363 7.517
   (a) Graph the empirical cdf of ~x.
   (b) Calculate the plug-in estimates of the mean, the variance, the median, and the interquartile range.
   (c) Take the square root of the plug-in estimate of the variance and compare it to the plug-in estimate of the interquartile range. Do you think that ~x was drawn from a normal distribution? Why or why not?
   (d) Use the qqnorm function to create a normal probability plot. Do you think that ~x was drawn from a normal distribution? Why or why not?
   (e) Now consider the transformed sample ~y produced by replacing each xi with its natural logarithm. If ~x is stored in the vector x, then ~y can be computed by the following R command:

       > y <- log(x)

       Do you think that ~y was drawn from a normal distribution? Why or why not?

3. In January 2002, twelve students enrolled in Math 351 (Applied Statistics) at the College of William & Mary reported the following results for the experiment described in Exercise 1.5.2. (Two students reported more than one measurement, but only one measurement per student is reported here.)

   143 3/16   144 4/16   140 14/16   144 7/16    143 12/16   153 13/16
   119 10/16  143 1/16   143 14/16   144 3/16    144 7/16    148 3/16

   (a) Do these measurements appear to be a sample from a normal distribution? Why or why not?
   (b) Suggest possible explanations for the surprising amount of variation in these measurements.
   (c) Use these measurements to estimate the true length of the table. Justify your estimation procedure.

4. Forty-one students taking Math 351 (Applied Statistics) at the College of William & Mary were administered a test. The following test scores
were observed and sorted:

   90 90 89 88 85 85 84 82 82 82
   81 81 81 80 79 79 78 76 75 74
   72 71 70 66 65 63 62 62 61 59
   58 58 57 56 56 53 48 44 40 35
   33

   (a) Do these numbers appear to be a random sample from a normal distribution?
   (b) Does this list of numbers have any interesting anomalies?

5. Do the numbers in Table 1.1 (Michelson's measurements of the speed of light) appear to be a random sample from a normal distribution?

6. Consider a box that contains 10 tickets, labelled

   {1, 1, 1, 1, 2, 5, 5, 10, 10, 10}.

   From this box, I propose to draw (with replacement) n = 40 tickets. I am interested in the sum, Y, of the 40 ticket values that I draw. Write an R function named box.model that simulates this experiment, i.e., evaluating box.model is like observing a value, y, of the random variable Y.

7. Experiment with using R to generate simulated random samples of various sizes. Use the summary function to compute the quartiles of these samples. Try to discern the convention that this function uses to define sample quartiles.
  • 185. Chapter 8 Lots of Data Throughout Chapter 7 we emphasized that, because of sampling variation, the plug-in estimate of a population quantity rarely equals the actual value of the population quantity. The present chapter explores this phenomenon in greater depth. Suppose that X1, . . . , Xn ∼ P and that an experimental scientist wants to estimate the population mean, µ = EXi. To do so, she observes values x1, . . . , xn of X1, . . . , Xn, then computes x̄n = 1 n n X i=1 xi, the plug-in estimate of µ. Mathematically, this is equivalent to first defining a new random variable, X̄n = 1 n n X i=1 Xi, then observing the value x̄n of X̄n. The random variable X̄n is the average of the random variables X1, . . . , Xn. Both the random variable X̄n and the observed value x̄n are called the sample mean. This is potentially confusing, but the convention of using uppercase letters for random variables and low- ercase letters for observed values allows us to be clear about which concept we have in mind when we use the phrase “sample mean.” In this chapter, we study the behavior of X̄n. We begin with an example. Suppose that, unbeknownst to the scientist, P is the asymmetric probability distribution χ2(3), with pdf depicted in Figure 5.8. Because of Corollary 5.1, it follows that µ = 3. Hence, we can 183
  • 186. 184 CHAPTER 8. LOTS OF DATA assess the quality of the scientist’s estimates of µ by comparing the estimates to the correct value, µ = 3. We will use simulation to explore what might occur in this situation. First, consider drawing a small sample of n = 5 observations. Here is what happened when I performed that experiment three times: > x <- rchisq(5,df=3) > mean(x) [1] 3.650077 > x <- rchisq(5,df=3) > mean(x) [1] 2.963841 > x <- rchisq(5,df=3) > mean(x) [1] 2.063129 Due to sampling variation, the first estimate is too high, the second estimate is just about right, and the third estimate is too low. These results suggest that small samples may be unreliable. Of course, if we admit the possibility that small samples are unreliable, then it might be wise to perform the simulation more than three times! So, I performed the same simulation 1000 times, each time observing values of X1, . . . , X5 ∼ χ2(3) and then computing x̄5, the observed value of X̄5. To display the results, I applied the method described in Section 7.4 to the 1000 observed values of X̄5. This produced a kernel density estimate, displayed in Figure 8.1, of the pdf of X̄5. Notice the considerable variation in the observed values of X̄5. Next, consider drawing a moderate sample of n = 20 observations. I did this 1000 times, each time observing values of X1, . . . , X20 ∼ χ2(3) and then computing x̄20, the observed value of X̄20. From these 1000 observed values of X̄20, I constructed a kernel density estimate of the pdf of X̄20. This estimated pdf is displayed in Figure 8.2. Notice that the observed values of X̄20 tend to be more tightly clustered around µ = 3 than do the observed values of X̄5, suggesting that moderate samples are more reliable than small samples. Finally, consider drawing a large sample of n = 80 observations. I did this 1000 times, each time observing values of X1, . . . , X80 ∼ χ2(3) and then computing x̄80, the observed value of X̄80. From these 1000 observed values of X̄80, I constructed a kernel density estimate of the pdf of X̄80. This
• 187. 8.1. AVERAGING DECREASES VARIATION 185

[Figure: kernel density estimate; horizontal axis: observed value of sample mean (0 to 9); vertical axis: estimated density (0.0 to 1.4).]

Figure 8.1: Kernel density estimate constructed from 1000 observed values of X̄n for n = 5. X1, . . . , Xn ∼ χ²(3) and µ = EXi = 3.

estimated pdf is displayed in Figure 8.3. Notice that the observed values of X̄80 tend to be more tightly clustered around µ = 3 than do the observed values of X̄20, suggesting that large samples are more reliable than moderate samples.

The sections in this chapter generalize the preceding observations. We consider any experiment that can be performed, independently and identically, as many times as we please. We describe this situation by supposing the existence of a sequence of independent and identically distributed random variables, X1, X2, . . ., and we assume that these random variables have a finite mean µ = EXi and a finite variance σ² = Var Xi. Under these assumptions, we study the behavior of the sample mean, X̄n, as n increases.

8.1 Averaging Decreases Variation

By definition, EXi = µ. Thus, the population mean is the average value assumed by the random variable Xi. This statement is also true of the
• 188. 186 CHAPTER 8. LOTS OF DATA

[Figure: kernel density estimate; horizontal axis: observed value of sample mean (0 to 9); vertical axis: estimated density (0.0 to 1.4).]

Figure 8.2: Kernel density estimate constructed from 1000 observed values of X̄n for n = 20. X1, . . . , Xn ∼ χ²(3) and µ = EXi = 3.

sample mean:

E\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} EX_i = \frac{1}{n} \sum_{i=1}^{n} \mu = \mu;

however, there is a crucial distinction between Xi and X̄n. The tendency of a random variable to assume a value that is close to its expected value is quantified by computing its variance. By definition, Var Xi = σ², but

\mathrm{Var}\,\bar{X}_n = \mathrm{Var}\left( \frac{1}{n} \sum_{i=1}^{n} X_i \right) = \frac{1}{n^2} \sum_{i=1}^{n} \mathrm{Var}\, X_i = \frac{1}{n^2} \sum_{i=1}^{n} \sigma^2 = \frac{\sigma^2}{n}.

Hence, the sample mean has less variability than any of the individual random variables that are being averaged. Averaging decreases variation. Furthermore, as n → ∞, Var X̄n → 0. Thus, by repeating our experiment enough times, we can make the variation in the sample mean as small as we please.
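This calculation is easy to corroborate by simulation. The following sketch, which is illustrative code rather than part of the original text, repeats the chapter's χ²(3) experiment (for which µ = 3 and σ² = 6) and compares the variance of the simulated sample means with σ²/n; the helper name sim.xbar is hypothetical.

# Simulate 1000 observed values of the sample mean for several sample sizes
# and compare their variance with sigma^2/n. (Illustrative sketch.)
sim.xbar <- function(n, reps = 1000) replicate(reps, mean(rchisq(n, df = 3)))
for (n in c(5, 20, 80)) {
  xbar <- sim.xbar(n)
  cat("n =", n,
      "  var of simulated sample means =", round(var(xbar), 4),
      "  sigma^2/n =", round(6/n, 4), "\n")
}

Passing the simulated values to the density function, e.g. plot(density(sim.xbar(5))), reproduces pictures of the same kind as Figures 8.1–8.3.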
• 189. 8.2. THE WEAK LAW OF LARGE NUMBERS 187

[Figure: kernel density estimate; horizontal axis: observed value of sample mean (0 to 9); vertical axis: estimated density (0.0 to 1.4).]

Figure 8.3: Kernel density estimate constructed from 1000 observed values of X̄n for n = 80. X1, . . . , Xn ∼ χ²(3) and µ = EXi = 3.

The preceding remarks suggest that, if the population mean is unknown, then we can draw inferences about it by observing the behavior of the sample mean. This fundamental insight is the basis for a considerable portion of this book. The remainder of this chapter refines the relation between the population mean and the behavior of the sample mean.

8.2 The Weak Law of Large Numbers

Recall Definition 2.12 from Section 2.4: a sequence of real numbers {yn} converges to a limit c ∈ ℜ if and only if, for every ǫ > 0, there exists a natural number N such that yn ∈ (c − ǫ, c + ǫ) for each n ≥ N. Our first task is to generalize from convergence of a sequence of real numbers to convergence of a sequence of random variables.

If we replace {yn}, a sequence of real numbers, with {Yn}, a sequence of random variables, then the event that Yn ∈ (c − ǫ, c + ǫ) is uncertain. Rather than demand that this event must occur for n sufficiently large, we ask only
• 190. 188 CHAPTER 8. LOTS OF DATA

that the probability of this event tend to unity as n tends to infinity. This results in

Definition 8.1 A sequence of random variables {Yn} converges in probability to a constant c, written Yn →P c, if and only if, for every ǫ > 0,

\lim_{n \to \infty} P\left( Y_n \in (c - \epsilon, c + \epsilon) \right) = 1.

[Figure: pdfs of Y5, Y25, and Y125, each increasingly concentrated within the interval (c − ǫ, c + ǫ) around c.]

Figure 8.4: An example of convergence in probability.

Convergence in probability is depicted in Figure 8.4 using the pdfs fn of continuous random variables Yn. (One could also use the pmfs of discrete random variables.) We see that

p_n = P\left( Y_n \in (c - \epsilon, c + \epsilon) \right) = \int_{c-\epsilon}^{c+\epsilon} f_n(x) \, dx

is tending to unity as n increases. Notice, however, that each pn < 1. The concept of convergence in probability allows us to state an important result.

Theorem 8.1 (Weak Law of Large Numbers) Let X1, X2, . . . be any sequence of independent and identically distributed random variables having
  • 191. 8.2. THE WEAK LAW OF LARGE NUMBERS 189 finite mean µ and finite variance σ2. Then X̄n P → µ. This result is of considerable consequence. It states that, as we average more and more Xi, the average values that we observe tend to be distributed closer and closer to the theoretical average of the Xi. This property of the sample mean strengthens our contention that the behavior of X̄n provides more and more information about the value of µ as n increases. The Weak Law of Large Numbers (WLLN) has an important special case. Corollary 8.1 (Law of Averages) Let A be any event and consider a se- quence of independent and identical experiments in which we observe whether or not A occurs. Let p = P(A) and define independent and identically dis- tributed random variables by Xi = ( 1 A occurs 0 Ac occurs ) . Then Xi ∼ Bernoulli(p), X̄n is the observed frequency with which A occurs in n trials, and µ = EXi = p = P(A) is the theoretical probability of A. The WLLN states that the former tends to the latter as the number of trials increases. The Law of Averages formalizes our common experience that “things tend to average out in the long run.” For example, we might be surprised if we tossed a fair coin n = 10 times and observed X̄10 = 0.9; however, if we knew that the coin was indeed fair (p = 0.5), then we would remain confident that, as n increased, X̄n would eventually tend to 0.5. Notice that the conclusion of the Law of Averages is the frequentist interpretation of probability. Instead of defining probability via the notion of long-run frequency, we defined probability via the Kolmogorov axioms. Although our approach does not require us to interpret probabilities in any one way, the Law of Averages states that probability necessarily behaves in the manner specified by frequentists. Finally, recall from Section 7.1 that the empirical probability of an event A is the observed frequency with which A occurs in the sample: P̂n(A) = # {xi ∈ A} · 1 n ,
• 192. 190 CHAPTER 8. LOTS OF DATA

By the Law of Averages, this quantity tends to the true probability of A as the size of the sample increases. Thus, the theory of probability provides a mathematical justification for approximating P with P̂n when P is unknown.

8.3 The Central Limit Theorem

The Weak Law of Large Numbers states a precise sense in which the distribution of values of the sample mean collapses to the population mean as the size of the sample increases. As interesting and useful as this fact is, it leaves several obvious questions unanswered:

1. How rapidly does the sample mean tend toward the population mean?

2. How does the shape of the sample mean's distribution change as the sample mean tends toward the population mean?

To answer these questions, we convert the random variables in which we are interested to standard units.

We have supposed the existence of a sequence of independent and identically distributed random variables, X1, X2, . . ., with finite mean µ = EXi and finite variance σ² = Var Xi. We are interested in the sum and/or the average of X1, . . . , Xn. It will be helpful to identify several crucial pieces of information for each random variable of interest:

   random variable        expected value    standard deviation    standard units
   Xi                     µ                 σ                     (Xi − µ)/σ
   X1 + · · · + Xn        nµ                √n σ                  (X1 + · · · + Xn − nµ)/(√n σ)
   X̄n                     µ                 σ/√n                  (X̄n − µ)/(σ/√n)

First we consider Xi. Notice that converting to standard units does not change the shape of the distribution of Xi. For example, if Xi ∼ Bernoulli(0.5), then the distribution of Xi assigns equal probability to each of two values, x = 0 and x = 1. If we convert to standard units, then the distribution of

Z_1 = \frac{X_i - \mu}{\sigma} = \frac{X_i - 0.5}{0.5}
• 193. 8.3. THE CENTRAL LIMIT THEOREM 191

also assigns equal probability to each of two values, z1 = −1 and z1 = 1. In particular, notice that converting Xi to standard units does not automatically result in a normally distributed random variable.

Next we consider the sum and the average of X1, . . . , Xn. Notice that, after converting to standard units, these quantities are identical:

Z_n = \frac{\sum_{i=1}^{n} X_i - n\mu}{\sqrt{n}\,\sigma} = \frac{(1/n)\left( \sum_{i=1}^{n} X_i - n\mu \right)}{(1/n)\,\sqrt{n}\,\sigma} = \frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}}.

It is this new random variable on which we shall focus our attention. We begin by observing that

\mathrm{Var}\left[ \sqrt{n}\left( \bar{X}_n - \mu \right) \right] = \mathrm{Var}\left( \sigma Z_n \right) = \sigma^2\,\mathrm{Var}\left( Z_n \right) = \sigma^2

is constant. The WLLN states that (X̄n − µ) →P 0, so √n is a “magnification factor” that maintains random variables with a constant positive variance. We conclude that 1/√n measures how rapidly the sample mean tends toward the population mean.

Now we turn to the more refined question of how the distribution of the sample mean changes as the sample mean tends toward the population mean. By converting to standard units, we are able to distinguish changes in the shape of the distribution from changes in its mean and variance. Despite our inability to make general statements about the behavior of Z1, it turns out that we can say quite a bit about the behavior of Zn as n becomes large.

The following theorem is one of the most remarkable and useful results in all of mathematics. It is fundamental to the study of both probability and statistics.

Theorem 8.2 (Central Limit Theorem) Let X1, X2, . . . be any sequence of independent and identically distributed random variables having finite mean µ and finite variance σ². Let

Z_n = \frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}},

let Fn denote the cdf of Zn, and let Φ denote the cdf of the standard normal distribution. Then, for any fixed value z ∈ ℜ,

P\left( Z_n \le z \right) = F_n(z) \to \Phi(z) \quad \text{as } n \to \infty.
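Theorem 8.2 can be illustrated numerically. The sketch below is illustrative code, not part of the text: it standardizes sample means of Bernoulli(0.5) random variables (µ = 0.5, σ = 0.5) and compares the empirical cdf of Zn at a few points z with Φ(z) computed by pnorm; the name clt.demo is hypothetical.

# Compare the empirical cdf of Zn with the standard normal cdf. (Sketch.)
clt.demo <- function(n, reps = 10000) {
  zn <- replicate(reps,
                  (mean(rbinom(n, size = 1, prob = 0.5)) - 0.5) / (0.5/sqrt(n)))
  z <- c(-2, -1, 0, 1, 2)
  cbind(z   = z,
        Fn  = sapply(z, function(v) mean(zn <= v)),   # empirical cdf of Zn
        Phi = pnorm(z))                               # standard normal cdf
}
clt.demo(30)   # with n = 30, Fn(z) should already be close to Phi(z)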
• 194. 192 CHAPTER 8. LOTS OF DATA

The Central Limit Theorem (CLT) states that the behavior of the average (or, equivalently, the sum) of a large number of independent and identically distributed random variables, after conversion to standard units, will resemble the behavior of a standard normal random variable. This is true regardless of the distribution of the random variables that are being averaged. Thus, the CLT allows us to approximate a variety of probabilities that otherwise would be intractable.

Of course, we require some sense of how many random variables must be averaged in order for the normal approximation to be reasonably accurate. This does depend on the distribution of the random variables, but a popular rule of thumb is that the normal approximation can be used if n ≥ 30. Often, the normal approximation works quite well with even smaller n.

Example 8.1 A chemistry professor is attempting to determine the conformation of a certain molecule. To measure the distance between a pair of nearby hydrogen atoms, she uses NMR spectroscopy. She knows that this measurement procedure has an expected value equal to the actual distance and a standard deviation of 0.5 angstroms. If she replicates the experiment 36 times, then what is the probability that the average measured value will fall within 0.1 angstroms of the true value?

Let Xi denote the measurement obtained from replication i, for i = 1, . . . , 36. We are told that µ = EXi is the actual distance between the atoms and that σ² = Var Xi = 0.5². Let Z ∼ Normal(0, 1). Then, applying the CLT,

P\left( \mu - 0.1 < \bar{X}_{36} < \mu + 0.1 \right)
= P\left( \mu - 0.1 - \mu < \bar{X}_{36} - \mu < \mu + 0.1 - \mu \right)
= P\left( \frac{-0.1}{0.5/6} < \frac{\bar{X}_{36} - \mu}{0.5/6} < \frac{0.1}{0.5/6} \right)
= P\left( -1.2 < Z_n < 1.2 \right)
\doteq P\left( -1.2 < Z < 1.2 \right)
= \Phi(1.2) - \Phi(-1.2).

Now we use R:

> pnorm(1.2)-pnorm(-1.2)
[1] 0.7698607

We conclude that there is a chance of approximately 77% that the average of the measured values will fall within 0.1 angstroms of the true value. Notice that it is not possible to compute the exact probability. To do so would require knowledge of the distribution of the Xi.
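Although the exact probability cannot be computed without knowing the distribution of the Xi, one can probe the quality of the CLT approximation by simulating under an assumed measurement distribution. The sketch below is illustrative code only; the choice of a uniform error distribution and of µ = 2 angstroms are assumptions made for the sake of illustration, not part of the example.

# Monte Carlo check of Example 8.1 under an assumed uniform measurement
# error with standard deviation 0.5 angstroms. (Illustrative sketch.)
mu <- 2                                  # hypothetical true distance, in angstroms
a  <- 0.5 * sqrt(3)                      # Uniform(-a, a) has standard deviation a/sqrt(3) = 0.5
xbar <- replicate(100000, mean(mu + runif(36, -a, a)))
mean(abs(xbar - mu) < 0.1)               # compare with pnorm(1.2) - pnorm(-1.2) = 0.7699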
  • 195. 8.3. THE CENTRAL LIMIT THEOREM 193 It is sometimes useful to rewrite the normal approximations derived from the CLT as statements of the approximate distributions of the sum and the average. For the sum we obtain n X i=1 Xi · ∼ Normal ³ nµ, nσ2 ´ (8.1) and for the average we obtain ¯ Xn · ∼ Normal à µ, σ2 n ! . (8.2) These approximations are especially useful when combined with Theorem 5.2. Example 8.2 The chemistry professor in Example 8.1 asks her grad- uate student to replicate the experiment that she performed an additional 64 times. What is the probability that the averages of their respective measured values will fall within 0.1 angstroms of each other? The professor’s measurements are X1, . . . , X36 ∼ ³ µ, 0.52 ´ . Applying (8.2), we obtain X̄36 · ∼ Normal µ µ, 0.25 36 ¶ . Similarly, the student’s measurements are Y1, . . . , Y64 ∼ ³ µ, 0.52 ´ . Applying (8.2), we obtain Ȳ64 · ∼ Normal µ µ, 0.25 64 ¶ or −Ȳ64 · ∼ Normal µ −µ, 0.25 64 ¶ . Now we apply Theorem 5.2 to conclude that X̄36 − Ȳ64 = X̄36 + ¡ −Ȳ64 ¢ · ∼ Normal à 0, 0.25 36 + 0.25 64 = 52 482 ! .
  • 196. 194 CHAPTER 8. LOTS OF DATA Converting to standard units, it follows that P ¡ −0.1 < X̄36 − Ȳ64 < 0.1 ¢ = P Ã −0.1 5/48 < X̄36 − Ȳ64 5/48 < 0.1 5/48 ! . = P (−0.96 < Z < 0.96) = Φ(0.96) − Φ(−0.96). Now we use R: > pnorm(.96)-pnorm(-.96) [1] 0.6629448 We conclude that there is a chance of approximately 66% that the two av- erages will fall with 0.1 angstroms of each other. The CLT has a long history. For the special case of Xi ∼ Bernoulli(p), a version of the CLT was obtained by De Moivre in the 1730s. The first attempt at a more general CLT was made by Laplace in 1810, but definitive results were not obtained until the second quarter of the 20th century. The- orem 8.2 is actually a very special case of far more general results established during that period. However, with one exception to which we now turn, it is sufficiently general for our purposes. The astute reader may have noted that, in Examples 8.1 and 8.2, we assumed that the population mean µ was unknown but that the population variance σ2 was known. Is this plausible? In Examples 8.1 and 8.2, it might be that the nature of the instrumentation is sufficiently well understood that the population variance may be considered known. In general, however, it seems somewhat implausible that we would know the population variance and not know the population mean. The normal approximations employed in Examples 8.1 and 8.2 require knowledge of the population variance. If the variance is not known, then it must be estimated from the measured values. Chapters 7 and 9 will introduce procedures for doing so. In anticipation of those procedures, we state the following generalization of Theorem 8.2: Theorem 8.3 Let X1, X2, . . . be any sequence of independent and identi- cally distributed random variables having finite mean µ and finite variance σ2. Suppose that D1, D2, . . . is a sequence of random variables with the prop- erty that D2 n P → σ2 and let Tn = X̄n − µ Dn/ √ n .
  • 197. 8.3. THE CENTRAL LIMIT THEOREM 195 Let Fn denote the cdf of Tn, and let Φ denote the cdf of the standard normal distribution. Then, for any fixed value t ∈ ℜ, P (Tn ≤ t) = Fn(t) → Φ(t) as n → ∞. We conclude this section with a warning. Statisticians usually invoke the CLT in order to approximate the distribution of a sum or an average of random variables X1, . . . , Xn that are observed in the course of an ex- periment. The Xi need not be normally distributed themselves—indeed, the grandeur of the CLT is that it does not assume normality of the Xi. Nevertheless, we will discover that many important statistical procedures do assume that the Xi are normally distributed. Researchers who hope to use these procedures naturally want to believe that their Xi are normally distributed. Often, they look to the CLT for reassurance. Many think that, if only they replicate their experiment enough times, then somehow their observations will be drawn from a normal distribution. This is ab- surd! Suppose that a fair coin is tossed once. Let X1 denote the number of Heads, so that X1 ∼ Bernoulli(0.5). The Bernoulli distribution is not at all like a normal distribution. If we toss the coin one million times, then each Xi ∼ Bernoulli(0.5). The Bernoulli distribution does not miraculously become a normal distribution. Remember, The Central Limit Theorem does not say that a large sample was necessarily drawn from a normal distribution! On some occasions, it is possible to invoke the CLT to anticipate that the random variable to be observed will behave like a normal random variable. This involves recognizing that the observed random variable is the sum or the average of lots of independent and identically distributed random variables that are not observed. Example 8.3 To study the effect of an insect growth regulator (IGR) on termite appetite, an entomologist plans an experiment. Each replication of the experiment will involve placing 100 ravenous termites in a container with a dried block of wood. The block of wood will be weighed before the experiment begins and after a fixed number of days. The random variable of interest is the decrease in weight, the amount of wood consumed by the termites. Can we anticipate the distribution of this random variable?
  • 198. 196 CHAPTER 8. LOTS OF DATA The total amount of wood consumed is the sum of the amounts con- sumed by each termite. Assuming that the termites behave independently and identically, the CLT suggests that this sum should be approximately normally distributed. When reasoning as in Example 8.3, one should construe the CLT as no more than suggestive. Most natural processes are far too complicated to be modelled so simplistically with any guarantee of accuracy. One should always examine the observed values to see if they are consistent with one’s theorizing. 8.4 Exercises 1. Suppose that I toss a fair coin 100 times and observe 60 Heads. Now I decide to toss the same coin another 100 times. Does the Law of Averages imply that I should expect to observe another 40 Heads? 2. In Example 7.7, we observed a sample of size n = 100. A normal prob- ability plot and kernel density estimate constructed from this sample suggested that the observations had been drawn from a nonnormal distribution. True or False: It follows from the Central Limit Theorem that a kernel density estimate constructed from a much larger sample would more closely resemble a normal distribution. 3. Suppose that an astragalus has the following probabilities of producing the four possible uppermost faces: P(1) = P(6) = 0.1, P(3) = P(4) = 0.4. This astragalus is to be thrown 100 times. Let Xi denote the value of the uppermost face that results from throw i. (a) Compute the expected value and the variance of Xi. (b) Compute the probability that the average value of the 100 throws will exceed 3.6. 4. Chris owns a laser pointer that is powered by two AAAA batteries. A pair of batteries will power the pointer for an average of five hours use, with a standard deviation of 30 minutes. Chris decides to take advantage of a sale and buys 20 2-packs of AAAA batteries. What is the probability that he will get to use his laser pointer for at least 105 hours before he needs to buy more batteries?
  • 199. 8.4. EXERCISES 197 5. Consider a box that contains 10 tickets, labelled {1, 1, 1, 1, 2, 5, 5, 10, 10, 10}. From this box, I propose to draw (with replacement) n = 40 tickets. Let Y denote the sum of the values on the tickets that are drawn. (a) To approximate P(170.5 < Y < 199.5), one Math 351 student writes an R function box.model that simulates the proposed ex- periment. Evaluating box.model is like observing a value, y, of the random variable Y . Then she writes a loop that repeat- edly evaluates box.model and computes the proportion of times that box.model produces y ∈ (170.5, 199.5). She reasons that, if she evaluates box.model a large number of times, then the observed proportion of y ∈ (170.5, 199.5) should approximate P(170.5 < Y < 199.5). Is her reasoning justified? Why or why not? (b) Another student suggests that P(170.5 < Y < 199.5) can be approximated by performing the following R commands: > se <- sqrt(585.6) > pnorm(199.5,mean=184,sd=se)- + pnorm(170.5,mean=184,sd=se) Do you agree? Why or why not? (c) Which approach will produce the more accurate approximation of P(170.5 < Y < 199.5)? Explain your reasoning. 6. A certain financial theory posits that daily fluctuations in stock prices are independent random variables. Suppose that the daily price fluc- tuations (in dollars) of a certain blue-chip stock are independent and identically distributed random variables X1, X2, X3, . . ., with EXi = 0.01 and Var Xi = 0.01. (Thus, if today’s price of this stock is $50, then tomorrow’s price is $50 + X1, etc.) Suppose that the daily price fluctuations (in dollars) of a certain internet stock are independent and identically distributed random variables Y1, Y2, Y3, . . ., with EYj = 0 and Var Yj = 0.25. Now suppose that both stocks are currently selling for $50 per share and you wish to invest $50 in one of these two stocks for a period of 400 market days. Assume that the costs of purchasing and selling a share of either stock are zero.
  • 200. 198 CHAPTER 8. LOTS OF DATA (a) Approximate the probability that you will make a profit on your investment if you purchase a share of the blue-chip stock. (b) Approximate the probability that you will make a profit on your investment if you purchase a share of the internet stock. (c) Approximate the probability that you will make a profit of at least $20 if you purchase a share of the blue-chip stock. (d) Approximate the probability that you will make a profit of at least $20 if you purchase a share of the internet stock. (e) Assuming that the internet stock fluctuations and the blue-chip stock fluctuations are independent, approximate the probability that, after 400 days, the price of the internet stock will exceed the price of the blue-chip stock.
  • 201. Chapter 9 Inference In Chapters 3–8 we developed methods for studying the behavior of ran- dom variables. Given a specific probability distribution, we can calcu- late the probabilities of various events. For example, knowing that Y ∼ Binomial(n = 100; p = 0.5), we can calculate P(40 ≤ Y ≤ 60). Roughly speaking, statistics is concerned with the opposite sort of problem. For ex- ample, knowing that Y ∼ Binomial(n = 100; p), where the value of p is unknown, and having observed Y = y (say y = 32), what can we say about p? The phrase statistical inference describes any procedure for extracting information about a probability distribution from an observed sample. The present chapter introduces the fundamental principles of statistical inference. We will discuss three types of statistical inference—point esti- mation, hypothesis testing, and set estimation—in the context of drawing inferences about a single population mean. More precisely, we will consider the following situation: 1. X1, . . . , Xn are independent and identically distributed random vari- ables. We observe a sample, ~ x = {x1, . . . , xn}. 2. Both EXi = µ and Var Xi = σ2 exist and are finite. We are interested in drawing inferences about the population mean µ, a quantity that is fixed but unknown. 3. The sample size, n, is sufficiently large that we can use the normal approximation provided by the Central Limit Theorem. We begin, in Section 9.1, by examining a narrative that is sufficiently nuanced to motivate each type of inferential technique. We then proceed to 199
  • 202. 200 CHAPTER 9. INFERENCE discuss point estimation (Section 9.2), hypothesis testing (Sections 9.3 and 9.4), and set estimation (Section 9.5). Although we are concerned exclusively with large-sample inferences about a single population mean, it should be appreciated that this concern often arises in practice. More importantly, the fundamental concepts that we introduce in this context are common to virtually all problems that involve statistical inference. 9.1 A Motivating Example We consider an artificial example that permits us to scrutinize the precise nature of statistical reasoning. Two siblings, a magician (Arlen) and an at- torney (Robin) agree to resolve their disputed ownership of an Erté painting by tossing a penny. Arlen produces a penny and, just as Robin is about to toss it in the air, Arlen smoothly suggests that spinning the penny on a table might ensure better randomization. Robin assents and spins the penny. As it spins, Arlen calls “Tails!” The penny comes to rest with Tails facing up and Arlen takes possession of the Erté. Robin is left with the penny. That evening, Robin wonders if she has been had. She decides to perform an experiment. She spins the same penny on the same table 100 times and observes 68 Tails. It occurs to Robin that perhaps spinning this penny was not entirely fair, but she is reluctant to accuse her brother of impropriety until she is convinced that the results of her experiment cannot be dismissed as coincidence. How should she proceed? It is easy to devise a mathematical model of Robin’s experiment: each spin of the penny is a Bernoulli trial and the experiment is a sequence of n = 100 trials. Let Xi denote the outcome of spin i, where Xi = 1 if Heads is observed and Xi = 0 if Tails is observed. Then X1, . . . , X100 ∼ Bernoulli(p), where p is the fixed but unknown (to Robin!) probability that a single spin will result in Heads. The probability distribution Bernoulli(p) is our mathematical abstraction of a population and the population parameter of interest is µ = EXi = p, the population mean. Let Y = 100 X i=1 Xi, the total number of Heads obtained in n = 100 spins. Under the mathe- matical model that we have proposed, Y ∼ Binomial(p). In performing her
  • 203. 9.1. A MOTIVATING EXAMPLE 201 experiment, Robin observes a sample ~ x = {x1, . . . , x100} and computes y = 100 X i=1 xi, the total number of Heads in her sample. In our narrative, y = 32. We emphasize that p ∈ [0, 1] is fixed but unknown. Robin’s goal is to draw inferences about this fixed but unknown quantity. We consider three questions that she might ask: 1. What is the true value of p? More precisely, what is a reasonable guess as to the true value of p? 2. Is p = 0.5? Specifically, is the evidence that p 6= 0.5 so compelling that Robin can comfortably accuse Arlen of impropriety? 3. What are plausible values of p? In particular, is there a subset of [0, 1] that Robin can confidently claim contains the true value of p? The first set of questions introduces a type of inference that statisticians call point estimation. We have already encountered (in Chapter 7) a natural approach to point estimation, the plug-in principle. In the present case, the plug-in principle suggests estimating the theoretical probability of success, p, by computing the observed proportion of successes, p̂ = y n = 32 100 = 0.32. The second set of questions introduces a type of inference that statis- ticians call hypothesis testing. Having calculated p̂ = 0.32 6= 0.5, Robin is inclined to guess that p 6= 0.5. But how compelling is the evidence that p 6= 0.5? Let us play devil’s advocate: perhaps p = 0.5, but chance produced “only” y = 32 instead of a value nearer EY = np = 100 × 0.5 = 50. This is a possibility that we can quantify. If Y ∼ Binomial(n = 100; p = 0.5), then the probability that Y will deviate from its expected value by at least |50 − 32| = 18 is p = P (|Y − 50| ≥ 18) = P(Y ≤ 32 or Y ≥ 68) = P(Y ≤ 32) + P(Y ≥ 68) = P(Y ≤ 32) + 1 − P(Y ≤ 67) = pbinom(32,100,.5)+1-pbinom(67,100,.5) = 0.0004087772.
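This computation is easily reproduced in R, and a quick simulation provides a sanity check. The session below is illustrative; only the pbinom value appears in the text.

> pbinom(32,100,.5) + 1 - pbinom(67,100,.5)
[1] 0.0004087772
> y <- rbinom(100000, 100, 0.5)     # simulate Y under the assumption p = 0.5
> mean(abs(y - 50) >= 18)           # observed proportion should be near 0.0004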
  • 204. 202 CHAPTER 9. INFERENCE This significance probability seems fairly small—perhaps small enough to convince Robin that in fact p 6= 0.5. The third set of questions introduces a type of inference that statisticians call set estimation. We have just tested the possibility that p = p0 in the special case p0 = 0.5. Now, imagine testing the possibility that p = p0 for each p0 ∈ [0, 1]. Those p0 that are not rejected as inconsistent with the observed data, y = 32, will constitute a set of plausible values of p. To implement this procedure, Robin will have to adopt a standard of implausibility. Perhaps she decides to reject p0 as implausible when the corresponding significance probability, p = P (|Y − 100p0| ≥ |32 − 100p0|) = P (Y − 100p0 ≥ |32 − 100p0|) + P (Y − 100p0 ≤ −|32 − 100p0|) = P (Y ≥ 100p0 + |32 − 100p0|) + P (Y ≤ 100p0 − |32 − 100p0|) , satisfies p ≤ 0.1. Recalling that Y ∼ Binomial(100; p0) and using the R function pbinom, some trial and error reveals that p > 0.1 if p0 lies in the interval [0.245, 0.404]. (The endpoints of this interval are included.) Notice that this interval does not contain p0 = 0.5, which we had already rejected as implausible. 9.2 Point Estimation The goal of point estimation is to make a reasonable guess of the unknown value of a designated population quantity, e.g., the population mean. The quantity that we hope to guess is called the estimand. 9.2.1 Estimating a Population Mean Suppose that the estimand is µ, the population mean. The plug-in principle suggests estimating µ by computing the mean of the empirical distribution. This leads to the plug-in estimate of µ, µ̂ = x̄n. Thus, we estimate the mean of the population by computing the mean of the sample, which is certainly a natural thing to do. We will distinguish between x̄n = 1 n n X i=1 xi,
  • 205. 9.2. POINT ESTIMATION 203 a real number that is calculated from the sample ~ x = {x1, . . . , xn}, and X̄n = 1 n n X i=1 Xi, a random variable that is a function of the random variables X1, . . . , Xn. (Such a random variable is called a statistic.) The latter is our rule for guessing, an estimation procedure or estimator. The former is the guess itself, the result of applying our rule for guessing to the sample that we observed, an estimate. The quality of an individual estimate depends on the individual sample from which it was computed and is therefore affected by chance variation. Furthermore, it is rarely possible to assess how close to correct an individual estimate may be. For these reasons, we study estimation procedures and identify the statistical properties that these random variables possess. In the present case, two properties are worth noting: 1. We know that EX̄n = µ. Thus, on the average, our procedure for guessing the population mean produces the correct value. We express this property by saying that X̄n is an unbiased estimator of µ. The property of unbiasedness is intuitively appealing and sometimes is quite useful. However, many excellent estimation procedures are biased and some unbiased estimators are unattractive. For example, EX1 = µ by definition, so X1 is also an unbiased estimator of µ; but most researchers would find the prospect of estimating a population mean with a single observation to be rather unappetizing. Indeed, Var X̄n = σ2 n < σ2 = Var X1, so the unbiased estimator X̄n has smaller variance than the unbiased estimator X1. 2. The Weak Law of Large Numbers states that X̄n P → µ. Thus, as the sample size increases, the estimator X̄n converges in probability to the estimand µ. We express this property by saying that X̄n is a consistent estimator of µ. The property of consistency is essential—it is difficult to conceive a circumstance in which one would be willing to use an estimation proce- dure that might fail regardless of how much data one collected. Notice that the unbiased estimator X1 is not consistent.
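A small simulation, sketched below with illustrative code that is not part of the text, makes both properties visible for a χ²(3) population (µ = 3): X̄n and X1 are both unbiased, but only X̄n concentrates around µ as n increases.

# Unbiasedness and consistency: compare the sample mean with the single
# observation X1 as estimators of mu = 3. (Illustrative sketch.)
for (n in c(10, 100, 1000)) {
  xbar <- replicate(1000, mean(rchisq(n, df = 3)))   # sample mean of n observations
  x1   <- rchisq(1000, df = 3)                       # estimator that uses only X1
  cat("n =", n,
      "  mean(xbar) =", round(mean(xbar), 3),
      "  var(xbar) =", round(var(xbar), 4),
      "  var(x1) =", round(var(x1), 3), "\n")
}

The averages of both estimators stay near 3, while the variance of X̄n shrinks like σ²/n = 6/n and the variance of X1 remains near 6 no matter how large n becomes.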
  • 206. 204 CHAPTER 9. INFERENCE 9.2.2 Estimating a Population Variance Now suppose that the estimand is σ2, the population variance. Although we are concerned with drawing inferences about the population mean, we will discover that hypothesis testing and set estimation may require knowing the population variance. If the population variance is not known, then it must be estimated from the sample. The plug-in principle suggests estimating σ2 by computing the variance of the empirical distribution. This leads to the plug-in estimate of σ2, c σ2 = 1 n n X i=1 (xi − x̄n)2 . The plug-in estimator of σ2 is biased; in fact, E " 1 n n X i=1 ¡ Xi − X̄n ¢2 # = n − 1 n σ2 < σ2 . This does not present any particular difficulties; however, if we desire an unbiased estimator, then we simply multiply the plug-in estimator by the factor n/(n − 1), obtaining S2 n = n n − 1 " 1 n n X i=1 ¡ Xi − X̄n ¢2 # = 1 n − 1 n X i=1 ¡ Xi − X̄n ¢2 . (9.1) The statistic S2 n is the most popular estimator of σ2 and many books refer to the estimate s2 n = 1 n − 1 n X i=1 (xi − x̄n)2 as the sample variance. (For example, the R command var computes s2 n.) In fact, both estimators are perfectly reasonable, consistent estimators of σ2. We will prefer S2 n for the rather mundane reason that using it will simplify some of the formulas that we will encounter. 9.3 Heuristics of Hypothesis Testing Hypothesis testing is appropriate for situations in which one wants to guess which of two possible statements about a population is correct. For example, in Section 9.1 we considered the possibility that spinning a penny is fair (p = 0.5) versus the possibility that spinning a penny is not fair (p 6= 0.5). The logic of hypothesis testing is of a familar sort:
  • 207. 9.3. HEURISTICS OF HYPOTHESIS TESTING 205 If an alleged coincidence seems too implausible, then we tend to believe that it wasn’t really a coincidence. Man has engaged in this kind of reasoning for millenia. In Cicero’s De Divinatione, Quintus exclaims: “They are entirely fortuitous you say? Come! Come! Do you really mean that? . . . When the four dice [astragali] produce the venus-throw you may talk of accident: but suppose you made a hundred casts and the venus-throw appeared a hundred times; could you call that accidental?”1 The essence of hypothesis testing is captured by the familiar saying, “Where there’s smoke, there’s fire.” In this section we formalize such rea- soning, appealing to three prototypical examples: 1. Assessing circumstantial evidence in a criminal trial. For simplicity, suppose that the defendant has been charged with a single count of pre-meditated murder and that the jury has been in- structed to either convict of murder in the first degree or acquit. The defendant had motive, means, and opportunity. Furthermore, two types of blood were found at the crime scene. One type was evidently the victim’s. Laboratory tests demonstrated that the other type was not the victim’s, but failed to demonstrate that it was not the defen- dant’s. What should the jury do? The evidence used by the prosecution to try to establish a connection between the blood of the defendant and blood found at the crime scene is probabilistic, i.e., circumstantial. It will likely be presented to the jury in the language of mathematics, e.g., “Both blood samples have characteristics x, y and z; yet only 0.5% of the population has such blood.” The defense will argue that this is merely an unfortunate coincidence. The jury must evaluate the evidence and decide whether or not such a coincidence is too extraordinary to be believed, i.e., they must decide if their assent to the proposition that the defendant committed the murder rises to a level of certainty sufficient to convict. 1 Cicero rejected the conclusion that a run of one hundred venus-throws is so improbable that it must have been caused by divine intervention; however, Cicero was castigating the practice of divination. Quintus was entirely correct in suggesting that a run of one hundred venus-throws should not be rationalized as “entirely fortuitous.” A modern scientist might conclude that an unusual set of astragali had been used to produce this remarkable result.
  • 208. 206 CHAPTER 9. INFERENCE If the combined weight of the evidence against the defendant is a chance of one in ten, then the jury is likely to acquit; if it is a chance of one in a million, then the jury is likely to convict. 2. Assessing data from a scientific experiment. A study2 of termite foraging behavior reached the controversial conclu- sion that two species of termites compete for scarce food resources. In this study, a site in the Sonoran desert was cleared of dead wood and toilet paper rolls were set out as food sources. The rolls were examined regularly over a period of many weeks and it was observed that only very rarely was a roll infested with both species of termites. Was this just a coincidence or were the two species competing for food? The scientists constructed a mathematical model of termite foraging behavior under the assumption that the two species forage indepen- dently of each other. This model was then used to quantify the prob- ability that infestation patterns such as the one observed arise due to chance. This probability turned out to be just one in many billions— a coincidence far too extraordinary to be dismissed as such—and the researchers concluded that the two species were competing. 3. Assessing the results of Robin’s penny-spinning experiment. In Section 9.1, we noted that Robin observed only y = 32 Heads when she would expect EY = 50 Heads if indeed p = 0.5. This is a dis- crepancy of |32 − 50| = 18, and we considered that possibility that such a large discrepancy might have been produced by chance. More precisely, we calculated p = P(|Y − EY | ≥ 18) under the assumption that p = 0.5, obtaining p . = 0.0004. On this basis, we speculated that Robin might be persuaded to accuse her brother of cheating. In each of the preceding examples, a binary decision was based on a level of assent to probabilistic evidence. At least conceptually, this level can be quantified as a significance probability, which we loosely interpret to mean the probability that chance would produce a coincidence at least as extraor- dinary as the phenomenon observed. This begs an obvious question, which we pose now for subsequent consideration: how small should a significance probability be for one to conclude that a phenomenon is not a coincidence? 2 S.C. Jones and M.W. Trosset (1991). Interference competition in desert subterranean termites. Entomologia Experimentalis et Applicata, 61:83–90.
  • 209. 9.3. HEURISTICS OF HYPOTHESIS TESTING 207 We now proceed to explicate a formal model for statistical hypothesis testing that was proposed by J. Neyman and E. S. Pearson in the late 1920s and 1930s. Our presentation relies heavily on drawing simple analogies to criminal law, which we suppose is a more familiar topic than statistics to most students. The States of Nature The states of nature are the possible mechanisms that might have produced the observed phenomenon. Mathematically, they are the possible probability distributions under consideration. Thus, in the penny-spinning example, the states of nature are the Bernoulli trials indexed by p ∈ [0, 1]. In hypothesis testing, the states of nature are partitioned into two sets or hypotheses. In the penny-spinning example, the hypotheses that we formulated were p = 0.5 (penny-spinning is fair) and p 6= 0.5 (penny-spinning is not fair); in the legal example, the hypotheses are that the defendant did commit the murder (the defendant is factually guilty) and that the defendant did not commit the murder (the defendant is factually innocent). The goal of hypothesis testing is to decide which hypothesis is correct, i.e., which hypothesis contains the true state of nature. In the penny- spinning example, Robin wants to determine whether or not penny-spinning is fair. In the termite example, Jones and Trosset wanted to determine whether or not termites were foraging independently. More generally, scien- tists usually partition the states of nature into a hypothesis that corresponds to a theory that the experiment is designed to investigate and a hypothesis that corresponds to a chance explanation; the goal of hypothesis testing is to decide which explanation is correct. In a criminal trial, the jury would like to determine whether the defendant is factually innocent of factually guilty—in the words of the United States Supreme Court in Bullington v. Missouri (1981): Underlying the question of guilt or innocence is an objective truth: the defendant did or did not commit the crime. From the time an accused is first suspected to the time the decision on guilt or innocence is made, our system is designed to enable the trier of fact to discover that truth. Formulating appropriate hypotheses can be a delicate business. In the penny-spinning example, we formulated hypotheses p = 0.5 and p 6= 0.5. These hypotheses are appropriate if Robin wants to determine whether or
  • 210. 208 CHAPTER 9. INFERENCE not penny-spinning is fair. However, one can easily imagine that Robin is not interested in whether or not penny-spinning is fair, but rather in whether or not her brother gained an advantage by using the procedure. If so, then appropriate hypotheses would be p < 0.5 (penny-spinning favored Arlen) and p ≥ 0.5 (penny-spinning did not favor Arlen). The Actor The states of nature having been partitioned into two hypotheses, it is neces- sary for a decisionmaker (the actor) to choose between them. In the penny- spinning example, the actor is Robin; in the termite example, the actor is the team of researchers; in the legal example, the actor is the jury. Statisticians often describe hypothesis testing as a game that they play against Nature. To study this game in greater detail, it becomes necessary to distinguish between the two hypotheses under consideration. In each example, we declare one hypothesis to be the null hypothesis (H0) and the other to be the alternative hypothesis (H1). Roughly speaking, the logic for determining which hypothesis is H0 and which is H1 is the following: H0 should be the hypothesis to which one defaults if the evidence is equivocal and H1 should be the hypothesis that one requires compelling evidence to embrace. We shall have a great deal more to say about distinguishing null and alternative hypotheses, but for now suppose that we have declared the fol- lowing: (1) H0: the defendant did not commit the murder, (2) H0: the termites are foraging independently, and (3) H0: spinning the penny is fair. Having done so, the game takes the following form: State of Nature H0 H1 Actor’s H0 Type II error Choice H1 Type I error There are four possible outcomes to this game, two of which are favorable and two of which are unfavorable. If the actor chooses H1 when in fact H0 is true, then we say that a Type I error has been committed. If the actor chooses H0 when in fact H1 is true, then we say that a Type II error has been committed. In a criminal trial, a Type I error occurs when a jury convicts a factually innocent defendant and a Type II error occurs when a jury acquits a factually guilty defendant.
  • 211. 9.3. HEURISTICS OF HYPOTHESIS TESTING 209 Innocent Until Proven Guilty Because we are concerned with probabilistic evidence, any decision proce- dure that we devise will occasionally result in error. Obviously, we would like to devise procedures that minimize the probabilities of committing er- rors. Unfortunately, there is an inevitable tradeoff between Type I and Type II error that precludes simultaneously minimizing the probabilities of both types. To appreciate this, consider two juries. The first jury always acquits and the second jury always convicts. Then the first jury never commits a Type I error and the second jury never commits a Type II error. The only way to simultaneously better both juries is to never commit an error of either type, which is impossible with probabilistic evidence. The distinguishing feature of hypothesis testing (and Anglo-American criminal law) is the manner in which it addresses the tradeoff between Type I and Type II error. The Neyman-Pearson formulation of hypothesis testing accords the null hypothesis a privileged status: H0 will be maintained unless there is compelling evidence against it. It is instructive to contrast the asymmetry of this formulation with situations in which neither hypothesis is privileged. In statistics, this is the problem of determining which hypothesis better explains the data. This is discrimination, not hypothesis testing. In law, this is the problem of determining whether the defendant or the plaintiff has the stronger case. This is the criterion in civil suits, not in criminal trials. In the penny-spinning example, Robin required compelling evidence against the privileged null hypothesis that penny-spinning is fair to over- come her scruples about accusing her brother of impropriety. In the termite example, Jones and Trosset required compelling evidence against the privi- leged null hypothesis that two termite species forage independently in order to write a credible article claiming that two species were competing with each other. In a criminal trial, the principle of according the null hypothesis a privileged status has a familiar characterization: the defendant is “innocent until proven guilty.” According the null hypothesis a privileged status is equivalent to declar- ing Type I errors to be more egregious than Type II errors. This connection was eloquently articulated by Justice John Harlan in a 1970 Supreme Court decision: “If, for example, the standard of proof for a criminal trial were a preponderance of the evidence rather than proof beyond a reasonable doubt, there would be a smaller risk of factual errors that result in freeing guilty persons, but a far greater risk of factual errors that result in convicting the innocent.”
  • 212. 210 CHAPTER 9. INFERENCE A preference for Type II errors instead of Type I errors can often be glimpsed in scientific applications. For example, because science is conserva- tive, it is generally considered better to wrongly accept than to wrongly reject the prevailing wisdom that termite species forage independently. Moreover, just as this preference is the foundation of statistical hypothesis testing, so is it a fundamental principle of criminal law. In his famous Commentaries, William Blackstone opined that “it is better that ten guilty persons escape, than that one innocent man suffer;” and in his influential Practical Treatise on the Law of Evidence (1824), Thomas Starkie suggested that “The maxim of the law. . . is that it is better that ninety-nine. . . offenders shall escape than that one innocent man be condemned.” In Reasonable Doubts (1996), Alan Dershowitz quotes both maxims and notes anecdotal evidence that jurors actually do prefer committing Type II to Type I errors: on Prime Time Live (October 4, 1995), O.J. Simpson juror Anise Aschenbach stated, “If we made a mistake, I would rather it be a mistake on the side of a person’s innocence than the other way.” Beyond a Reasonable Doubt To actualize its antipathy to Type I errors, the Neyman-Pearson formulation imposes an upper bound on the maximal probability of Type I error that will be tolerated. This bound is the significance level, conventionally denoted α. The significance level is specified (prior to examining the data) and only decision rules for which the probability of Type I error is no greater than α are considered. Such tests are called level α tests. To fix ideas, we consider the penny-spinning example and specify a signif- icance level of α. Let p denote the significance probability that results from performing the analysis in Section 9.1 and consider a rule that rejects the null hypothesis H0 : p = 0.5 if and only if p ≤ α. Then a Type I error occurs if and only if p = 0.5 and we observe y such that p = P(|Y −50| ≥ |y−50|) ≤ α. We claim that the probability of observing such a y is just α, in which case we have constructed a level α test. To see why this is the case, let W = |Y −50| denote the test statistic. The decision to accept or reject the null hypothesis H0 depends on the observed value, w, of this random variable. Let p(w) = PH0 (W ≥ w) denote the significance probability associated with w. Notice that w is the 1 − p(w) quantile of the random variable W under H0. Let q denote the
  • 213. 9.3. HEURISTICS OF HYPOTHESIS TESTING 211 1 − α quantile of W under H0, i.e., α = PH0 (W ≥ q) . We reject H0 if and only if we observe PH0 (W ≥ w) = p(w) ≤ α = PH0 (W ≥ q) , i.e., if and only w ≥ q. If H0 is true, then the probability of committing a Type I error is precisely PH0 (W ≥ q) = α, as claimed above. We conclude that α quantifies the level of assent that we require to risk rejecting H0, i.e., the significance level specifies how small a significance probability is required in order to conclude that a phenomenon is not a coincidence. In statistics, the significance level α is a number in the interval [0, 1]. It is not possible to quantitatively specify the level of assent required for a jury to risk convicting an innocent defendant, but the legal principle is identical: in a criminal trial, the operative significance level is beyond a reasonable doubt. Starkie (1824) described the possible interpretations of this phrase in language derived from British empirical philosopher John Locke: Evidence which satisfied the minds of the jury of the truth of the fact in dispute, to the entire exclusion of every reasonable doubt, constitute full proof of the fact. . . . Even the most direct evidence can produce nothing more than such a high degree of probability as amounts to moral certainty. From the highest it may decline, by an infinite number of gradations, until it produces in the mind nothing more than a preponderance of assent in favour of the particular fact. The gradations that Starkie described are not intrinsically numeric, but it is evident that the problem of defining reasonable doubt in criminal law is the problem of specifying a significance level in statistical hypothesis testing. In both criminal law and statistical hypothesis testing, actions typically are described in language that acknowledges the privileged status of the null hypothesis and emphasizes that the decision criterion is based on the prob- ability of committing a Type I error. In describing the action of choosing H0, many statisticians prefer the phrase “fail to reject the null hypothesis” to the less awkward “accept the null hypothesis” because choosing H0 does
  • 214. 212 CHAPTER 9. INFERENCE not imply an affirmation that H0 is correct, only that the level of evidence against H0 is not sufficiently compelling to warrant its rejection at signifi- cance level α. In precise analogy, juries render verdicts of “not guilty” rather than “innocent” because acquital does not imply an affirmation that the de- fendant did not commit the crime, only that the level of evidence against the defendant’s innocence was not beyond a reasonable doubt.3 And To a Moral Certainty The Neyman-Pearson formulation of statistical hypothesis testing is a math- ematical abstraction. Part of its generality derives from its ability to accom- modate any specified significance level. As a practical matter, however, α must be specified and we now ask how to do so. In the penny-spinning example, Robin is making a personal decision and is free to choose α as she pleases. In the termite example, the researchers were guided by decades of scientific convention. In 1925, in his extremely influential Statistical Methods for Research Workers, Ronald Fisher4 sug- gested that α = 0.05 and α = 0.01 are often appropriate significance levels. These suggestions were intended as practical guidelines, but they have be- come enshrined (especially α = 0.05) in the minds of many scientists as a sort of Delphic determination of whether or not a hypothesized theory is true. While some degree of conformity is desirable (it inhibits a researcher from choosing—after the fact—a significance level that will permit rejecting the null hypothesis in favor of the alternative in which s/he may be invested), many statisticians are disturbed by the scientific community’s slavish devo- tion to a single standard and by its often uncritical interpretation of the resulting conclusions.5 The imposition of an arbitrary standard like α = 0.05 is possible be- cause of the precision with which mathematics allows hypothesis testing to be formulated. Applying this precision to legal paradigms reveals the issues 3 In contrast, Scottish law permits a jury to return a verdict of “not proven,” thereby reserving a verdict of “not guilty” to affirm a defendant’s innocence. 4 Sir Ronald Fisher is properly regarded as the single most important figure in the history of statistics. It should be noted that he did not subscribe to all of the particulars of the Neyman-Pearson formulation of hypothesis testing. His fundamental objection to it, that it may not be possible to fully specify the alternative hypothesis, does not impact our development, since we are concerned with situations in which both hypotheses are fully specified. 5 See, for example, J. Cohen (1994). The world is round (p < .05). American Psychol- ogist, 49:997–1003.
  • 215. 9.3. HEURISTICS OF HYPOTHESIS TESTING 213 with great clarity, but is of little practical value when specifying a signifi- cance level, i.e., when trying to define the meaning of “beyond a reasonable doubt.” Nevertheless, legal scholars have endeavored for centuries to po- sition “beyond a reasonable doubt” along the infinite gradations of assent that correspond to the continuum [0, 1] from which α is selected. The phrase “beyond a reasonable doubt” is still often connected to the archaic phrase “to a moral certainty.” This connection survived because moral certainty was actually a significance level, intended to invoke an enormous body of scholarly writings and specify a level of assent: Throughout this development two ideas to be conveyed to the jury have been central. The first idea is that there are two realms of human knowledge. In one it is possible to obtain the absolute certainty of mathematical demonstration, as when we say that the square of the hypotenuse is equal to the sum of the squares of the other two sides of a right triangle. In the other, which is the empirical realm of events, absolute certainty of this kind is not possible. The second idea is that, in this realm of events, just because absolute certainty is not possible, we ought not to treat everything as merely a guess or a matter of opinion. Instead, in this realm there are levels of certainty, and we reach higher levels of certainty as the quantity and quality of the evidence available to us increase. The highest level of certainty in this empirical realm in which no absolute certainty is possible is what traditionally was called “moral certainty,” a certainty which there was no reason to doubt.6 Although it is rarely (if ever) possible to quantify a juror’s level of as- sent, those comfortable with statistical hypothesis testing may be inclined to wonder what values of α correspond to conventional interpretations of reasonable doubt. If a juror believes that there is a 5 percent probability that chance alone could have produced the circumstantial evidence presented against a defendant accused of pre-meditated murder, is the juror’s level of assent beyond a reasonable doubt and to a moral certainty? We hope not. We may be willing to tolerate a 5 percent probability of a Type I error when studying termite foraging behavior, but the analogous prospect of a 5 6 Barbara J. Shapiro (1991). “Beyond Reasonable Doubt” and “Probable Cause”: His- torical Perspectives on the Anglo-American Law of Evidence, University of California Press, Berkeley, p. 41.
  • 216. 214 CHAPTER 9. INFERENCE percent probability of wrongly convicting a factually innocent defendant is abhorrent.7 In fact, little is known about how anyone in the legal system quantifies reasonable doubt. Mary Gray cites a 1962 Swedish case in which a judge try- ing an overtime parking case explicitly ruled that a significance probability of 1/20736 was beyond reasonable doubt but that a significance probabil- ity of 1/144 was not.8 In contrast, Alan Dershowitz relates a provocative classroom exercise in which his students preferred to acquit in one scenario with a significance probability of 10 percent and to convict in an analogous scenario with a significance probability of 15 percent.9 9.4 Testing Hypotheses About a Population Mean We now apply the heuristic reasoning described in Section 9.3 to the problem of testing hypotheses about a population mean. Initially, we consider testing H0 : µ = µ0 versus H1 : µ 6= µ0. The intuition that we are seeking to formalize is fairly straightfoward. By virtue of the Weak Law of Large Numbers, the observed sample mean ought to be fairly close to the true population mean. Hence, if the null hypothesis is true, then x̄n ought to be fairly close to the hypothesized mean, µ0. If we observe X̄n = x̄n far from µ0, then we guess that µ 6= µ0, i.e., we reject H0. Given a significance level α, we want to calculate a significance probabil- ity p. The significance level is a real number that is fixed by and known to the researcher, e.g., α = 0.05. The significance probability is a real number that is determined by the sample, e.g., p . = 0.0004 in Section 9.1. We will reject H0 if and only if p ≤ α. In Section 9.3, we interpreted the significance probability as the prob- ability that chance would produce a coincidence at least as extraordinary as the phenomenon observed. Our first challenge is to make this notion mathematically precise; how we do so depends on the hypotheses that we 7 This discrepancy illustrates that the consequences of committing a Type I error in- fluence the choice of a significance level. The consequences of Jones and Trosset wrongly concluding that termite species compete are not commensurate with the consequences of wrongly imprisoning a factually innocent citizen. 8 M.W. Gray (1983). Statistics and the law. Mathematics Magazine, 56:67–81. As a graduate of Rice University, I cannot resist quoting another of Gray’s examples of statistics- as-evidence: “In another case, that of millionaire W. M. Rice, the signature on his will was disputed, and the will was declared a forgery on the basis of probability evidence. As a result, the fortune of Rice went to found Rice Institute.” 9 A.M. Dershowitz (1996). Reasonable Doubts, Simon & Schuster, New York, p. 40.
want to test. In the present situation, we submit that a natural significance probability is

  p = Pµ0(|X̄n − µ0| ≥ |x̄n − µ0|).   (9.2)

To understand why this is the case, it is essential to appreciate the following details:

1. The hypothesized mean, µ0, is a real number that is fixed by and known to the researcher.

2. The estimated mean, x̄n, is a real number that is calculated from the observed sample and known to the researcher; hence, the quantity |x̄n − µ0| is a fixed real number.

3. The estimator, X̄n, is a random variable. Hence, the inequality

     |X̄n − µ0| ≥ |x̄n − µ0|   (9.3)

   defines an event that may or may not occur each time the experiment is performed. Specifically, (9.3) is the event that the sample mean assumes a value at least as far from the hypothesized mean as the researcher observed.

4. The significance probability, p, is the probability that (9.3) occurs. The notation Pµ0 reminds us that we are interested in the probability that this event occurs under the assumption that the null hypothesis is true, i.e., under the assumption that µ = µ0.

Having formulated an appropriate significance probability for testing H0 : µ = µ0 versus H1 : µ ≠ µ0, our second challenge is to find a way to compute p. We remind the reader that we have assumed that n is large.

Case 1: The population variance is known or specified by the null hypothesis. We define two new quantities, the random variable

  Zn = (X̄n − µ0) / (σ/√n)

and the real number

  z = (x̄n − µ0) / (σ/√n).
Under the null hypothesis H0 : µ = µ0, Zn is approximately Normal(0, 1) by the Central Limit Theorem; hence,

  p = Pµ0(|X̄n − µ0| ≥ |x̄n − µ0|)
    = 1 − Pµ0(−|x̄n − µ0| < X̄n − µ0 < |x̄n − µ0|)
    = 1 − Pµ0(−|x̄n − µ0|/(σ/√n) < (X̄n − µ0)/(σ/√n) < |x̄n − µ0|/(σ/√n))
    = 1 − Pµ0(−|z| < Zn < |z|)
    ≈ 1 − [Φ(|z|) − Φ(−|z|)]
    = 2Φ(−|z|),

which can be computed by the R command

> 2*pnorm(-abs(z))

or by consulting a table. An illustration of the normal probability of interest is sketched in Figure 9.1.

[Figure 9.1: P(|Z| ≥ |z| = 1.5). The figure shows a standard normal density curve for −4 ≤ z ≤ 4 (vertical scale 0.0 to 0.4), with the two-tailed area beyond ±1.5 indicated.]

An important example of Case 1 occurs when Xi ∼ Bernoulli(µ). In this case, σ² = Var Xi = µ(1 − µ); hence, under the null hypothesis that µ = µ0,
σ² = µ0(1 − µ0) and

  z = (x̄n − µ0) / √(µ0(1 − µ0)/n).

Example 9.1 To test H0 : µ = 0.5 versus H1 : µ ≠ 0.5 at significance level α = 0.05, we perform n = 2500 trials and observe 1200 successes. Should H0 be rejected?

The observed proportion of successes is x̄n = 1200/2500 = 0.48, so the value of the test statistic is

  z = (0.48 − 0.50) / √(0.5(1 − 0.5)/2500) = −0.02 / (0.5/50) = −2

and the significance probability is

  p ≈ 2Φ(−2) ≈ 0.0455 < 0.05 = α.

Because p ≤ α, we reject H0.

Case 2: The population variance is unknown. Because σ² is unknown, we must estimate it from the sample. We will use the estimator introduced in Section 9.2,

  Sn² = (1/(n − 1)) Σi=1,…,n (Xi − X̄n)²,

and define

  Tn = (X̄n − µ0) / (Sn/√n).

Because Sn² is a consistent estimator of σ², i.e., Sn² → σ² in probability, it follows from Theorem 8.3 that

  limₙ→∞ P(Tn ≤ z) = Φ(z).

Just as we could use a normal approximation to compute probabilities involving Zn, so can we use a normal approximation to compute probabilities involving Tn. The fact that we must estimate σ² slightly degrades the quality of the approximation; however, because n is large, we should obtain an accurate estimate of σ² and the approximation should not suffer much. Accordingly, we proceed as in Case 1, using

  t = (x̄n − µ0) / (sn/√n)

instead of z.
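Before turning to an example of Case 2, the arithmetic of Example 9.1 (Case 1) can be reproduced with a short R transcript. This is only a sketch; the variable names below are our own, not the text's.

> n <- 2500; y <- 1200; mu0 <- 0.5
> xbar <- y/n                                  # observed proportion, 0.48
> z <- (xbar - mu0) / sqrt(mu0*(1 - mu0)/n)    # test statistic, sigma specified by H0
> z
[1] -2
> 2*pnorm(-abs(z))                             # two-sided significance probability
[1] 0.04550026

Because the significance probability is less than α = 0.05, H0 is rejected, exactly as in the example.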
  • 220. 218 CHAPTER 9. INFERENCE Example 9.2 To test H0 : µ = 20 versus H1 : µ 6= 20 at significance level α = 0.05, we collect n = 400 observations, observing x̄n = 21.82935 and sn = 24.70037. Should H0 be rejected? The value of the test statistic is t = 21.82935 − 20 24.70037/20 = 1.481234 and the significance probability is p . = 2Φ(−1.481234) = 0.1385441 > 0.05 = α. Because p > α, we decline to reject H0. 9.4.1 One-Sided Hypotheses In Section 9.3 we suggested that, if Robin is not interested in whether or not penny-spinning is fair but rather in whether or not it favors her brother, then appropriate hypotheses would be p < 0.5 (penny-spinning favors Arlen) and p ≥ 0.5 (penny-spinning does not favor Arlen). These are examples of one-sided (as opposed to two-sided) hypotheses. More generally, we will consider two canonical cases: H0 : µ ≤ µ0 versus H1 : µ > µ0 H0 : µ ≥ µ0 versus H1 : µ < µ0 Notice that the possibility of equality, µ = µ0, belongs to the null hypothesis in both cases. This is a technical necessity that arises because we compute significance probabilities using the µ in H0 that is nearest H1. For such a µ to exist, the boundary between H0 and H1 must belong to H0. We will return to this necessity later in this section. Instead of memorizing different formulas for different situations, we will endeavor to understand which values of our test statistic tend to undermine the null hypothesis in question. Such reasoning can be used on a case-by- case basis to determine the relevant significance probability. In so doing, sketching crude pictures can be quite helpful! Consider testing each of the following: (a) H0 : µ = µ0 versus H1 : µ 6= µ0 (b) H0 : µ ≤ µ0 versus H1 : µ > µ0 (c) H0 : µ ≥ µ0 versus H1 : µ < µ0 Qualitatively, we will be inclined to reject the null hypothesis if
  • 221. 9.4. TESTING HYPOTHESES ABOUT A POPULATION MEAN 219 (a) We observe x̄n ≪ µ0 or x̄n ≫ µ0, i.e., if we observe |x̄n − µ0| ≫ 0. This is equivalent to observing |t| ≫ 0, so the significance probability is pa = Pµ0 (|Tn| ≥ |t|) . (b) We observe x̄n ≫ µ0, i.e., if we observe x̄n − µ0 ≫ 0. This is equivalent to observing t ≫ 0, so the significance probability is pb = Pµ0 (Tn ≥ t) . (c) We observe x̄n ≪ µ0, i.e., if we observe x̄n − µ0 ≪ 0. This is equivalent to observing t ≪ 0, so the significance probability is pc = Pµ0 (Tn ≤ t) . Example 9.2 (continued) Applying the above reasoning, we obtain the significance probabilities sketched in Figure 9.2. Notice that pb = pa/2 and that pb + pc = 1. The probability pb is fairly small, about 7%. This makes sense: we observed x̄n . = 21.8 > 20 = µ0, so the sample does contain some evidence that µ > 20. However, the statistical test reveals that the strength of this evidence is not sufficiently compelling to reject H0 : µ ≤ 20. In contrast, the probability of pc is quite large, about 93%. This also makes sense, because the sample contains no evidence that µ < 20. In such instances, performing a statistical test only confirms that which is transpar- ent from comparing the sample and hypothesized means. 9.4.2 Formulating Suitable Hypotheses Examples 9.1 and 9.2 illustrated the mechanics of hypothesis testing. Once understood, the above techniques for calculating significance probabilities are fairly straightforward and can be applied routinely to a wide variety of problems. In contrast, determining suitable hypotheses to be tested requires one to carefully consider each situation presented. These determinations cannot be reduced to formulas. To make them requires good judgment, which can only be acquired through practice. We now consider some examples that illustrate some important issues that arise when formulating hypotheses. In each case, there are certain key questions that must be answered: Why was the experiment performed? Who needs to be convinced of what? Is one type of error perceived as more important than the other?
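Before taking up these examples, note that the three significance probabilities computed for Example 9.2, and sketched in Figure 9.2 below, can be reproduced with a few lines of R. The transcript is a sketch that relies on the large-sample normal approximation; the variable names are our own.

> xbar <- 21.82935; s <- 24.70037; n <- 400; mu0 <- 20
> t <- (xbar - mu0) / (s/sqrt(n))
> round(2*pnorm(-abs(t)), 4)   # (a) H1: mu != 20, two-sided
[1] 0.1385
> round(1 - pnorm(t), 4)       # (b) H1: mu > 20
[1] 0.0693
> round(pnorm(t), 4)           # (c) H1: mu < 20
[1] 0.9307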
[Figure 9.2: Significance probabilities for Example 9.2. Each significance probability is the area of the corresponding shaded region. Three panels, (a), (b), and (c), each show a standard normal density with the region corresponding to pa, pb, or pc shaded.]

Example 9.3 A group of concerned parents wants speed humps installed in front of a local elementary school, but the city traffic office is reluctant to allocate funds for this purpose. Both parties agree that humps should be installed if the average speed of all motorists who pass the school while it is in session exceeds the posted speed limit of 15 miles per hour (mph). Let µ denote the average speed of the motorists in question. A random sample of n = 150 of these motorists was observed to have a sample mean of x̄ = 15.3 mph with a sample standard deviation of s = 2.5 mph.

(a) State null and alternative hypotheses that are appropriate from the parents' perspective.

(b) State null and alternative hypotheses that are appropriate from the city traffic office's perspective.

(c) Compute the value of an appropriate test statistic.

(d) Adopting the parents' perspective and assuming that they are willing to risk a 1% chance of committing a Type I error, what action should be taken? Why?
  • 223. 9.4. TESTING HYPOTHESES ABOUT A POPULATION MEAN 221 (e) Adopting the city traffic office’s perspective and assuming that they are willing to risk a 10% chance of committing a Type I error, what action should be taken? Why? Solution (a) The parents would prefer to err on the side of protecting their chil- dren, so they would rather build unnecessary speed humps than forego necessary speed humps. Hence, they would like to see the hypothe- ses formulated so that foregoing necessary speed humps is a Type I error. Since speed humps will be built if it is concluded that µ > 15 and will not be built if it is concluded that µ < 15, the parents would prefer a null hypothesis of H0 : µ ≥ 15 and an alternative hypothesis of H1 : µ < 15. Equivalently, if we suppose that the purpose of the experiment is to provide evidence to the parents, then it is clear that the parents need to be persuaded that speed humps are unnecessary. The null hypothesis to which they will default in the absence of compelling evidence is H0 : µ ≥ 15. They will require compelling evidence to the contrary, H1 : µ < 15. (b) The city traffic office would prefer to err on the side of conserving their budget for important public works, so they would rather forego neces- sary speed humps than build unnecessary speed humps. Hence, they would like to see the hypotheses formulated so that building unneces- sary speed humps is a Type I error. Since speed humps will be built if it is concluded that µ > 15 and will not be built if it is concluded that µ < 15, the city traffic office would prefer a null hypothesis of H0 : µ ≤ 15 and an alternative hypothesis of H1 : µ > 15. Equivalently, if we suppose that the purpose of the experiment is to provide evidence to the city traffic, then it is clear that the office needs to be persuaded that speed humps are necessary. The null hypothesis to which it will default in the absence of compelling evidence is H0 : µ ≤ 15. It will require compelling evidence to the contrary, H1 : µ > 15. (c) Because the population variance is unknown, the appropriate test sta- tistic is t = x̄ − µ0 s/ √ n = 15.3 − 15 2.5/ √ 150 . = 1.47.
  • 224. 222 CHAPTER 9. INFERENCE (d) We would reject the null hypothesis in (a) if x̄ is sufficiently smaller than µ0 = 15. Since x̄ = 15.3 > 15, there is no evidence against H0 : µ ≥ 15. The null hypothesis is retained and speed humps are installed. (e) We would reject the null hypothesis in (b) if x̄ is sufficiently larger than µ0 = 15, i.e., for sufficiently large positive values of t. Hence, the significance probability is p = P (Tn ≥ t) . = P(Z ≥ 1.47) = 1 − Φ(1.47) . = 0.071 < 0.10 = α. Because p ≤ α, the traffic office should reject H0 : µ ≤ 15 and install speed humps. Example 9.4 Imagine a variant of the Lanarkshire milk experiment described in Section 1.2. Suppose that it is known that 10-year-old Scottish schoolchildren gain an average of 0.5 pounds per month. To study the effect of daily milk supplements, a random sample of n = 1000 such children is drawn. Each child receives a daily supplement of 3/4 cups pasteurized milk. The study continues for four months and the weight gained by each student during the study period is recorded. Formulate suitable null and alternative hypotheses for testing the effect of daily milk supplements. Solution Let X1, . . . , Xn denote the weight gains and let µ = EXi. Then milk supplements are effective if µ > 2 and ineffective if µ < 2. One of these possibilities will be declared the null hypothesis, the other will be de- clared the alternative hypothesis. The possibility µ = 2 will be incorporated into the null hypothesis. The alternative hypothesis should be the one for which compelling ev- idence is desired. Who needs to be convinced of what? The parents and teachers already believe that daily milk supplements are beneficial and would have to be convinced otherwise. But this is not the purpose of the study! The study is performed for the purpose of obtaining objective scientific evi- dence that supports prevailing popular wisdom. It is performed to convince government bureaucrats that spending money on daily milk supplements for schoolchildren will actually have a beneficial effect. The parents and teach- ers hope that the study will provide compelling evidence of this effect. Thus, the appropriate alternative hypothesis is H1 : µ > 2 and the appropriate null hypothesis is H0 : µ ≤ 2.
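Returning to the numbers in Example 9.3, the calculations in parts (c)–(e) can be checked with a short R transcript. This is a sketch that uses the large-sample normal approximation; the variable names are our own.

> xbar <- 15.3; s <- 2.5; n <- 150; mu0 <- 15
> t <- (xbar - mu0) / (s/sqrt(n))
> round(t, 2)                 # test statistic, part (c)
[1] 1.47
> round(pnorm(t), 3)          # parents: H0: mu >= 15 versus H1: mu < 15
[1] 0.929
> round(1 - pnorm(t), 3)      # traffic office: H0: mu <= 15 versus H1: mu > 15
[1] 0.071

The parents' significance probability is far too large to reject H0 : µ ≥ 15, while the traffic office's probability of about 0.071 is smaller than its significance level of α = 0.10; as in the solution above, both analyses lead to installing the speed humps.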
  • 225. 9.4. TESTING HYPOTHESES ABOUT A POPULATION MEAN 223 9.4.3 Statistical Significance and Material Significance The significance probability is the probability that a coincidence at least as extraordinary as the phenomenon observed can be produced by chance. The smaller the significance probability, the more confidently we reject the null hypothesis. However, it is one thing to be convinced that the null hypothesis is incorrect—it is something else to assert that the true state of nature is very different from the state(s) specified by the null hypothesis. Example 9.5 A government agency requires prospective advertisers to provide statistical evidence that documents their claims. In order to claim that a gasoline additive increases mileage, an advertiser must fund an inde- pendent study in which n vehicles are tested to see how far they can drive, first without and then with the additive. Let Xi denote the increase in miles per gallon (mpg with the additive minus mpg without the additive) observed for vehicle i and let µ = EXi. The null hypothesis H0 : µ ≤ 1 is tested against the alternative hypothesis H1 : µ > 1 and advertising is authorized if H0 is rejected at a significance level of α = 0.05. Consider the experiences of two prospective advertisers: 1. A large corporation manufactures an additive that increases mileage by an average of µ = 1.01 miles per gallon. The corporation funds a large study of n = 900 vehicles in which x̄ = 1.01 and s = 0.1 are observed. This results in a test statistic of t = x̄ − µ0 s/ √ n = 1.01 − 1.00 0.1/ √ 900 = 3 and a significance probability of p = P (Tn ≥ t) . = P(Z ≥ 3) = 1 − Φ(3) . = 0.00135 < 0.05 = α. The null hypothesis is decisively rejected and advertising is authorized. 2. An amateur automotive mechanic invents an additive that increases mileage by an average of µ = 1.21 miles per gallon. The mechanic funds a small study of n = 9 vehicles in which x̄ = 1.21 and s = 0.4 are observed. This results in a test statistic of t = x̄ − µ0 s/ √ n = 1.21 − 1.00 0.4/ √ 9 = 1.575
  • 226. 224 CHAPTER 9. INFERENCE and (assuming that the normal approximation remains valid) a signif- icance probability of p = P (Tn ≥ t) . = P(Z ≥ 1.575) = 1 − Φ(1.575) . = 0.05763 > 0.05 = α. The null hypothesis is not rejected and advertising is not authorized. These experiences are highly illuminating. Although the corporation’s mean increase of µ = 1.01 mpg is much closer to the null hypothesis than the mechanic’s mean increase of µ = 1.21 mpg, the corporation’s study resulted in a much smaller significance probability. This occurred because of the smaller standard deviation and larger sample size in the corporation’s study. As a result, the government could be more confident that the corporation’s product had a mean increase of more than 1.0 mpg than they could be that the mechanic’s product had a mean increase of more than 1.0 mpg. The preceding example illustrates that a small significance probability does not imply a large physical effect and that a large physical effect does not imply a small significance probability. To avoid confusing these two con- cepts, statisticians distinguish between statistical significance and material significance (importance). To properly interpret the results of hypothesis testing, it is essential that one remember: Statistical significance is not the same as material significance. 9.5 Set Estimation Hypothesis testing is concerned with situations that demand a binary deci- sion, e.g., whether or not to install speed humps in front of an elementary school. The relevance of hypothesis testing in situations that do not demand a binary decision is somewhat less clear. For example, many statisticians feel that the scientific community overuses hypothesis testing and that other types of statistical inference are often more appropriate. As we have dis- cussed, a typical application of hypothesis testing in science partitions the states of nature into two sets, one that corresponds to a theory and one that corresponds to chance. Usually the theory encompasses a great many possible states of nature and the mere conclusion that the theory is true only begs the question of which states of nature are actually plausible. Fur- thermore, it is a rather fanciful conceit to imagine that a single scientific article should attempt to decide whether a theory is or is not true. A more
sensible enterprise for the authors to undertake is simply to set forth the evidence that they have discovered and allow evidence to accumulate until the scientific community reaches a consensus. One way to accomplish this is for each article to identify what its authors consider a set of plausible values for the population quantity in question.

To construct a set of plausible values of µ, we imagine testing H0 : µ = µ0 versus H1 : µ ≠ µ0 for every µ0 ∈ (−∞, ∞) and eliminating those µ0 for which H0 : µ = µ0 is rejected. To see where this leads, let us examine our decision criterion in the case that σ is known: we reject H0 : µ = µ0 if and only if

  p = Pµ0(|X̄n − µ0| ≥ |x̄n − µ0|) ≈ 2Φ(−|z|) ≤ α,   (9.4)

where z = (x̄n − µ0)/(σ/√n). Using the symmetry of the normal distribution, we can rewrite condition (9.4) as

  α/2 ≥ Φ(−|z|) = P(Z < −|z|) = P(Z > |z|),

which in turn is equivalent to the condition

  Φ(|z|) = P(Z < |z|) = 1 − P(Z > |z|) ≥ 1 − α/2,   (9.5)

where Z ∼ Normal(0, 1). Now let q denote the 1 − α/2 quantile of Normal(0, 1), so that Φ(q) = 1 − α/2. Then condition (9.5) obtains if and only if |z| ≥ q. We express this by saying that q is the critical value of the test statistic |Zn|, where Zn = (X̄n − µ0)/(σ/√n). For example, suppose that α = 0.05, so that 1 − α/2 = 0.975. Then the critical value is computed in R as follows:

> qnorm(.975)
[1] 1.959964

Given a significance level α and the corresponding q, we have determined that q is the critical value of |Zn| for testing H0 : µ = µ0 versus H1 : µ ≠ µ0 at significance level α. Thus, we reject H0 : µ = µ0 if and only if (iff)

  |(x̄n − µ0)/(σ/√n)| = |z| ≥ q
  iff |x̄n − µ0| ≥ qσ/√n
  iff µ0 ∉ (x̄n − qσ/√n, x̄n + qσ/√n).
Thus, the desired set of plausible values is the interval

  (x̄n − q·σ/√n, x̄n + q·σ/√n).   (9.6)

If σ is unknown, then the argument is identical except that we estimate σ² as

  sn² = (1/(n − 1)) Σi=1,…,n (xi − x̄n)²,

obtaining as the set of plausible values the interval

  (x̄n − q·sn/√n, x̄n + q·sn/√n).   (9.7)

Example 9.2 (continued) A random sample of n = 400 observations is drawn from a population with unknown mean µ and unknown variance σ², resulting in x̄n = 21.82935 and sn = 24.70037. Using a significance level of α = 0.05, determine a set of plausible values of µ.

First, because α = 0.05 is the significance level, q = 1.959964 is the critical value. From (9.7), an interval of plausible values is

  21.82935 ± 1.959964 · 24.70037/√400 = (19.40876, 24.24994).

Notice that 20 ∈ (19.40876, 24.24994), meaning that (as we discovered in Section 9.4) we would accept H0 : µ = 20 at significance level α = 0.05.

Now consider the random interval I, defined in Case 1 (population variance known) by

  I = (X̄n − q·σ/√n, X̄n + q·σ/√n)

and in Case 2 (population variance unknown) by

  I = (X̄n − q·Sn/√n, X̄n + q·Sn/√n).

The probability that this random interval covers the real number µ0 is

  Pµ(µ0 ∈ I) = 1 − Pµ(µ0 ∉ I) = 1 − Pµ(reject H0 : µ = µ0).

If µ = µ0, then the probability of coverage is

  1 − Pµ0(reject H0 : µ = µ0) = 1 − Pµ0(Type I error) ≥ 1 − α.
  • 229. 9.5. SET ESTIMATION 227 Thus, the probability that I covers the true value of the population mean is at least 1−α, which we express by saying that I is a (1−α)-level confidence interval for µ. The level of confidence, 1 − α, is also called the confidence coefficient. We emphasize that the confidence interval I is random and the popu- lation mean µ is fixed, albeit unknown. Each time that the experiment in question is performed, a random sample is observed and an interval is con- structed from it. As the sample varies, so does the interval. Any one such interval, constructed from a single sample, either does or does not contain the population mean. However, if this procedure is repeated a great many times, then the proportion of such intervals that contain µ will be at least 1 − α. Actually observing one sample and constructing one interval from it amounts to randomly selecting one of the many intervals that might or might not contain µ. Because most (at least 1−α) of the intervals do, we can be “confident” that the interval that was actually constructed does contain the unknown population mean. 9.5.1 Sample Size Confidence intervals are often used to determine sample sizes for future ex- periments. Typically, the researcher specifies a desired confidence level, 1−α, and a desired interval length, L. After determining the appropriate critical value, q, one equates L with 2qσ/ √ n and solves for n, obtaining n = (2qσ/L)2 . (9.8) Of course, this formula presupposes knowledge of the population variance. In practice, it is usually necessary to replace σ with an estimate—which may be easier said than done if the experiment has not yet been performed. This is one reason to perform a pilot study: to obtain a preliminary estimate of the population variance and use it to design a better study. Several useful relations can be deduced from equation (9.8): 1. Higher levels of confidence (1 − α) correspond to larger critical values (q), which result in larger sample sizes (n). 2. Smaller interval lengths (L) result in larger sample sizes (n). 3. Larger variances (σ2) result in larger sample sizes (n). In summary, if a researcher desires high confidence that the true mean of a highly variable population is covered by a small interval, then s/he should plan on collecting a great deal of data!
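Equation (9.8) is easily packaged as a small R helper. The function below is a sketch (the name sample.size and its arguments are ours, not the text's); it rounds up, because a sample size must be a whole number.

sample.size <- function(sigma, L, alpha = 0.05) {
    # sample size needed for a (1 - alpha)-level confidence interval of
    # length L, from equation (9.8); sigma is the population standard
    # deviation (or a preliminary estimate of it)
    q <- qnorm(1 - alpha/2)
    ceiling((2*q*sigma/L)^2)
}

For instance, sample.size(sigma = 0.4, L = 0.1, alpha = 0.01) returns 425, in agreement with the calculation in Example 9.5 (continued) below.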
  • 230. 228 CHAPTER 9. INFERENCE Example 9.5 (continued) A rival corporation purchases the rights to the amateur mechanic’s additive. How large a study is required to determine this additive’s mean increase in mileage to within 0.05 mpg with a confidence coefficient of 1 − α = 0.99? The desired interval length is L = 2 · 0.05 = 0.1 and the critical value that corresponds to α = 0.01 is computed in R as follows: > qnorm(1-.01/2) [1] 2.575829 From the mechanic’s small pilot study, we estimate σ to be s = 0.4. Then n = (2 · 2.575829 · 0.4/0.1)2 . = 424.6, so the desired study will require n = 425 vehicles. 9.5.2 One-Sided Confidence Intervals The set of µ0 for which we would accept the null hypothesis H0 : µ = µ0 when tested against the two-sided alternative hypothesis H1 : µ 6= µ0 is a tra- ditional, 2-sided confidence interval. In situations where 1-sided alternatives are appropriate, we can construct corresponding 1-sided confidence intervals by determining the set of µ0 for which the appropriate null hypothesis would be accepted. Example 9.5 (continued) The government test has a significance level of α = 0.05. It rejects the null hypothesis H0 : µ ≤ µ0 if and only if (iff) p = P(Z ≥ t) ≤ 0.05 iff P(Z < t) ≥ 0.95 iff t ≥ qnorm(0.95) . = 1.645. Equivalently, the null hypothesis H0 : µ ≤ µ0 is accepted if and only if t = x̄ − µ0 s/ √ n < 1.645 iff x̄ < µ0 + 1.645 · s √ n iff µ0 > x̄ − 1.645 · s √ n .
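In R, the lower endpoint of this one-sided confidence interval can be computed directly from the acceptance condition just derived. The helper below is a sketch (the function and argument names are ours); it reproduces the two intervals computed in the cases that follow.

lower.bound <- function(xbar, s, n, alpha = 0.05) {
    # (1 - alpha)-level one-sided lower confidence bound for mu,
    # using the large-sample normal approximation
    xbar - qnorm(1 - alpha) * s/sqrt(n)
}

> lower.bound(xbar = 1.01, s = 0.1, n = 900)
[1] 1.004517
> lower.bound(xbar = 1.21, s = 0.4, n = 9)
[1] 0.9906862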
1. In the case of the large corporation, the null hypothesis H0 : µ ≤ µ0 is accepted if and only if

     µ0 > 1.01 − 1.645 · 0.1/√900 ≈ 1.0045,

   so the 1-sided confidence interval with confidence coefficient 1 − α = 0.95 is (1.0045, ∞).

2. In the case of the amateur mechanic, the null hypothesis H0 : µ ≤ µ0 is accepted if and only if

     µ0 > 1.21 − 1.645 · 0.4/√9 ≈ 0.9907,

   so the 1-sided confidence interval with confidence coefficient 1 − α = 0.95 is (0.9907, ∞).

9.6 Exercises

1. According to The Justice Project, "John Spirko was sentenced to death on the testimony of a witness who was '70 percent certain' of his identification." Formulate this case as a problem in hypothesis testing. What can be deduced about the significance level used to convict Spirko? Does this choice of significance level strike you as suitable for a capital murder trial?

2. Blaise Pascal, the French theologian and mathematician, argued that we cannot know whether or not God exists, but that we must behave as though we do. He submitted that the consequences of wrongly behaving as though God does not exist are greater than the consequences of wrongly behaving as though God does exist, concluding that it is better to err on the side of caution and act as though God exists. This argument is known as Pascal's Wager. Formulate Pascal's Wager as a hypothesis testing problem. What are the Type I and Type II errors? On whom did Pascal place the burden of proof, believers or nonbelievers?

3. Dorothy owns a lovely glass dreidl. Curious as to whether or not it is fairly balanced, she spins her dreidl ten times, observing five gimels and five hehs. Surprised by these results, Dorothy decides to compute
  • 232. 230 CHAPTER 9. INFERENCE the probability that a fair dreidl would produce such aberrant results. Which of the probabilities specified in Exercise 3.7.5 is the most appro- priate choice of a significance probability for this investigation? Why? 4. It is thought that human influenza viruses originate in birds. It is quite possible that, several years ago, a human influenza pandemic was averted by slaughtering 1.5 million chickens brought to market in Hong Kong. Because it is impossible to test each chicken individually, such decisions are based on samples. Suppose that a boy has already died of a bird flu virus apparently contracted from a chicken. Several diseased chickens have already been identified. The health officials would prefer to err on the side of caution and destroy all chickens that might be infected; the farmers do not want this to happen unless it is absolutely necessary. Suppose that both the farmers and the health officals agree that all chickens should be destroyed if more than 2 percent of them are diseased. A random sample of n = 1000 chickens reveals 40 diseased chickens. (a) Let Xi = 1 if chicken i is diseased and Xi = 0 if it is not. Assume that X1, . . . , Xn ∼ P. To what family of probability distributions does P belong? What population parameter indexes this family? Use this parameter to state formulas for µ = EXi and σ2 = Var Xi. (b) State appropriate null and alternative hypotheses from the per- spective of the health officials. (c) State appropriate null and alternative hypotheses from the per- spective of the farmers. (d) Use the value of µ0 in the above hypotheses to compute the value of σ2 under H0. Then compute the value of the test statistic z. (e) Adopting the health officials’ perspective, and assuming that they are willing to risk a 0.1% chance of committing a Type I error, what action should be taken? Why? (f) Adopting the farmers’ perspective, and assuming that they are willing to risk a 10% chance of committing a Type I error, what action should be taken? Why? 5. A company that manufactures light bulbs has advertised that its 75- watt bulbs burn an average of 800 hours before failing. In reaction
  • 233. 9.6. EXERCISES 231 to the company’s advertising campaign, several dissatisfied custom- ers have complained to a consumer watchdog organization that they believe the company’s claim to be exaggerated. The consumer orga- nization must decide whether or not to allocate some of its financial resources to countering the company’s advertising campaign. So that it can make an informed decision, it begins by purchasing and testing 100 of the disputed light bulbs. In this experiment, the 100 light bulbs burned an average of x̄ = 745.1 hours before failing, with a sample standard deviation of s = 238.0 hours. Formulate null and alterna- tive hypotheses that are appropriate for this situation. Calculate a significance probability. Do these results warrant rejecting the null hypothesis at a significance level of α = 0.05? 6. To study the effects of Alzheimer’s disease (AD) on cognition, a scien- tist administers two batteries of neuropsychological tasks to 60 mildly demented AD patients. One battery is administered in the morning, the other in the afternoon. Each battery includes a task in which discourse is elicited by showing the patient a picture and asking the patient to describe it. The quality of the discourse is measured by counting the number of “information units” conveyed by the patient. The scientist wonders if asking a patient to describe Picture A in the morning is equivalent to asking the same patient to describe Picture B in the afternoon, after having described Picture A several hours ear- lier. To investigate, she computes the number of information units for Picture A minus the number of information units for Picture B for each patient. She finds an average difference of x̄ = −0.1833, with a sample standard deviation of s = 5.18633. Formulate null and alter- native hypotheses that are appropriate for this situation. Calculate a significance probability. Do these results warrant rejecting the null hypothesis at a significance level of α = 0.05? 7. Each student in a large statistics class of 600 students is asked to toss a fair coin 100 times, count the resulting number of Heads, and construct a 0.95-level confidence interval for the probability of Heads. Assume that each student uses a fair coin and constructs the confidence interval correctly. True or False: We would expect approximately 570 of the confidence intervals to contain the number 0.5. 8. The USGS decides to use a laser altimeter to measure the height µ of Mt. Wrightson, the highest point in Pima County, Arizona. It is
known that measurements made by the laser altimeter have an expected value equal to µ and a standard deviation of 1 meter. How many measurements should be made if the USGS wants to construct a 0.90-level confidence interval for µ that has a length of 20 centimeters?

9. Professor Johnson is interested in the probability that a certain type of randomly generated matrix has a positive determinant. His student attempts to calculate the probability exactly, but runs into difficulty because the problem requires her to evaluate an integral in 9 dimensions. Professor Johnson therefore decides to obtain an approximate probability by simulation, i.e., by randomly generating some matrices and observing the proportion that have positive determinants. His preliminary investigation reveals that the probability is roughly 0.05. At this point, Professor Johnson decides to undertake a more comprehensive simulation experiment that will, with 0.95-level confidence, correctly determine the probability of interest to within ±0.00001. How many random matrices should he generate to achieve the desired accuracy?

10. Consider a box that contains 10 tickets, labelled {1, 1, 1, 1, 2, 5, 5, 10, 10, 10}. From this box, I propose to draw (with replacement) n = 40 tickets. Let Y denote the sum of the values on the tickets that are drawn. To approximate p = P(170.5 < Y < 199.5), a Math 351 student writes an R function box.model that simulates the proposed experiment. (One possible implementation is sketched after these exercises.) Evaluating box.model is like observing a value, y, of the random variable Y. Then she writes a loop that repeatedly evaluates box.model and computes p̂, the proportion of times that box.model produces y ∈ (170.5, 199.5). The student intends to construct a 0.95-level confidence interval for p. If she desires an interval of length L, then how many times should she plan to evaluate box.model? Hint: How else might the student estimate p?

11. In September 2003, Lena spun a penny 89 times and observed 2 Heads. Let p denote the true probability that one spin of her penny will result in Heads.
(a) The significance probability for testing H0 : p ≥ 0.3 versus H1 : p < 0.3 is p = P(Y ≤ 2), where Y ∼ Binomial(89; 0.3).
  • 235. 9.6. EXERCISES 233 i. Compute p as in Section 9.1, using the binomial distribution and pbinom. ii. Approximate p as in Section 9.4, using the normal distribu- tion and pnorm. How good is this approximation? (b) Construct a 1-sided confidence interval for p by determining for which values of p0 the null hypothesis H0 : p ≥ p0 would be accepted at a significance level of (approximately) α = 0.05.
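The box model in Exercise 10 is easily simulated. The following sketch shows one possible implementation of the box.model function described there; the function name comes from the exercise, while the details (the number of repetitions and the use of sample and replicate) are merely one reasonable choice.

box <- c(1, 1, 1, 1, 2, 5, 5, 10, 10, 10)      # the 10 tickets
box.model <- function(n = 40) {
  # draw n tickets with replacement and return their sum, one realization of Y
  sum(sample(box, size = n, replace = TRUE))
}
y <- replicate(10000, box.model())             # repeated evaluations of box.model
p.hat <- mean(y > 170.5 & y < 199.5)           # proportion of draws with y in (170.5, 199.5)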
  • 237. Chapter 10 1-Sample Location Problems The basic ideas associated with statistical inference were introduced in Chap- ter 9. We developed these ideas in the context of drawing inferences about a single population mean, and we assumed that the sample was large enough to justify appeals to the Central Limit Theorem for normal approximations. The population mean is a natural measure of centrality, but it is not the only one. Furthermore, even if we are interested in the population mean, our sample may be too small to justify the use of a large-sample normal approximation. The purpose of the next several chapters is to explore more thoroughly how statisticians draw inferences about measures of centrality. Measures of centrality are sometimes called location parameters. The title of this chapter indicates an interest in a location parameter of a single population. More specifically, we assume that X1, . . . , Xn ∼ P are inde- pendently and identically distributed, we observe a random sample ~ x = {x1, . . . , xn}, and we attempt to draw an inference about a location param- eter of P. Because it is not always easy to identify the relevant population in a particular experiment, we begin with some examples. Our analysis of these examples is clarified by posing the following four questions: 1. What are the experimental units, i.e., what are the objects that are being measured? 2. From what population (or populations) were the experimental units drawn? 3. What measurements were taken on each experimental unit? 4. What random variables are relevant to the specified inference? 235
  • 238. 236 CHAPTER 10. 1-SAMPLE LOCATION PROBLEMS For the sake of specificity, we assume that the location parameter of interest in the following examples is the population median, q2(P). Example 10.1 A machine is supposed to produce ball bearings that are 1 millimeter in diameter. To determine if the machine was correctly calibrated, a sample of ball bearings is drawn and the diameter of each ball bearing is measured. For this experiment: 1. An experimental unit is a ball bearing. Notice that we are distin- guishing between experimental units, the objects being measured (ball bearings), and units of measurement (e.g., millimeters). 2. There is one population, viz., all ball bearings that might be produced by the designated machine. 3. One measurement (diameter) is taken on each experimental unit. 4. Let Xi denote the diameter of ball bearing i. Then X1, . . . , Xn ∼ P and we are interested in drawing inferences about q2(P), the population median diameter. For example, we might test H0 : q2(P) = 1 against H1 : q2(P) 6= 1. Example 10.2 A drug is supposed to lower blood pressure. To deter- mine if it does, a sample of hypertensive patients are administered the drug for two months. Each person’s blood pressure is measured before and after the two month period. For this experiment: 1. An experimental unit is a patient. 2. There is one population of hypertensive patients. (It may be difficult to discern the precise population that was actually sampled. All hy- pertensive patients? All Hispanic male hypertensive patients who live in Houston, TX? All Hispanic male hypertensive patients who live in Houston, TX, and who are sufficiently well-disposed to the medical establishment to participate in the study? In published journal arti- cles, scientists are often rather vague about just what population was actually sampled.) 3. Two measurements (blood pressure before and after treatment) are taken on each experimental unit. Let Bi and Ai denote the blood pressures of patient i before and after treatment.
  • 239. 237 4. Let Xi = Bi − Ai, the decrease in blood pressure for patient i. Then X1, . . . , Xn ∼ P and we are interested in drawing inferences about q2(P), the population median decrease. For example, we might test H0 : q2(P) ≤ 0 against H1 : q2(P) > 0. Example 10.3 A graduate student investigated the effect of Parkin- son’s disease (PD) on speech breathing. She recruited 16 PD patients to participate in her study. She also recruited 16 normal control (NC) subjects. Each NC subject was carefully matched to one PD patient with respect to sex, age, height, and weight. The lung volume of each study participant was measured. For this experiment: 1. An experimental unit was a matched PD-NC pair. 2. The population comprises all possible PD-NC pairs that satisfy the study criteria. 3. Two measurements (PD and NC lung volume) were taken on each experimental unit. Let Di and Ci denote the PD and NC lung volumes of pair i. 4. Let Xi = log(Di/Ci) = log Di − log Ci, the logarithm of the PD pro- portion of NC lung volume. (This is not the only way of comparing Di and Ci, but it worked well in this investigation. Ratios can be difficult to analyze and logarithms convert ratios to differences. Furthermore, lung volume data tend to be skewed to the right. As in Exercise 2 of Section 7.6, logarithmic transformations of such data often have a symmetrizing effect.) Then X1, . . . , Xn ∼ P and we are interested in drawing inferences about q2(P). For example, to test the theory that PD restricts lung volume, we might test H0 : q2(P) ≥ 0 against H1 : q2(P) < 0. This chapter is divided into sections according to distributional assump- tions about the Xi: 10.1 If the data are assumed to be normally distributed, then we will be interested in inferences about the population’s center of symmetry, which we will identify as the population mean. 10.3 If the data are only assumed to be symmetrically distributed, then we will also be interested in inferences about the population’s center of symmetry, but we will identify it as the population median.
  • 240. 238 CHAPTER 10. 1-SAMPLE LOCATION PROBLEMS 10.2 If the data are only assumed to be continuously distributed, then we will be interested in inferences about the population median. Each section is subdivided into subsections, according to the type of inference (point estimation, hypothesis testing, set estimation) at issue. 10.1 The Normal 1-Sample Location Problem In this section we assume that P = Normal(µ, σ2). As necessary, we will distinguish between cases in which σ is known and cases in which σ is un- known. 10.1.1 Point Estimation Because normal distributions are symmetric, the location parameter µ is the center of symmetry and therefore both the population mean and the popu- lation median. Hence, there are (at least) two natural estimators of µ, the sample mean X̄n and the sample median q2(P̂n). Both are consistent, unbi- ased estimators of µ. We will compare them by considering their asymptotic relative efficiency (ARE). A rigorous definition of ARE is beyond the scope of this book, but the concept is easily interpreted. If the true distribution is P = N(µ, σ2), then the ARE of the sample median to the sample mean for estimating µ is e(P) = 2 π . = 0.64. This statement has the following interpretation: for large samples, using the sample median to estimate a normal population mean is equivalent to ran- domly discarding approximately 36% of the observations and calculating the sample mean of the remaining 64%. Thus, the sample mean is substantially more efficient than is the sample median at extracting location information from a normal sample. In fact, if P = Normal(µ, σ2), then the ARE of any estimator of µ to the sample mean is ≤ 1. This is sometimes expressed by saying that the sample mean is asymptotically efficient for estimating a normal mean. The sample mean also enjoys a number of other optimal properties in this case. The sample mean is unquestionably the preferred estimator for the normal 1-sample location problem.
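The interpretation of e(P) = 2/π can be glimpsed in a small simulation. The following R fragment is an illustration only; the sample size, number of replications, and seed are arbitrary choices. For normal samples, the variance of the sample mean is roughly 64% of the variance of the sample median.

set.seed(42)                        # arbitrary seed, for reproducibility
n <- 100; reps <- 10000
means   <- replicate(reps, mean(rnorm(n)))
medians <- replicate(reps, median(rnorm(n)))
var(means) / var(medians)           # close to 2/pi = 0.64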
10.1.2 Hypothesis Testing

If σ is known, then the possible distributions of Xi are
{Normal(µ, σ²) : −∞ < µ < ∞}.
If σ is unknown, then the possible distributions of Xi are
{Normal(µ, σ²) : −∞ < µ < ∞, σ > 0}.
We partition the possible distributions into two subsets, the null and alternative hypotheses. For example, if σ is known then we might specify
H0 = {Normal(0, σ²)} and H1 = {Normal(µ, σ²) : µ ≠ 0},
which we would typically abbreviate as H0 : µ = 0 and H1 : µ ≠ 0. Analogously, if σ is unknown then we might specify
H0 = {Normal(0, σ²) : σ > 0} and H1 = {Normal(µ, σ²) : µ ≠ 0, σ > 0},
which we would also abbreviate as H0 : µ = 0 and H1 : µ ≠ 0. More generally, for any real number µ0 we might specify
H0 = {Normal(µ0, σ²)} and H1 = {Normal(µ, σ²) : µ ≠ µ0}
if σ is known, or
H0 = {Normal(µ0, σ²) : σ > 0} and H1 = {Normal(µ, σ²) : µ ≠ µ0, σ > 0}
if σ is unknown. In both cases, we would typically abbreviate these hypotheses as H0 : µ = µ0 and H1 : µ ≠ µ0.
The preceding examples involve two-sided alternative hypotheses. Of course, as in Section 9.4, we might also specify one-sided hypotheses. However, the material in the present section is so similar to the material in Section 9.4 that we will only discuss two-sided hypotheses.
The intuition that underlies testing H0 : µ = µ0 versus H1 : µ ≠ µ0 was discussed in Section 9.4:
• If H0 is true, then we would expect the sample mean to be close to the population mean µ0.
• Hence, if X̄n = x̄n is observed far from µ0, then we are inclined to reject H0.
To make this reasoning precise, we reject H0 if and only if the significance probability
p = Pµ0(|X̄n − µ0| ≥ |x̄n − µ0|) ≤ α.   (10.1)
The first equation in (10.1) is a formula for a significance probability. Notice that this formula is identical to equation (9.2). The one difference between the material in Section 9.4 and the present material lies in how one computes p. For emphasis, we recall the following:
1. The hypothesized mean µ0 is a fixed number specified by the null hypothesis.
2. The estimated mean, x̄n, is a fixed number computed from the sample. Therefore, so is |x̄n − µ0|, the difference between the estimated mean and the hypothesized mean.
3. The estimator, X̄n, is a random variable.
4. The subscript in Pµ0 reminds us to compute the probability under H0 : µ = µ0.
5. The significance level α is a fixed number specified by the researcher, preferably before the experiment was performed.
To apply (10.1), we must compute p. In Section 9.4, we overcame that technical difficulty by appealing to the Central Limit Theorem. This allowed us to approximate p even when we did not know the distribution of the Xi, but only for reasonably large sample sizes. However, if we know that X1, . . . , Xn are normally distributed, then it turns out that we can calculate p exactly, even when n is small.

Case 1: The Population Variance is Known

Under the null hypothesis that µ = µ0, X1, . . . , Xn ∼ Normal(µ0, σ²) and X̄n ∼ Normal(µ0, σ²/n).
  • 243. 10.1. THE NORMAL 1-SAMPLE LOCATION PROBLEM 241 This is the exact distribution of X̄n, not an asymptotic approximation. We convert X̄n to standard units, obtaining Z = X̄n − µ0 σ/ √ n ∼ Normal (0, 1) . (10.2) The observed value of Z is z = x̄n − µ0 σ/ √ n . The significance probability is p = Pµ0 ¡¯ ¯X̄n − µ0 ¯ ¯ ≥ |x̄n − µ0| ¢ = Pµ0 ï ¯ ¯ ¯ ¯ X̄n − µ0 σ/ √ n ¯ ¯ ¯ ¯ ¯ ≥ ¯ ¯ ¯ ¯ x̄n − µ0 σ/ √ n ¯ ¯ ¯ ¯ ! = P (|Z| ≥ |z|) = 2P (Z ≥ |z|) . In this case, the test that rejects H0 if and only if p ≤ α is sometimes called the 1-sample z-test. The random variable Z is the test statistic. Before considering the case of an unknown population variance, we re- mark that it is possible to derive point estimators from hypothesis tests. For testing H0 : µ = µ0 versus H1 : µ 6= µ0, the test statistics are Z(µ0) = X̄n − µ0 σ/ √ n . If we observe X̄n = x̄n, then what value of µ0 minimizes |z(µ0)|? Clearly, the answer is µ0 = x̄n. Thus, our preferred point estimate of µ is the µ0 for which it is most difficult to reject H0 : µ = µ0. This type of reasoning will be extremely useful for analyzing situations in which we know how to test but don’t know how to estimate. Case 2: The Population Variance is Unknown Statement (10.2) remains true if σ is unknown, but it is no longer possible to compute z. Therefore, we require a different test statistic for this case. A natural approach is to modify Z by replacing the unknown σ with an estimator of it. Toward that end, we introduce the test statistic Tn = X̄n − µ0 Sn/ √ n ,
  • 244. 242 CHAPTER 10. 1-SAMPLE LOCATION PROBLEMS where S2 n is the unbiased estimator of the population variance defined by equation (9.1). Because Tn and Z are different random variables, they have different probability distributions and our first order of business is to deter- mine the distribution of Tn. We begin by stating a useful fact: Theorem 10.1 If X1, . . . , Xn ∼ Normal(µ, σ2), then (n − 1)S2 n σ2 = n X i=1 ¡ Xi − X̄n ¢2 /σ2 ∼ χ2 (n − 1). The χ2 (chi-squared) distribution was described in Section 5.5 and Theorem 10.1 is closely related to Theorem 5.3. Next we write Tn = X̄n − µ0 Sn/ √ n = X̄n − µ0 σ/ √ n · σ/ √ n Sn/ √ n = Z · σ Sn = Z/ q S2 n/σ2 = Z/ q [(n − 1)S2 n/σ2] /(n − 1). Using Theorem 10.1, we see that Tn can be written in the form Tn = Z p Y/ν , where Z ∼ Normal(0, 1) and Y ∼ χ2(ν). If Z and Y are independent random variables, then it follows from Definition 5.7 that Tn ∼ t(n − 1). Both Z and Y = (n − 1)S2 n/σ2 depend on X1, . . . , Xn, so one would be inclined to think that Z and Y are dependent. This is usually the case, but it turns out that they are independent if X1, . . . , Xn ∼ Normal(µ, σ2). This is another remarkable property of normal distributions, usually stated as follows: Theorem 10.2 If X1, . . . , Xn ∼ Normal(µ, σ2), then X̄n and S2 n are inde- pendent random variables. The result that interests us can then be summarized as follows: Corollary 10.1 If X1, . . . , Xn ∼ Normal(µ0, σ2), then Tn = X̄n − µ0 Sn/ √ n ∼ t(n − 1).
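Corollary 10.1 can be checked empirically. The simulation below is only an illustration (the sample size, seed, and number of replications are arbitrary choices): it draws many normal samples of size n = 10, computes Tn for each, and compares a simulated quantile of Tn with the corresponding quantile of t(9).

set.seed(1)                         # arbitrary seed
n <- 10; mu0 <- 0; reps <- 20000
tstat <- replicate(reps, {
  x <- rnorm(n, mean = mu0, sd = 1)
  (mean(x) - mu0) / (sd(x) / sqrt(n))
})
quantile(tstat, 0.95)               # simulated 0.95 quantile of Tn ...
qt(0.95, df = n - 1)                # ... should be close to this value, about 1.833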
  • 245. 10.1. THE NORMAL 1-SAMPLE LOCATION PROBLEM 243 Now let tn = x̄n − µ0 sn/ √ n , the observed value of the test statistic Tn. The significance probability is p = Pµ0 (|Tn| ≥ |tn|) = 2Pµ0 (Tn ≥ |tn|) . In this case, the test that rejects H0 if and only if p ≤ α is called Student’s 1-sample t-test. Because it is rarely the case that the population variance is known when the population mean is not, Student’s 1-sample t-test is used much more frequently than the 1-sample z-test. We will use the R function pt to compute significance probabilities for Student’s 1-sample t-test, as illustrated in the following examples. Example 10.4 Suppose that, to test H0 : µ = 0 versus H1 : µ 6= 0 (a 2-sided alternative), we draw a sample of size n = 25 and observe x̄ = 1 and s = 3. Then t = (1 − 0)/(3/ √ 25) = 5/3 and the 2-tailed significance probability is computed using both tails of the t(24) distribution, i.e., p = 2 ∗ pt(−5/3, df = 24) . = 0.1086. Example 10.5 Suppose that, to test H0 : µ ≤ 0 versus H1 : µ > 0 (a 1-sided alternative), we draw a sample of size n = 25 and observe x̄ = 2 and s = 5. Then t = (2 − 0)/(5/ √ 25) = 2 and the 1-tailed significance probability is computed using one tail of the t(24) distribution, i.e., p = 1 − pt(2, df = 24) . = 0.0285. 10.1.3 Interval Estimation As in Section 9.5, we will derive confidence intervals from tests. We imagine testing H0 : µ = µ0 versus H1 : µ 6= µ0 for every µ0 ∈ (−∞, ∞). The µ0 for which H0 : µ = µ0 is rejected are implausible values of µ; the µ0 for which H0 : µ = µ0 is accepted constitute the confidence interval. To accomplish this, we will have to derive the critical values of our tests. A significance level of α will result in a confidence coefficient of 1 − α. Case 1: The Population Variance is Known If σ is known, then we reject H0 : µ = µ0 if and only if p = Pµ0 ¡¯ ¯X̄n − µ0 ¯ ¯ ≥ |x̄n − µ0| ¢ = 2Φ (− |zn|) ≤ α,
  • 246. 244 CHAPTER 10. 1-SAMPLE LOCATION PROBLEMS where zn = (x̄n −µ0)/(σ/ √ n). By the symmetry of the normal distribution, this condition is equivalent to the condition 1 − Φ (− |zn|) = P (Z > − |zn|) = P (Z < |zn|) = Φ (|zn|) ≥ 1 − α/2, where Z ∼ Normal(0, 1), and therefore to the condition |zn| ≥ qz, where qz denotes the 1 − α/2 quantile of Normal(0, 1). The quantile qz is the critical value of the two-sided 1-sample z-test. Thus, given a significance level α and a corresponding critical value qz, we reject H0 : µ = µ0 if and only if (iff) ¯ ¯ ¯ ¯ x̄n − µ0 σ/ √ n ¯ ¯ ¯ ¯ = |zn| ≥ qz iff |x̄n − µ0| ≥ qzσ/ √ n iff µ0 6∈ ¡ x̄n − qzσ/ √ n, x̄n + qzσ/ √ n ¢ and we conclude that the desired set of plausible values is the interval µ x̄n − qz σ √ n , x̄n + qz σ √ n ¶ . Notice that both the preceding derivation and the resulting confidence interval are identical to the derivation and confidence interval in Section 9.5. The only difference is that, because we are now assuming that X1, . . . , Xn ∼ Normal(µ, σ2) instead of relying on the Central Limit Theorem, no approx- imation is required. Example 10.6 Suppose that we desire 90% confidence about µ and σ = 3 is known. Then α = 0.10 and qz . = 1.645. Suppose that we draw n = 25 observations and observe x̄n = 1. Then 1 ± 1.645 3 √ 25 = 1 ± 0.987 = (0.013, 1.987) is a 0.90-level confidence interval for µ. Case 2: The Population Variance is Unknown If σ is unknown, then it must be estimated from the sample. The reasoning in this case is the same, except that we rely on Student’s 1-sample t-test. As before, we use S2 n to estimate σ2. The critical value of the 2-sided 1-sample t-test is qt, the 1−α/2 quantile of a t distribution with n−1 degrees of freedom, and the confidence interval is µ x̄n − qt sn √ n , x̄n + qt sn √ n ¶ .
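Such interval computations are easily packaged in a small helper function. The sketch below is an illustration (the function name and default confidence level are ours, not the text's); Example 10.7, which follows, can be reproduced with ci.t(xbar = 1, s = 3, n = 25, conf = 0.90).

ci.t <- function(xbar, s, n, conf = 0.90) {
  # confidence interval for mu when sigma is unknown (Student's t)
  q <- qt(1 - (1 - conf)/2, df = n - 1)
  xbar + c(-1, 1) * q * s / sqrt(n)
}
ci.t(1, 3, 25)                      # approximately (-0.027, 2.027)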
Example 10.7 Suppose that we desire 90% confidence about µ and σ is unknown. Suppose that we draw n = 25 observations and observe x̄n = 1 and s = 3. Then qt = qt(.95, df = 24) ≈ 1.711 and
1 ± 1.711 × 3/√25 = 1 ± 1.027 = (−0.027, 2.027)
is a 90% confidence interval for µ. Notice that the confidence interval is larger when we use s = 3 instead of σ = 3.

10.2 The General 1-Sample Location Problem

In Section 10.1 we assumed that X1, . . . , Xn ∼ P and P = Normal(µ, σ²). In this section, we again assume that X1, . . . , Xn ∼ P, but now we assume only that the Xi are continuous random variables.
Because P is not assumed to be symmetric, we must decide which location parameter to study. The population median, q2(P), enjoys several advantages. Unlike the population mean, the population median always exists and is not sensitive to the influence of outliers. Furthermore, it turns out that one can develop fairly elementary ways to study medians, even when little is known about the probability distribution P. For simplicity, we will denote the population median by θ.

10.2.1 Hypothesis Testing

It is convenient to begin our study of the general 1-sample location problem with a discussion of hypothesis testing. As in Section 10.1, we initially consider testing a 2-sided alternative, H0 : θ = θ0 versus H1 : θ ≠ θ0. We will explicate a procedure known as the sign test.
The intuition that underlies the sign test is elementary. If the population median is θ = θ0, then when we sample P we should observe roughly half the xi above θ0 and half the xi below θ0. Hence, if we observe proportions of xi above/below θ0 that are very different from one half, then we are inclined to reject the possibility that θ = θ0.
More formally, let p+ = PH0(Xi > θ0) and p− = PH0(Xi < θ0). Because the Xi are continuous, PH0(Xi = θ0) = 0 and therefore p+ = p− = 0.5. Hence, under H0, observing whether Xi > θ0 or Xi < θ0 is equivalent to tossing a fair coin, i.e., to observing a Bernoulli trial with success probability p = 0.5. The sign test is the following procedure:
1. Let ~x = {x1, . . . , xn} denote the observed sample. If the Xi are continuous random variables, then P(Xi = θ0) = 0 and it should be that each xi ≠ θ0. In practice, of course, it may happen that we do observe one or more xi = θ0. For the moment, we assume that ~x contains no such values.
2. Let Y = #{Xi > θ0} = #{Xi − θ0 > 0} be the test statistic. Under H0 : θ = θ0, Y ∼ Binomial(n; p = 0.5). The observed value of the test statistic is y = #{xi > θ0} = #{xi − θ0 > 0}.
3. Notice that EY = n/2. The significance probability is
p = Pθ0(|Y − n/2| ≥ |y − n/2|).
The sign test rejects H0 : θ = θ0 if and only if p ≤ α.
4. To compute p, we first note that |Y − n/2| ≥ |y − n/2| is equivalent to the event
(a) {Y ≤ y or Y ≥ n − y} if y ≤ n/2;
(b) {Y ≥ y or Y ≤ n − y} if y ≥ n/2.
To accommodate both cases, let c = min(y, n − y). Then
p = Pθ0(Y ≤ c) + Pθ0(Y ≥ n − c) = 2Pθ0(Y ≤ c) = 2*pbinom(c,n,.5).

Example 10.8(a) Suppose that we want to test H0 : θ = 100 versus H1 : θ ≠ 100 at significance level α = 0.05, having observed the sample
~x = {98.73, 97.17, 100.17, 101.26, 94.47, 96.39, 99.67, 97.77, 97.46, 97.41}.
Here n = 10, y = #{xi > 100} = 2, and c = min(2, 10 − 2) = 2, so
p = 2*pbinom(2,10,.5) = 0.109375 > 0.05
and we decline to reject H0.
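The computation in Example 10.8(a) is easily scripted. The fragment below is an illustration (the variable names are ours); it reproduces the two-sided sign test just described.

x <- c(98.73, 97.17, 100.17, 101.26, 94.47, 96.39, 99.67, 97.77, 97.46, 97.41)
theta0 <- 100
n <- length(x)
y <- sum(x > theta0)                # observed value of the test statistic
cmin <- min(y, n - y)               # the quantity c in step 4
p <- 2 * pbinom(cmin, n, 0.5)       # two-sided significance probability, 0.109375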
  • 249. 10.2. THE GENERAL 1-SAMPLE LOCATION PROBLEM 247 Example 10.8(b) Now suppose that we want to test H0 : θ ≤ 97 versus H1 : θ > 97 at significance level α = 0.05, using the same data. Here n = 10, y = #{xi > 97} = 8, and c = min(8, 10 − 8) = 2. Because large values of Y are evidence against H0 : θ ≤ 97, p = Pθ0 (Y ≥ y) = Pθ0 (Y ≥ 8) = 1 − Pθ0 (Y ≤ 7) = 1-pbinom(7,10,.5) = 0.0546875 > 0.05 and we decline to reject H0. Thus far we have assumed that the sample contains no values for which xi = θ0. In practice, we may well observe such values. For example, if the measurements in Example 10.8(a) were made less precisely, then we might have observed the following sample: ~ x = {99, 97, 100, 101, 94, 96, 100, 98, 97, 97}. (10.3) If we want to test H0 : θ = 100 versus H1 : θ 6= 100, then we have two values that equal θ0 and the sign test requires modification. We assume that #{xi = θ0} is fairly small; otherwise, the assumption that the Xi are continuous is questionable. We consider two possible ways to proceed: 1. Perhaps the most satisfying solution is to compute all of the signifi- cance probabilities that correspond to different ways of counting the xi = θ0 as larger or smaller than θ0. If there are k observations xi = θ0, then this will produce 2k significance probabilities, which we might av- erage to obtain a single p. 2. Alternatively, let p0 denote the significance probability obtained by counting in the way that is most favorable to H0 (least favorable to H1). This is the largest of the possible significance probabilities, so if p0 ≤ α then we reject H0. Similarly, let p1 denote the significance probability obtained by counting in the way that is least favorable to H0 (most favorable to H1). This is the smallest of the possible significance probabilities, so if p1 > α then we decline to reject H0. If p0 > α ≥ p1, then we simply declare the results to be equivocal. Example 10.8(c) Suppose that we want to test H0 : θ = 100 versus H1 : θ 6= 100 at significance level α = 0.05, having observed the sample
  • 250. 248 CHAPTER 10. 1-SAMPLE LOCATION PROBLEMS (10.3). Here n = 10 and y = #{xi > 100} depends on how we count the observations x3 = x7 = 100. There are 22 = 4 possibilities: possibility y = #{xi > 100} c = min(y, 10 − y) p y3 < 100, y7 < 100 1 1 0.021484 y3 < 100, y7 > 100 2 2 0.109375 y3 > 100, y7 < 100 2 2 0.109375 y3 > 100, y7 > 100 3 3 0.343750 Noting that p0 . = 0.344 > 0.05 > 0.021 . = p1, we might declare the results to be equivocal. However, noting that 3 of the 4 possibilities lead us to accept H0 (and that the average p . = 0.146), we might conclude—somewhat more decisively—that there is insufficient evidence to reject H0. The distinction between these two interpretations is largely rhetorical, as the fundamental logic of hypothesis testing requires that we decline to reject H0 unless there is compelling evidence against it. 10.2.2 Point Estimation Next we consider the problem of estimating the population median. A nat- ural estimate is the plug-in estimate, the sample median. Another approach begins by posing the following question: For what value of θ0 is the sign test least inclined to reject H0 : θ = θ0 in favor of H1 : θ 6= θ0? The answer to this question is also a natural estimate of the population median. In fact, the plug-in and sign-test approaches lead to the same estimation procedure. To understand why, we focus on the case that n is even, in which case n/2 is a possible value of Y = #{Xi > θ0}. If |y − n/2| = 0, then p = P µ¯ ¯ ¯ ¯Y − n 2 ¯ ¯ ¯ ¯ ≥ 0 ¶ = 1. We see that the sign test produces the maximal significance probability of p = 1 when y = n/2, i.e., when θ0 is chosen so that precisely half the observations exceed θ0. This means that the sign test is least likely to reject H0 : θ = θ0 when θ0 is the sample median. (A similar argument leads to the same conclusion when n is odd.) Thus, using the sign test to test hypotheses about population medians corresponds to using the sample median to estimate population medians, just as using Student’s t-test to test hypotheses about population means corresponds to using the sample mean to estimate population means. One
consequence of this remark is that, when the population mean and median are identical, the "Pitman efficiency" of the sign test to Student's t-test equals the asymptotic relative efficiency of the sample median to the sample mean. For example, using the sign test on normal data is asymptotically equivalent to randomly discarding 36% of the observations, then using Student's t-test on the remaining 64%.

10.2.3 Interval Estimation

Finally, we consider the problem of constructing a (1 − α)-level confidence interval for the population median. Again we rely on the sign test, determining for which θ0 the level-α sign test of H0 : θ = θ0 versus H1 : θ ≠ θ0 will accept H0.
The sign test will reject H0 : θ = θ0 if and only if y(θ0) = #{xi > θ0} is either too large or too small. Equivalently, H0 will be accepted if θ0 is such that the numbers of observations above and below θ0 are roughly equal.
To determine the critical value for the desired sign test, we suppose that Y ∼ Binomial(n; 0.5). We would like to find k such that α = 2P(Y ≤ k), or α/2 = pbinom(k, n, 0.5). In practice, we won't be able to solve this equation exactly. We will use the qbinom function plus trial-and-error to solve it approximately, then modify our choice of α accordingly. Having determined an acceptable (α, k), the sign test rejects H0 : θ = θ0 at level α if and only if either y(θ0) ≤ k or y(θ0) ≥ n − k.
We need to translate these inequalities into an interval of plausible values of θ0. To do so, it is helpful to sort the values observed in the sample.

Definition 10.1 The order statistics of ~x = {x1, . . . , xn} are any permutation of the xi such that x(1) ≤ x(2) ≤ · · · ≤ x(n−1) ≤ x(n). If ~x contains n distinct values, then there is a unique set of order statistics and the above inequalities are strict; otherwise, we say that ~x contains ties.

Thus, x(1) is the smallest value in ~x and x(n) is the largest. If n = 2m+1 (n is odd), then the sample median is x(m+1); if n = 2m (n is even), then the sample median is [x(m) + x(m+1)]/2.
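In R, the order statistics and the sample median of Definition 10.1 are obtained directly. The lines below are a small illustration using the data of Example 10.8(a); here n = 2m = 10 is even.

x <- c(98.73, 97.17, 100.17, 101.26, 94.47, 96.39, 99.67, 97.77, 97.46, 97.41)
x.sorted <- sort(x)                       # x(1) <= x(2) <= ... <= x(10)
m <- length(x) / 2
(x.sorted[m] + x.sorted[m + 1]) / 2       # [x(5) + x(6)]/2 = 97.615
median(x)                                 # agrees with the formula above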
  • 252. 250 CHAPTER 10. 1-SAMPLE LOCATION PROBLEMS For simplicity we assume that ~ x contains no ties. If θ0 < x(k+1), then at least n − k observations exceed θ0 and the sign test rejects H0 : θ = θ0. Similarly, if θ0 > x(n−k), then no more than k observations exceed θ0 and the sign test rejects H0 : θ = θ0. We conclude that the sign test accepts H0 : θ = θ0 if and only if θ0 lies in the (1 − α)-level confidence interval ³ x(k+1), x(n−k) ´ . Example 10.8(d) Using the n = 10 observations from Example 10.8(a), we endeavor to construct a 0.90-level confidence interval for the population median. We begin by determining a suitable choice of (α, k). If 1−α = 0.90, then α/2 = 0.05. R returns qbinom(.05,10,.5) = 2. Next we experiment: k pbinom(k, 10, 0.5) 2 0.0546875 1 0.01074219 We choose k = 2, resulting in a confidence level of 1 − α = 1 − 2 · 0.0546875 = 0.890625 . = 0.89, nearly equal to the requested level of 0.90. Now, upon sorting the data (the sort function in R may be useful), we quickly discern that the desired confidence interval is ³ x(3), x(8) ´ = (97.17, 99.67). 10.3 The Symmetric 1-Sample Location Problem 10.4 A Case Study from Neuropsychology
10.5 Exercises

Problem Set A

1. Assume that a large number, n = 400, of observations are independently drawn from a normal distribution with unknown population mean µ and unknown population variance σ². The resulting sample, ~x, is used to test H0 : µ ≤ 0 versus H1 : µ > 0 at significance level α = 0.05.
(a) What test should be used in this situation? If we observe ~x that results in x̄ = 3.194887 and s² = 104.0118, then what is the value of the test statistic?
(b) If we observe ~x that results in a test statistic value of 1.253067, then which of the following R expressions best approximates the significance probability?
i. 2*pnorm(1.253067)
ii. 2*pnorm(-1.253067)
iii. 1-pnorm(1.253067)
iv. 1-pt(1.253067,df=399)
v. pt(1.253067,df=399)
(c) True or False: if we observe ~x that results in a significance probability of p = 0.03044555, then we should reject the null hypothesis.

2. A device counts the number of ions that arrive in a given time interval, unless too many arrive. An experiment that relies on this device produces the following counts, where Big means that the count exceeded 255.
251 238 249 Big 243 248 229 Big 235 244
254 251 252 244 230 222 224 246 Big 239
Use these data to construct a confidence interval for the population median number of ions with a confidence coefficient of approximately 0.95.

Problem Set B

The following data are from Darwin (1876), The Effect of Cross- and Self-Fertilization in the Vegetable Kingdom, Second Edition, London: John Murray. They appear as Data Set 3 in A Handbook of Small Data Sets, accompanied by the following description:
  • 254. 252 CHAPTER 10. 1-SAMPLE LOCATION PROBLEMS “Pairs of seedlings of the same age, one produced by cross-fertilization and the other by self-fertilization, were grown together so that the members of each pair were reared under nearly identical con- ditions. The aim was to demonstrate the greater vigour of the cross-fertilized plants. The data are the final heights [in inches] of each plant after a fixed period of time. Darwin consulted [Francis] Galton about the analysis of these data, and they were discussed further in [Ronald] Fisher’s Design of Experiments.” Pair Fertilized Cross Self 1 23.5 17.4 2 12.0 20.4 3 21.0 20.0 4 22.0 20.0 5 19.1 18.4 6 21.5 18.6 7 22.1 18.6 8 20.4 15.3 9 18.3 16.5 10 21.6 18.0 11 23.3 16.3 12 21.0 18.0 13 22.1 12.8 14 23.0 15.5 15 12.0 18.0 1. Show that this problem can be formulated as a 1-sample location prob- lem. To do so, you should: (a) Identify the experimental units and the measurement(s) taken on each unit. (b) Define appropriate random variables X1, . . . , Xn ∼ P. Remem- ber that the statistical procedures that we will employ assume that these random variables are independent and identically dis- tributed. (c) Let θ denote the location parameter (measure of centrality) of interest. Depending on which statistical procedure we decide to use, either θ = EXi = µ or θ = q2(Xi). State appropriate null and alternative hypotheses about θ.
  • 255. 10.5. EXERCISES 253 2. Does it seem reasonable to assume that the sample ~ x = (x1, . . . , xn), the observed values of X1, . . . , Xn, were drawn from: (a) a normal distribution? Why or why not? (b) a symmetric distribution? Why or why not? 3. Assume that X1, . . . , Xn are normally distributed and let θ = EXi = µ. (a) Test the null hypothesis derived above using Student’s 1-sample t-test. What is the significance probability? If we adopt a signif- icance level of α = 0.05, should we reject the null hypothesis? (b) Construct a (2-sided) confidence interval for θ with a confidence coefficient of approximately 0.90. 4. Now we drop the assumption of normality. Assume that X1, . . . , Xn are symmetric (but not necessarily normal), continuous random variables and let θ = q2(Xi). (a) Test the null hypothesis derived above using the Wilcoxon signed rank test. What is the significance probability? If we adopt a significance level of α = 0.05, should we reject the null hypothesis? (b) Estimate θ by computing the median of the Walsh averages. (c) Construct a (2-sided) confidence interval for θ with a confidence coefficient of approximately 0.90. 5. Finally we drop the assumption of symmetry, assuming only that X1, . . . , Xn are continuous random variables, and let θ = q2(Xi). (a) Test the null hypothesis derived above using the sign test. What is the significance probability? If we adopt a significance level of α = 0.05, should we reject the null hypothesis? (b) Estimate θ by computing the sample median. (c) Construct a (2-sided) confidence interval for θ with a confidence coefficient of approximately 0.90. Problem Set C The ancient Greeks greatly admired rectangles with a height-to-width ratio of 1 : 1 + √ 5 2 = 0.618034.
They called this number the "golden ratio" and used it repeatedly in their art and architecture, e.g., in building the Parthenon. Furthermore, golden rectangles are often found in the art of later western cultures. A cultural anthropologist wondered if the Shoshoni, a native American civilization, also used golden rectangles. The following measurements, which appear as Data Set 150 in A Handbook of Small Data Sets, are height-to-width ratios of beaded rectangles used by the Shoshoni in decorating various leather goods:
0.693 0.662 0.690 0.606 0.570 0.749 0.672 0.628 0.609 0.844
0.654 0.615 0.668 0.601 0.576 0.670 0.606 0.611 0.553 0.933
We will analyze the Shoshoni rectangles as a 1-sample location problem.
1. There are two natural scales that we might use in analyzing these data. One possibility is to analyze the ratios themselves; the other is to analyze the (natural) logarithms of the ratios. For which of these possibilities would an assumption of normality seem more plausible? Please justify your answer.
2. Choose the possibility (ratios or logarithms of ratios) for which an assumption of normality seems more plausible. Formulate suitable null and alternative hypotheses for testing the possibility that the Shoshoni were using golden rectangles. Using Student's 1-sample t-test, compute a significance probability for testing these hypotheses. Would you reject or accept the null hypothesis using a significance level of 0.05?
3. Suppose that we are unwilling to assume that either the ratios or the log-ratios were drawn from a normal distribution. Use the sign test to construct a 0.90-level confidence interval for the population median of the ratios.

Problem Set D

Researchers studied the effect of the drug captopril on essential hypertension, reporting their findings in the British Medical Journal. They measured the supine systolic and diastolic blood pressures of 15 patients with moderate essential hypertension, immediately before and two hours after administering captopril. The following measurements are Data
Set 72 in A Handbook of Small Data Sets:

Patient   Systolic before   Systolic after   Diastolic before   Diastolic after
1         210               201              130                125
2         169               165              122                121
3         187               166              124                121
4         160               157              104                106
5         167               147              112                101
6         176               145              101                85
7         185               168              121                98
8         206               180              124                105
9         173               147              115                103
10        146               136              102                98
11        174               151              98                 90
12        201               168              119                98
13        198               179              106                110
14        148               129              107                103
15        154               131              100                82

We will consider the question of whether or not captopril affects systolic and diastolic blood pressure differently.
1. Let SB and SA denote before and after systolic blood pressure; let DB and DA denote before and after diastolic blood pressure. There are several random variables that might be of interest:
Xi = (SBi − SAi) − (DBi − DAi)   (10.4)
Xi = (SBi − SAi)/SBi − (DBi − DAi)/DBi   (10.5)
Xi = [(SBi − SAi)/SBi] ÷ [(DBi − DAi)/DBi]   (10.6)
Xi = log{[(SBi − SAi)/SBi] ÷ [(DBi − DAi)/DBi]}   (10.7)
Suggest rationales for considering each of these possibilities.
2. Which (if any) of the above random variables appear to be normally distributed? Which appear to be symmetrically distributed?
3. Does captopril affect systolic and diastolic blood pressure differently? Write a brief report that summarizes your investigation and presents your conclusion(s).
  • 259. Chapter 11 2-Sample Location Problems Thus far, in Chapters 9 and 10, we have studied inferences about a single population. In contrast, the present chapter is concerned with comparing two populations with respect to some measure of centrality, typically the population mean or the population median. Specifically, we assume the following: 1. X1, . . . , Xn1 ∼ P1 and Y1, . . . , Yn2 ∼ P2 are continuous random vari- ables. The Xi and the Yj are mutually independent. In particular, there is no natural pairing of X1 with Y1, X2 with Y2, etc. 2. P1 has location parameter θ1 and P2 has location parameter θ2. We assume that comparisons of θ1 and θ2 are meaningful. For example, we might compare population means, θ1 = µ1 = EXi and θ2 = µ2 = EYj, or population medians, θ1 = q2(Xi) and θ2 = q2(Yj), but we would not compare the mean of one population and the median of another population. The shift parameter, ∆ = θ1 − θ2, measures the difference in population location. 3. We observe random samples ~ x = {x1, . . . , xn1 } and ~ y = {y1, . . . , yn2 }, from which we attempt to draw inferences about ∆. Notice that we do not assume that n1 = n2. The same four questions that we posed at the beginning of Chapter 10 can be asked here. What distinguishes 2-sample problems from 1-sample problems is the number of populations from which the experimental units were drawn. The prototypical case of a 2-sample problem is the case of a treatment population and a control population. We begin by considering some examples. 257
  • 260. 258 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS Example 11.1 A researcher investigated the effect of Alzheimer’s dis- ease (AD) on ability to perform a confrontation naming task. She recruited 60 mildly demented AD patients and 60 normal elderly control subjects. The control subjects resembled the AD patients in that the two groups had comparable mean ages, years of education, and (estimated) IQ scores; how- ever, the control subjects were not individually matched to the AD patients. Each person was administered the Boston Naming Test (BNT), on which higher scores represent better performance. For this experiment: 1. An experimental unit is a person. 2. The experimental units belong to one of two populations: AD patients or normal elderly persons. 3. One measurement (score on BNT) is taken on each experimental unit. 4. Let Xi denote the BNT score for AD patient i. Let Yj denote the BNT score for control subject j. Then X1, . . . , Xn1 ∼ P1, Y1, . . . , Yn2 ∼ P2, and we are interested in drawing inferences about ∆ = θ1 − θ2. Notice that ∆ < 0 if and only if θ1 < θ2. Thus, to document that AD compromises confrontation naming ability, we might test H0 : ∆ ≥ 0 against H1 : ∆ < 0. Example 11.2 A drug is supposed to lower blood pressure. To deter- mine if it does, n1 + n2 hypertensive patients are recruited to participate in a double-blind study. The patients are randomly assigned to a treatment group of n1 patients and a control group of n2 patients. Each patient in the treatment group receives the drug for two months; each patient in the control group receives a placebo for the same period. Each patient’s blood pressure is measured before and after the two month period, and neither the patient nor the technician know to which group the patient was assigned. For this experiment: 1. An experimental unit is a patient. 2. The experimental units belong to one of two populations: hypertensive patients who receive the drug and hypertensive patients who receive the placebo. Notice that there are two populations despite the fact that all n1 + n2 patients were initially recruited from a single population. Different treatment protocols create different populations.
  • 261. 11.1. THE NORMAL 2-SAMPLE LOCATION PROBLEM 259 3. Two measurements (blood pressure before and after treatment) are taken on each experimental unit. 4. Let B1i and A1i denote the before and after blood pressures of patient i in the treatment group. Similarly, let B2j and A2j denote the before and after blood pressures of patient j in the control group. Let Xi = B1i −A1i, the decrease in blood pressure for patient i in the treatment group, and let Yj = B2j−A2j, the decrease in blood pressure for patient j in the control group. Then X1, . . . , Xn1 ∼ P1, Y1, . . . , Yn2 ∼ P2, and we are interested in drawing inferences about ∆ = θ1 −θ2. Notice that ∆ > 0 if and only if θ1 > θ2, i.e., if the decrease in blood pressure is greater for the treatment group than for the control group. Thus, a drug company required to produce compelling evidence of the drug’s efficacy might test H0 : ∆ ≤ 0 against H1 : ∆ > 0. This chapter is divided into three sections: 11.1 If the data are assumed to be normally distributed, then we will be interested in inferences about the difference in population means. We will distinguish three cases, corresponding to what is known about the population variances. 11.2 If the data are only assumed to be continuously distributed, then we will be interested in inferences about the difference in population me- dians. We will assume a shift model, i.e., we will assume that P1 and P2 only differ with respect to location. 11.3 If the data are also assumed to be symmetrically distributed, then we will be interested in inferences about the difference in population centers of symmetry. If we assume symmetry, then we need not assume a shift model. 11.1 The Normal 2-Sample Location Problem In this section we assume that P1 = Normal ³ µ1, σ2 1 ´ and P2 = Normal ³ µ2, σ2 2 ´ . In describing inferential methods for ∆ = µ1 −µ2, we emphasize connections with material in Chapter 9 and Section 10.1. For example, the natural
  • 262. 260 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS estimator of a single normal population mean µ is the plug-in estimator µ̂, the sample mean, an unbiased, consistent, asymptotically efficient estimator of µ. In precise analogy, the natural estimator of ∆ = µ1 −µ2, the difference in populations means, is ˆ ∆ = µ̂1 − µ̂2 = X̄ − Ȳ , the difference in sample means. Because E ˆ ∆ = EX̄ − EȲ = µ1 − µ2 = ∆, ˆ ∆ is an unbiased estimator of ∆. It is also consistent and asymptotically efficient. In Chapter 9 and Section 10.1, hypothesis testing and set estimation for a single population mean were based on knowing the distribution of the standardized natural estimator, a random variable of the form sample mean − hypothesized mean standard deviation of sample mean . The denominator of this random variable, often called the standard error, was either known or estimated, depending on our knowledge of the popula- tion variance σ2. For σ2 known, we learned that Z = X̄ − µ0 p σ2/n ( ∼ Normal(0, 1) if X1, . . . , Xn ∼ Normal ¡ µ0, σ2 ¢ ˙ ∼ Normal(0, 1) if n large ) . For σ2 unknown and estimated by S2, we learned that T = X̄ − µ0 p S2/n ( ∼ t(n − 1) if X1, . . . , Xn ∼ Normal ¡ µ0, σ2 ¢ ˙ ∼ Normal(0, 1) if n large ) . These facts allowed us to construct confidence intervals for and test hy- potheses about the population mean. The confidence intervals were of the form µ sample mean ¶ ± q · µ standard error ¶ , where the critical value q is the appropriate quantile of the distribution of Z or T. The tests also were based on Z or T, and the significance probabilities were computed using the corresponding distribution. The logic for drawing inferences about two populations means is identical to the logic for drawing inferences about one population mean—we simply
  • 263. 11.1. THE NORMAL 2-SAMPLE LOCATION PROBLEM 261 replace “mean” with “difference in means” and base inferences about ∆ on the distribution of sample difference − hypothesized difference standard deviation of sample difference = ˆ ∆ − ∆0 standard error . Because Xi ∼ Normal(µ1, σ2 1) and Yj ∼ Normal(µ2, σ2 2), X̄ ∼ Normal à µ1, σ2 1 n1 ! and Ȳ ∼ Normal à µ2, σ2 2 n2 ! . Because X̄ and Ȳ are independent, it follows from Theorem 5.2 that ˆ ∆ = X̄ − Ȳ ∼ Normal à ∆ = µ1 − µ2, σ2 1 n1 + σ2 2 n2 ! . We now distinguish three cases: 1. Both σi are known (and possibly unequal). The inferential theory for this case is easy; unfortunately, population variances are rarely known. 2. The σi are unknown, but necessarily equal (σ1 = σ2 = σ). This case should strike the student as somewhat implausible. If the population variances are not known, then under what circumstances might we reasonably assume that they are equal? Although such circumstances do exist, the primary importance of this case is that the correspond- ing theory is elementary. Nevertheless, it is important to study this case because the methods derived from the assumption of an unknown common variance are widely used—and abused. 3. The σi are unknown and possibly unequal. This is clearly the case of greatest practical importance, but the corresponding theory is some- what unsatisfying. The problem of drawing inferences when the pop- ulation variances are unknown and possibly unequal is sufficiently no- torious that it has a name: the Behrens-Fisher problem. 11.1.1 Known Variances If ∆ = ∆0, then Z = ˆ ∆ − ∆0 r σ2 1 n1 + σ2 2 n2 ∼ Normal(0, 1).
Given α ∈ (0, 1), let qz denote the 1 − α/2 quantile of Normal(0, 1). We construct a (1 − α)-level confidence interval for ∆ by writing
1 − α = P(|Z| < qz)
      = P(|∆̂ − ∆| < qz √(σ1²/n1 + σ2²/n2))
      = P(∆̂ − qz √(σ1²/n1 + σ2²/n2) < ∆ < ∆̂ + qz √(σ1²/n1 + σ2²/n2)).
The desired confidence interval is
∆̂ ± qz √(σ1²/n1 + σ2²/n2).

Example 11.3 For the first population, suppose that we know that the population standard deviation is σ1 = 5 and that we observe a sample of size n1 = 60 with sample mean x̄ = 7.6. For the second population, suppose that we know that the population standard deviation is σ2 = 2.5 and that we observe a sample of size n2 = 15 with sample mean ȳ = 5.2. To construct a 0.95-level confidence interval for ∆, we first compute
qz = qnorm(.975) = 1.959964 ≈ 1.96,
then
(7.6 − 5.2) ± 1.96 √(5²/60 + 2.5²/15) ≈ 2.4 ± 1.79 = (0.61, 4.19).

Example 11.4 For the first population, suppose that we know that the population variance is σ1² = 8 and that we observe a sample of size n1 = 10 with sample mean x̄ = 9.7. For the second population, suppose that we know that the population variance is σ2² = 96 and that we observe a sample of size n2 = 5 with sample mean ȳ = 2.6. To construct a 0.95-level confidence interval for ∆, we first compute
qz = qnorm(.975) = 1.959964 ≈ 1.96,
then
(9.7 − 2.6) ± 1.96 √(8/10 + 96/5) ≈ 7.1 ± 8.765 = (−1.665, 15.865).
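These calculations are easy to script. The fragment below is an illustration (the variable names are ours); it reproduces Example 11.3, and changing the inputs to those of Example 11.4 gives the second interval.

xbar <- 7.6; ybar <- 5.2            # sample means
sigma1 <- 5; sigma2 <- 2.5          # known population standard deviations
n1 <- 60; n2 <- 15
qz <- qnorm(0.975)                  # critical value for a 0.95-level interval
se <- sqrt(sigma1^2/n1 + sigma2^2/n2)
(xbar - ybar) + c(-1, 1) * qz * se  # approximately (0.61, 4.19)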
  • 265. 11.1. THE NORMAL 2-SAMPLE LOCATION PROBLEM 263 To test H0 : ∆ = ∆0 versus H1 : ∆ 6= ∆0, we exploit the fact that Z ∼ Normal(0, 1) under H0. Let z denote the observed value of Z. Then a natural level-α test is the test that rejects H0 if and only if p = P∆0 (|Z| ≥ |z|) ≤ α, which is equivalent to rejecting H0 if and only if |z| ≥ qz. This test is sometimes called the 2-sample z-test. Example 11.3 (continued) To test H0 : ∆ = 0 versus H1 : ∆ 6= 0, we compute z = (7.6 − 5.2) − 0 p 52/60 + 2.52/15 . = 2.629. Because |2.629| > 1.96, we reject H0 at significance level α = 0.05. The significance probability is p = P∆0 (|Z| ≥ |2.629|) = 2 ∗ pnorm(−2.629) . = 0.008562. Example 11.4 (continued) To test H0 : ∆ = 0 versus H1 : ∆ 6= 0, we compute z = (9.7 − 2.6) − 0 p 8/10 + 96/5 . = 1.5876. Because |1.5876| < 1.96, we decline to reject H0 at significance level α = 0.05. The significance probability is p = P∆0 (|Z| ≥ |1.5876|) = 2 ∗ pnorm(−1.5876) . = 0.1124. 11.1.2 Unknown Common Variance Now we assume that σ1 = σ2 = σ, but that the common variance σ2 is unknown. Because σ2 is unknown, we must estimate it. Let S2 1 = 1 n1 − 1 n1 X i=1 (Xi − X̄)2 denote the sample variance for the Xi and let S2 2 = 1 n2 − 1 n2 X j=1 (Yj − Ȳ )2
  • 266. 264 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS denote the sample variance for the Yj. If we only sampled the first popu- lation, then we would use S2 1 to estimate the first population variance, σ2 1. Likewise, if we only sampled the second population, then we would use S2 2 to estimate the second population variance, σ2 2. Neither is appropriate in the present situation, as S2 1 does not use the second sample and S2 2 does not use the first sample. Therefore, we create a weighted average of the separate sample variances, S2 P = (n1 − 1)S2 1 + (n2 − 1)S2 2 (n1 − 1) + (n2 − 1) = 1 n1 + n2 − 2   n1 X i=1 (Xi − X̄)2 + n2 X j=1 (Yj − Ȳ )2   , the pooled sample variance. Then ES2 P = (n1 − 1)ES2 1 + (n2 − 1)ES2 2 (n1 − 1) + (n2 − 1) = (n1 − 1)σ2 + (n2 − 1)σ2 (n1 − 1) + (n2 − 1) = σ2 , so the pooled sample variance is an unbiased estimator of a common popula- tion variance. It is also consistent and asymptotically efficient for estimating a common normal variance. Instead of Z = ˆ ∆ − ∆0 r σ2 1 n1 + σ2 2 n2 = ˆ ∆ − ∆0 r³ 1 n1 + 1 n2 ´ σ2 , we now rely on T = ˆ ∆ − ∆0 r³ 1 n1 + 1 n2 ´ S2 P . The following result allows us to construct confidence intervals and test hypotheses about the shift parameter ∆ = µ1 − µ2. Theorem 11.1 If ∆ = ∆0, then T ∼ t(n1 + n2 − 2). Given α ∈ (0, 1), let qt denote the 1 − α/2 quantile of t(n1 + n2 − 2). Exploiting Theorem 11.1, a (1 − α)-level confidence interval for ∆ is ˆ ∆ ± qt sµ 1 n1 + 1 n2 ¶ S2 P .
Example 11.3 (continued) Now suppose that, instead of knowing population standard deviations σ1 = 5 and σ2 = 2.5, we observe sample standard deviations s1 = 5 and s2 = 2.5. The ratio of sample variances, s1²/s2² = 4 ≠ 1, strongly suggests that the population variances are unequal. We proceed under the assumption that σ1 = σ2 for the purpose of illustration. The pooled sample variance is
s²P = (59 · 5² + 14 · 2.5²)/(59 + 14) = 21.40411.
To construct a 0.95-level confidence interval for ∆, we first compute
qt = qt(.975, 73) = 1.992997 ≈ 1.993,
then
(7.6 − 5.2) ± 1.993 √((1/60 + 1/15) · 21.40411) ≈ 2.4 ± 2.66 = (−0.26, 5.06).

Example 11.4 (continued) Now suppose that, instead of knowing population variances σ1² = 8 and σ2² = 96, we observe sample variances s1² = 8 and s2² = 96. Again, the ratio of sample variances, s2²/s1² = 12 ≠ 1, strongly suggests that the population variances are unequal. We proceed under the assumption that σ1 = σ2 for the purpose of illustration. The pooled sample variance is
s²P = (9 · 8 + 4 · 96)/(9 + 4) = 35.07692.
To construct a 0.95-level confidence interval for ∆, we first compute
qt = qt(.975, 13) = 2.160369 ≈ 2.16,
then
(9.7 − 2.6) ± 2.16 √((1/10 + 1/5) · 35.07692) ≈ 7.1 ± 7.01 = (0.09, 14.11).

To test H0 : ∆ = ∆0 versus H1 : ∆ ≠ ∆0, we exploit the fact that T ∼ t(n1 + n2 − 2) under H0. Let t denote the observed value of T. Then a natural level-α test is the test that rejects H0 if and only if
p = P∆0(|T| ≥ |t|) ≤ α,
which is equivalent to rejecting H0 if and only if |t| ≥ qt. This test is called Student's 2-sample t-test.
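For raw data stored in two vectors, R's t.test function with the argument var.equal=TRUE performs Student's 2-sample t-test and reports the corresponding confidence interval. Working from summary statistics, as in the preceding examples, the quantities can be computed directly; the sketch below is an illustration using the numbers of Example 11.3.

n1 <- 60; n2 <- 15
xbar <- 7.6; ybar <- 5.2
s1 <- 5; s2 <- 2.5
sp2 <- ((n1 - 1)*s1^2 + (n2 - 1)*s2^2) / (n1 + n2 - 2)  # pooled sample variance, 21.40411
se <- sqrt((1/n1 + 1/n2) * sp2)
tstat <- (xbar - ybar) / se                             # observed test statistic, about 1.797
p <- 2 * pt(-abs(tstat), df = n1 + n2 - 2)              # significance probability, about 0.076
qt(0.975, df = n1 + n2 - 2)                             # critical value, about 1.993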
  • 268. 266 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS Example 11.3 (continued) To test H0 : ∆ = 0 versus H1 : ∆ 6= 0, we compute t = (7.6 − 5.2) − 0 p (1/60 + 1/15) · 21.40411 . = 1.797. Because |1.797| < 1.993, we decline to reject H0 at significance level α = .05. The significance probability is p = P∆0 (|T| ≥ |1.797|) = 2 ∗ pt(−1.797, 73) . = 0.0764684. Example 11.4 (continued) To test H0 : ∆ = 0 versus H1 : ∆ 6= 0, we compute t = (9.7 − 2.6) − 0 p (1/10 + 1/5) · 35.07692 . = 2.19. Because |2.19| > 2.16, we reject H0 at significance level α = .05. The significance probability is p = P∆0 (|T| ≥ |2.19|) = 2 ∗ pt(−2.19, 13) . = 0.04747. 11.1.3 Unknown Variances Now we drop the assumption that σ1 = σ2. We must then estimate each population variance separately, σ2 1 with S2 1 and σ2 2 with S2 2. Instead of Z = ˆ ∆ − ∆0 r σ2 1 n1 + σ2 2 n2 we now rely on TW = ˆ ∆ − ∆0 r S2 1 n1 + S2 2 n2 . Unfortunately, there is no analogue of Theorem 11.1—the exact distribution of TW is not known. The exact distribution of TW appears to be intractable, but Welch (1937, 1947) argued that TW ˙ ∼ t(ν), with ν = ³ σ2 1 n1 + σ2 2 n2 ´2 (σ2 1/n1)2 n1−1 + (σ2 2/n2)2 n2−1 .
  • 269. 11.1. THE NORMAL 2-SAMPLE LOCATION PROBLEM 267 Because σ2 1 and σ2 2 are unknown, we estimate ν by ν̂ = ³ S2 1 n1 + S2 2 n2 ´2 (S2 1 /n1)2 n1−1 + (S2 2 /n2)2 n2−1 . Simulation studies have revealed that the approximation TW ˙ ∼ t(ν̂) works well in practice. Given α ∈ (0, 1), let qt denote the 1−α/2 quantile of t(ν̂). Using Welch’s approximation, an approximate (1 − α)-level confidence interval for ∆ is ˆ ∆ ± qt s S2 1 n1 + S2 2 n2 . Example 11.3 (continued) Now we estimate the unknown popula- tion variances separately, σ2 1 by s2 1 = 52 and σ2 2 by s2 2 = 2.52. Welch’s approximation involves ν̂ = ³ 52 60 + 2.52 15 ´2 (52/60)2 60−1 + (2.52/15)2 15−1 = 45.26027 . = 45.26 degrees of freedom. To construct a 0.95-level confidence interval for ∆, we first compute qt = qt(.975, 45.26) . = 2.014, then (7.6 − 5.2) ± 2.014 q 52/60 + 2.52/15 . = 2.4 ± 1.84 = (0.56, 4.24). Example 11.4 (continued) Now we estimate the unknown popula- tion variances separately, σ2 1 by s2 1 = 8 and σ2 2 by s2 2 = 96. Welch’s approxi- mation involves ν̂ = ³ 8 10 + 96 5 ´2 (8/10)2 10−1 + (96/5)2 5−1 = 4.336931 . = 4.337 degrees of freedom. To construct a 0.95-level confidence interval for ∆, we first compute qt = qt(.975, 4.337) . = 2.6934,
  • 270. 268 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS then (9.7 − 2.6) ± 2.6934 q 8/10 + 96/5 . = 7.1 ± 13.413 = (−6.313, 20.513). To test H0 : ∆ = ∆0 versus H1 : ∆ 6= ∆0, we exploit the approximation TW ˙ ∼ t(ν̂) under H0. Let tW denote the observed value of TW . Then a natural approximate level-α test is the test that rejects H0 if and only if p = P∆0 (|TW | ≥ |tW |) ≤ α, which is equivalent to rejecting H0 if and only if |tW | ≥ qt. This test is sometimes called Welch’s approximate t-test. Example 11.3 (continued) To test H0 : ∆ = 0 versus H1 : ∆ 6= 0, we compute tW = (7.6 − 5.2) − 0 p 52/60 + 2.52/15 . = 2.629. Because |2.629| > 2.014, we reject H0 at significance level α = 0.05. The significance probability is p = P∆0 (|TW | ≥ |2.629|) = 2 ∗ pt(−2.629, 45.26) . = 0.011655. Example 11.4 (continued) To test H0 : ∆ = 0 versus H1 : ∆ 6= 0, we compute tW = (9.7 − 2.6) − 0 p 8/10 + 96/5 . = 1.4257. Because |1.4257| < 2.6934, we decline to reject H0 at significance level α = 0.05. The significance probability is p = P∆0 (|TW | ≥ |1.4257|) = 2 ∗ pt(−1.4257, 4.337) . = 0.2218. Examples 11.3 and 11.4 were carefully constructed to reveal the sensi- tivity of Student’s 2-sample t-test to the assumption of equal population variances. Welch’s approximation is good enough that we can use it to benchmark Student’s test when variances are unequal. In Example 11.3, Welch’s approximate t-test produced a significance probability of p . = 0.012, leading us to reject the null hypothesis at α = 0.05. Student’s 2-sample t-test produced a misleading significance probability of p . = 0.076, leading
  • 271. 11.1. THE NORMAL 2-SAMPLE LOCATION PROBLEM 269 us to commit a Type II error. In Example 11.4, Welch’s approximate t-test produced a significance probability of p . = 0.222, leading us to accept the null hypothesis at α = 0.05. Student’s 2-sample t-test produced a misleading significance probability of p . = 0.047, leading us to commit a Type I error. Evidently, Student’s 2-sample t-test (and the corresponding procedure for constructing confidence intervals) should not be used unless one is convinced that the population variances are identical. The consequences of using Stu- dent’s test when the population variances are unequal may be exacerbated when the sample sizes are unequal. In general: • If n1 = n2, then t = tW . • If the population variances are (approximately) equal, then t and tW tend to be (approximately) equal. • If the larger sample is drawn from the population with the larger vari- ance, then t will tend to be less than tW . All else equal, this means that Student’s test will tend to produce significance probabilities that are too large. • If the larger sample is drawn from the population with the smaller variance, then t will tend to be greater than tW . All else equal, this means that Student’s test will tend to produce significance probabilities that are too small. • If the population variances are (approximately) equal, then ν̂ will be (approximately) n1 + n2 − 2. • It will always be the case that ν̂ ≤ n1+n2−2. All else equal, this means that Student’s test will tend to produce significance probabilities that are too large. From these observations we draw the following conclusions: 1. If the population variances are unequal, then Student’s 2-sample t-test may produce misleading significance probabilities. 2. If the population variances are equal, then Welch’s approximate t- test is approximately equivalent to Student’s 2-sample t-test. Thus, if one uses Welch’s test in the situation for which Student’s test is appropriate, one is not likely to be led astray.
  • 272. 270 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS 141 148 132 138 154 142 150 146 155 158 150 140 147 148 144 150 149 145 149 158 143 141 144 144 126 140 144 142 141 140 145 135 147 146 141 136 140 146 142 137 148 154 137 139 143 140 131 143 141 149 148 135 148 152 143 144 141 143 147 146 150 132 142 142 143 153 149 146 149 138 142 149 142 137 134 144 146 147 140 142 140 137 152 145 133 138 130 138 134 127 128 138 136 131 126 120 124 132 132 125 139 127 133 136 121 131 125 130 129 125 136 131 132 127 129 132 116 134 125 128 139 132 130 132 128 139 135 133 128 130 130 143 144 137 140 136 135 126 139 131 133 138 133 137 140 130 137 134 130 148 135 138 135 138 Table 11.1: Maximum breadth (in millimeters) of 84 skulls of Etruscan males (top) and 70 skulls of modern Italian males. 3. Don’t use Student’s 2-sample t-test! I remember how shocked I was when I first heard this advice as a first-year graduate student in a course devoted to the theory of hypothesis testing. The instructor, Erich Lehmann, one of the great statisticians of the 20th century and the author of a famous book on hypothesis testing, told us: “If you get just one thing out of this course, I’d like it to be that you should never use Student’s 2-sample t-test.” 11.2 The Case of a General Shift Family 11.3 The Symmetric Behrens-Fisher Problem 11.4 Case Study: Etruscan versus Italian Head Breadth In a collection of essays on the origin of the Etruscan empire, N.A. Barnicott and D.R. Brothwell compared measurements on ancient and modern bones.1 1 N.A. Barnicott and D.R. Brothwell (1959). The evaluation of metrical data in the comparison of ancient and modern bones. In Medical Biology and Etruscan Origins, edited
by G.E.W. Wolstenholme and C.M. O'Connor, Little, Brown & Company, p. 136.

Figure 11.1: Normal probability plots of two samples of maximum skull breadth. (Left panel: Etruscan; right panel: Italian. Axes: Theoretical Quantiles vs. Sample Quantiles.)

Measurements of the maximum breadth of 84 Etruscan skulls and 70 modern Italian skulls were subsequently reproduced as Data Set 155 in A Handbook of Small Data Sets and are displayed in Table 11.1. We use these data to explore the difference (if any) between Etruscan and modern Italian males with respect to head breadth. In the discussion that follows, x will denote Etruscans and y will denote modern Italians.

We begin by asking if it is reasonable to assume that maximum skull breadth is normally distributed. Normal probability plots of our two samples are displayed in Figure 11.1. The linearity of these plots conveys the distinct impression of normality. Kernel density estimates constructed from the two samples are superimposed in Figure 11.2, created by the following R commands:

> plot(density(x),type="l",xlim=c(100,180),
+      xlab="Maximum Skull Breadth",
+      main="Kernel Density Estimates")
> lines(density(y),type="l")

Not only do the kernel density estimates reinforce our impression of
normality, they also suggest that the two populations have comparable variances. (The ratio of sample variances is s1²/s2² = 1.07819.) The difference in maximum breadth between Etruscan and modern Italian skulls is nicely summarized by a shift parameter.

Figure 11.2: Kernel density estimates constructed from two samples of maximum skull breadth. The sample mean for the Etruscan skulls is x̄ ≈ 143.8; the sample mean for the modern Italian skulls is ȳ ≈ 132.4. (Axes: Maximum Skull Breadth vs. Density.)

Now we construct a probability model. This is a 2-sample location problem in which an experimental unit is a skull. The skulls were drawn from two populations, Etruscan males and modern Italian males, and one measurement (maximum breadth) was made on each experimental unit. Let Xi denote the maximum breadth of Etruscan skull i and let Yj denote the maximum breadth of Italian skull j. We assume that the Xi and Yj are independent, with Xi ∼ Normal(µ1, σ1²) and Yj ∼ Normal(µ2, σ2²). Notice that, although the sample variances are nearly equal, we do not assume that the population variances are identical. Instead, we will use Welch's approximation to construct an approximate 0.95-level confidence interval for
  • 275. 11.4. CASE STUDY: ETRUSCAN VERSUS ITALIAN HEAD BREADTH 273 ∆ = µ1 − µ2. Because the confidence coefficient 1 − α = 0.95, α = 0.05. The desired confidence interval is of the form ˆ ∆ ± q s s2 1 n1 + s2 2 n2 , where q is the 1 − α/2 = 0.975 quantile of a t distribution with ν̂ degrees of freedom. We can easily compute these quantities in R. To compute ˆ ∆, the estimated shift parameter: > Delta <- mean(x)-mean(y) To compute the standard error: > n1 <- length(x) > n2 <- length(y) > v1 <- var(x)/n1 > v2 <- var(y)/n2 > se <- sqrt(v1+v2) To compute ν̂, the estimated degrees of freedom: > nu <- (v1+v2)^2/(v1^2/(n1-1)+v2^2/(n2-1)) To compute q, the desired quantile: > q <- qt(.975,df=nu) Finally, to compute the lower and upper endpoints of the desired confidence interval: > lower <- Delta-q*se > upper <- Delta+q*se These calculations result in a 0.95-level confidence interval for ∆ = µ1 − µ2 of (9.459782, 13.20212), so that we can be fairly confident that the maximum breadth of Etruscan male skulls is, on average, roughly a centimeter greater than the maximum breadth of modern Italian male skulls.
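The same interval can be obtained in a single call to the R function t.test, which uses Welch's approximation by default. Assuming that x and y are the vectors of Etruscan and Italian measurements used above, the following sketch should reproduce the interval just computed, up to rounding:

welch <- t.test(x, y, conf.level = 0.95)
welch$conf.int    # approximate 0.95-level confidence interval for Delta
welch$parameter   # Welch's estimated degrees of freedom, nu-hat

For comparison, t.test(x, y, var.equal = TRUE) would perform Student's 2-sample procedure; because the two sample variances are nearly equal here, the two procedures should give similar results.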
  • 276. 274 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS 11.5 Exercises Problem Set A 1. We have been using various mathematical symbols in our study of 1- and 2-sample location problems. Each of the symbols listed below is used to represent a real number. State which of the following state- ments applies to each symbol: i. The real number represented by this symbol is an unknown pop- ulation parameter. ii. The real number represented by this symbol is calculated from the observed data. iii. The real number represented by this symbol is specified by the experimenter. Here are the symbols: µ µ0 x̄ s2 t α ∆ ∆0 p ν̂ 2. Assume that X1, . . . , X10 ∼ Normal(µ1, σ2 1) and that Y1, . . . , Y20 ∼ Normal(µ2, σ2 2). None of the population parameters are known. Let ∆ = µ1 − µ2. To test H0 : ∆ ≥ 0 versus H1 : ∆ < 0 at significance level α = 0.05, we observe samples ~ x and ~ y. (a) What test should be used in this situation? If we observe ~ x and ~ y that result in x̄ = −0.82, s1 = 4.09, ȳ = 1.39, and s2 = 1.22, then what is the value of the test statistic? (b) If we observe ~ x and ~ y that result in s1 = 4.09, s2 = 1.22,, and a test statistic value of 1.76, then which of the following R expres- sions best approximates the significance probability? i. 2*pnorm(-1.76) ii. pt(-1.76,df=28) iii. pt(1.76,df=10) iv. pt(-1.76,df=10) v. 2*pt(1.76,df=28) (c) True of False: if we observe ~ x and ~ y that result in a significance probability of p = 0.96, then we should reject the null hypothesis.
  • 277. 11.5. EXERCISES 275 Problem Set B Each of the following scenarios can be modelled as a 1- or 2-sample location problem. For 1-sample problems, let Xi denote the random variables of interest and let µ = EXi. For 2-sample problems, let Xi and Yj denote the random variables of interest; let µ1 = EXi, µ2 = EYj, and ∆ = µ1 − µ2. For each scenario, you should answer/do the following: (a) What is the experimental unit? (b) From how many populations were the experimental units drawn? Identify the population(s). How many units were drawn from each population? Is this a 1- or a 2-sample problem? (c) How many measurements were taken on each experimental unit? Identify them. (d) Define the parameter(s) of interest for this problem. For 1- sample problems, this should be µ; for 2-sample problems, this should be ∆. (e) State appropriate null and alternative hypotheses. Here are the scenarios: 1. A mathematics/education concentrator theorizes that learning math- ematics and statistics is sometimes impeded by the widespread use of odd symbols like α, χ, and ω. She reasons that, if her theory is cor- rect, then students who belong to sororities and fraternities—who she presumes are more familiar with Greek letters—should have an easier time learning the mathematical subjects that use such symbols. To investigate, she obtains a list of all William & Mary students who are enrolled in Math 111 (calculus) and a list of all William & Mary stu- dents who belong to a sorority or fraternity. She uses this information to choose (at random) 20 calculus students who do belong to a soror- ity or fraternity and 20 calculus students who do not. She persuades each of these students to take a calculus quiz, specially designed to use lots of Greek letters. How might she use the resulting data to test her theory? (Respond to (a)–(e) above.) 2. Umberto theorizes that living with a dog diminishes depression in the elderly, here defined as more than 70 years of age. To investigate his theory, he recruits 15 single elderly men who own dogs and 15 single elderly men who do not own any pets. The Hamilton instrument for
  • 278. 276 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS measuring depressive tendency is administered to each subject. High scores indicate depression. How might Umberto use the resulting data to test his theory? (Respond to (a)–(e) above.) 3. The William & Mary women’s tennis team uses championship balls in their matches and less expensive practice balls in their team practices. The players have formed a strong impression that the practice balls do not wear as well as the championship balls, i.e., that the practice balls lose their bounce more quickly than the championship balls. To investigate this perception, Nina and Delphine conceive the following experiment. Before one practice, the team opens new cans of cham- pionship balls and practice balls, which they then use for that day’s practice. After practice, Nina and Delphine randomly select 10 of the used championship balls and 10 of the used practice balls. They drop each ball from a height of 1 meter and measure the height of its first bounce. How might Nina and Delphine test the team’s impression that practice balls do not wear as well as championship balls? (Respond to (a)–(e) above.) 4. A political scientist theorizes that women tend to be more opposed to military intervention than do men. To investigate this theory, he devises an instrument on which a subject responds to several recent U.S. military interventions on a 5-point Likert scale (1=“strongly sup- port,”. . . ,5=“strongly oppose”). A subject’s score on this instrument is the sum of his/her individual responses. The scientist randomly se- lects 50 married couples in which neither spouse has a registered party affiliation and administers the instrument to each of the 100 individu- als so selected. How might he use his results to determine if his theory is correct? (Respond to (a)–(e) above.) 5. A shoe company claims that wearing its racing flats will typically im- prove one’s time in a 10K road race by more than 30 seconds. A running magazine sponsors an event to test this claim. It arranges for 120 runners to enter two road races, held two weeks apart on the same course. For the second race, each of these runners is supplied with the new racing flat. How might the race results be used to determine the validity of the shoe company’s claim? (Respond to (a)–(e) above.) 6. Susan theorizes that impregnating wood with an IGR (insect growth regulator) will reduce wood consumption by termites. To investigate
  • 279. 11.5. EXERCISES 277 this theory, she impregnates 60 wood blocks with a solvent contain- ing the IGR and 60 wood blocks with just the solvent. Each block is weighed, then placed in a separate container with 100 ravenous ter- mites. After two weeks, she removes the blocks and weighs them again to determine how much wood has been consumed. How might Su- san use her results to determine if her theory is correct? (Respond to (a)–(e) above.) 7. To investigate the effect of swing dancing on cardiovascular fitness, an exercise physiologist recruits 20 couples enrolled in introductory swing dance classes. Each class meets once a week for ten weeks. Participants are encouraged to go out dancing on at least two additional occasions each week. In general, lower resting pulses are associated with greater cardiovascular fitness. Accordingly, each participant’s resting pulse is measured at the beginning and at the end of the ten-week class. How might the resulting data be used to determine if swing dancing improves cardiovascular fitness? (Respond to (a)–(e) above.) 8. It is thought that Alzheimer’s disease (AD) impairs short-term memory more than it impairs long-term memory. To test this theory, a psychol- ogist studied 60 mildly demented AD patients and 60 normal elderly control subjects. Each subject was administered a short-term and a long-term memory task. On each task, high scores are better than low scores. How might the psychologist use the resulting task scores to determine if the theory is correct? (Respond to (a)–(e) above.) 9. According to an article in Newsweek (May 10, 2004, page 89), recent “studies have shown consistently that women are better than men at reading and responding to subtle cues about mood and temperament.” Some psychologists believe that such differences can be explained in part by biological differences between male and female brains. One such psychologist conducts a study in which day-old babies are shown three human faces and three mechanical objects. The time that the baby stares at each face/object is recorded. Of interest is how much time the baby spends staring at faces versus how much time the baby spends staring at objects. The psychologist’s theory predicts that this comparison will differ by sex, with female babies preferring faces to objects to a greater extent than do male babies. How might the psy- chologist use his results to determine if his theory is correct? (Respond to (a)–(e) above.)
  • 280. 278 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS Problem Set C In the early 1960s, the Western Collaborative Group Study investigated the relation between behavior and risk of coronary heart disease in middle-aged men. Type A behavior is characterized by urgency, aggression and ambition; Type B behavior is noncompetitive, more relaxed and less hurried. The following data, which appear in Table 2.1 of Selvin (1991) and Data Set 47 in A Handbook of Small Data Sets, are the cholesterol measurements of 20 heavy men of each behavior type. (In fact, these 40 men were the heaviest in the study. Each weighed at least 225 pounds.) We consider whether or not they provide evidence that heavy Type A men have higher cholesterol levels than heavy Type B men. Cholesterol Levels for Heavy Type A Men 233 291 312 250 246 197 268 224 239 239 254 276 234 181 248 252 202 218 212 325 Cholesterol Levels for Heavy Type B Men 344 185 263 246 224 212 188 250 148 169 226 175 242 252 153 183 137 202 194 213 1. Respond to (a)–(e) in Problem Set B. 2. Does it seem reasonable to assume that the samples ~ x and ~ y, the ob- served values of X1, . . . , Xn1 and Y1, . . . , Yn2 , were drawn from normal distributions? Why or why not? 3. Assume that the Xi and the Yj are normally distributed. (a) Test the null hypothesis derived above using Welch’s approximate t-test. What is the significance probability? If we adopt a signif- icance level of α = 0.05, should we reject the null hypothesis? (b) Construct a (2-sided) confidence interval for ∆ with a confidence coefficient of approximately 0.90. Problem Set D Researchers measured urinary β-thromboglobulin excre- tion in 12 diabetic patients and 12 normal control subjects, reporting their findings in Thrombosis and Haemostasis. The following measurements are Data Set 313 in A Handbook of Small Data Sets: Normal 4.1 6.3 7.8 8.5 8.9 10.4 11.5 12.0 13.8 17.6 24.3 37.2 Diabetic 11.5 12.1 16.1 17.8 24.0 28.8 33.9 40.7 51.3 56.2 61.7 69.2
  • 281. 11.5. EXERCISES 279 1. Do these measurements appear to be samples from symmetric distri- butions? Why or why not? 2. Both samples of positive real numbers appear to be drawn from dis- tributions that are skewed to the right, i.e., the upper tail of the dis- tribution is longer than the lower tail of the distribution. Often, such distributions can be symmetrized by applying a suitable data transfor- mation. Two popular candidates are: (a) The natural logarithm: ui = log(xi) and vj = log(yj). (b) The square root: ui = √ xi and vj = √ yj. Investigate the effect of each of these transformations on the above measurements. Do the transformed measurements appear to be sam- ples from symmetric distributions? Which transformation do you pre- fer? 3. Do the transformed measurements appear to be samples from normal distributions? Why or why not? 4. The researchers claimed that diabetic patients have increased urinary β-thromboglobulin excretion. Assuming that the transformed mea- surements are samples from normal distributions, how convincing do you find the evidence for their claim? Problem Set E 1. Chemistry lab partners Arlen and Stuart collaborated on an experi- ment in which they measured the melting points of 20 specimens of two types of sealing wax. Twelve of the specimens were of one type (A); eight were of the other type (B). Each student then used Welch’s approximate t-test to test the null hypothesis of no difference in mean melting point between the two methods: • Arlen applied Welch’s approximate t-test to the original melting points, which were measured in degrees Fahrenheit. • Stuart first converted each melting point to degrees Celsius (by subtracting 32, then multiplying by 5/9), then applied Welch’s approximate t-test to the converted melting points.
  • 282. 280 CHAPTER 11. 2-SAMPLE LOCATION PROBLEMS Comment on the potential differences between these two analyses. In particular, is it True or False that (ignoring round-off error) Arlen and Stuart will obtain identical significance probabilities? Please justify your comments. 2. A graduate student in ornithology would like to determine if created marshes differ from natural marches in their appeal to avian commu- nities. He plans to observe n1 = 9 natural marshes and n2 = 9 created marshes, counting the number of red-winged blackbirds per acre that inhabit each marsh. His thesis committee wants to know how much he thinks he will be able to learn from this experiment. Let Xi denote the number of blackbirds per acre in natural marsh i and let Yj denote the number of blackbirds per acre in created marsh j. In order to respond to his committee, the student makes the simplifying assumptions that Xi ∼ Normal(µ1, σ2) and Yj ∼ Normal(µ2, σ2). He estimates that iqr(Xi) = iqr(Yj) = 10. Calculate L, the length of the 0.90-level confidence interval for ∆ = µ1 − µ2 that he can expect to construct. 3. A film buff has formed the vague impression that movies tend to be longer than they used to be. Are they really longer? Or do they just seem longer? To investigate, he randomly samples U.S. feature films made in 1956 and U.S. feature films made in 1996, obtaining the following results:
  • 283. 11.5. EXERCISES 281 Year Title Minutes 1956 Accused of Murder 74 Away All Boats 114 Baby Doll 114 The Bold and the Brave 87 Come Next Spring 92 The Flaming Teen-Age 55 Gun Girls 67 Helen of Troy 118 The Houston Story 79 Patterns 83 The Price of Fear 79 The Revolt of Mamie Stover 92 Written on the Wind 99 The Young Guns 87 1996 $40,000 70 Barb Wire 98 Breathing Room 90 Daddy’s Girl 95 Ed’s Next Move 88 From Dusk to Dawn 108 Galgameth 110 The Glass Cage 96 Kissing a Dream 91 Love & Sex etc. 88 Love is All There Is 120 Making the Rules 96 Spirit Lost 90 Work 90 Do these data provide convincing evidence that 1996 movies are longer than 1956 movies? Compute a significance probability that may be used to encourage or discourage the film buff’s impression. Explain how this number should be interpreted. Identify and defend any as- sumptions that you made in your calculations.
  • 285. Chapter 12 k-Sample Location Problems Now we generalize our study of location problems from two to k ≥ 3 popula- tions. Again we are concerned with comparing the populations with respect to some measure of centrality, typically the population mean or the popu- lation median. We designate the populations by P1, . . . , Pk and the corre- sponding sample sizes by n1, . . . , nk. Our bookkeeping will be facilitated by the use of double subscripts, e.g., X11, . . . , X1n1 ∼ P1, X21, . . . , X2n2 ∼ P2, . . . Xk1, . . . , Xknk ∼ Pk. These expressions can be summarized succinctly by writing Xij ∼ Pi. We assume the following: 1. The Xij are mutually independent continuous random variables. 2. Pi has location parameter θi, e.g., θi = µi = EXij or θi = q2(Xij). 3. We observe random samples ~ xi = {xi1, . . . , xini }, from which we at- tempt to draw inferences about (θ1, . . . , θk). In general, we do not assume that n1 = · · · = nk. However, certain procedures do require equal sample sizes. Furthermore, certain procedures that can be used with unequal sample sizes are greatly simplified when the sample sizes are equal. 283
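In R, data of this kind are often stored in a single vector accompanied by a factor that records the population from which each observation X_ij was drawn. The numbers in the sketch below are made up and serve only to illustrate the bookkeeping:

x <- c(4.1, 5.0, 3.8,          # hypothetical observations from P1
       6.2, 5.9,               # from P2
       7.0, 6.8, 7.4)          # from P3
pop <- factor(rep(c("P1", "P2", "P3"), times = c(3, 2, 3)))
dat <- data.frame(x = x, pop = pop)
table(dat$pop)                 # the sample sizes n_1, ..., n_k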
  • 286. 284 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS The same four questions that we posed at the beginning of Chapter 10 and asked in Chapters 10–11 can be asked here. What distinguishes k-sample problems from 1-sample and 2-sample problems is the number of populations from which the experimental units were drawn. The prototypical case of a k-sample problem is the case of several treatment populations. One may wonder why we distinguish between k = 2 and k ≥ 3 pop- ulations. In fact, many methods for k-sample problems can be applied to 2-sample problems, in which case they often simplify to methods studied in Chapter 11. However, many issues arise with k ≥ 3 populations that do not arise with two populations, so the problem of comparing more than two location parameters is considerably more complicated than the problem of comparing only two. For this reason, our study of k-sample location problems will be less comprehensive than our previous studies of 1-sample and 2-sample location problems. 12.1 The Case of a Normal Shift Family In this section we assume that P = Normal(µi, σ2). This is sometimes called the fixed effects model for the oneway analysis of variance (ANOVA). Notice that we are assuming that each normal population has the same variance. Recall that we criticized the assumption of equal variances for the normal 2-sample problem. In that setting, however, Welch’s approximate t-test provides a viable alternative that is available in many popular statistical software packages. In the more complicated setting of k normal popula- tions, the assumption of equal variances (sometimes called the assumption of homoscedasticity) is fairly standard, if only because it is less clear how to proceed when the variances are unequal. The problem of unequal variances is discussed in Section 12.3. 12.1.1 The Fundamental Null Hypothesis The fundamental problem of the analysis of variance is the problem of testing the null hypothesis that all of the population means are the same, i.e., H0 : µ1 = · · · = µk, (12.1) against the alternative hypothesis that they are not all the same. Notice that the statement that the population means are not identical does not imply that each population mean is distinct. For example, if µ1 = µ2 = 1.5
and µ3 = 2.2, then H0 is false. We stress that the analysis of variance is concerned with inferences about means, not variances.

To motivate our test of H0, we formulate another null hypothesis that is equivalent to H0. First, let

    N = \sum_{i=1}^{k} n_i

denote the sum of the sample sizes and let

    \bar{\mu}_\cdot = \sum_{i=1}^{k} \frac{n_i}{N} \mu_i

denote the population grand mean. The population grand mean is a weighted average of the individual population means, each population weighted in proportion to how many of the observations were drawn from it. If H0 is true, then µ1 = · · · = µk have a common value, say µ, and the population grand mean equals that common value:

    \bar{\mu}_\cdot = \sum_{i=1}^{k} \frac{n_i}{N} \mu = \frac{\mu}{N} \sum_{i=1}^{k} n_i = \mu.

Next we introduce a quantity that measures how nearly the individual population means equal the population grand mean. Let

    \gamma = \sum_{i=1}^{k} n_i \left(\mu_i - \bar{\mu}_\cdot\right)^2.    (12.2)

Notice that γ ≥ 0 and that γ = 0 if and only if each µi = µ̄·. But each µi = µ̄· if and only if each individual mean assumes a common value, which occurs if and only if the individual means are identical. Thus, H0 is equivalent to the null hypothesis H0′ : γ = 0, which is to be tested against the alternative hypothesis H1′ : γ > 0.

12.1.2 Testing the Fundamental Null Hypothesis

The idea that underlies our test is to estimate γ and reject H0′ when the estimate is sufficiently larger than zero. To estimate γ, we need only estimate
the population means that appear in (12.2). The individual sample means,

    \bar{X}_{i\cdot} = \frac{1}{n_i} \sum_{j=1}^{n_i} X_{ij},

are unbiased estimators of the individual population means, and the sample grand mean,

    \bar{X}_{\cdot\cdot} = \sum_{i=1}^{k} \frac{n_i}{N} \bar{X}_{i\cdot} = \sum_{i=1}^{k} \frac{n_i}{N} \left( \frac{1}{n_i} \sum_{j=1}^{n_i} X_{ij} \right) = \frac{1}{N} \sum_{i=1}^{k} \sum_{j=1}^{n_i} X_{ij},

is an unbiased estimator of the population grand mean. Hence, a natural estimator of γ is the between-groups or treatment sum of squares,

    SS_B = \sum_{i=1}^{k} n_i \left(\bar{X}_{i\cdot} - \bar{X}_{\cdot\cdot}\right)^2,

the variation of the individual sample means about the sample grand mean. A useful formula for computing the observed value of SSB from the observed values of the individual sample means is

    ss_B = \sum_{i=1}^{k} n_i \bar{x}_{i\cdot}^2 - \frac{1}{N} \left( \sum_{i=1}^{k} n_i \bar{x}_{i\cdot} \right)^2.

What remains is to determine when SSB is "sufficiently larger than zero." We consider two cases, depending on whether or not the common population variance σ² is known.

Known Population Variance

Situations in which σ² is known are rarely encountered, but it is useful to consider how to proceed in this case. Here is the key fact that we require:

Theorem 12.1 Under the fundamental null hypothesis (12.1), the random variable

    SS_B/\sigma^2 \sim \chi^2(k - 1),

where χ²(ν) denotes the chi-squared distribution with ν degrees of freedom, introduced in Section 5.5. The quantity k − 1 is the between-groups degrees of freedom.
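The computing formula for ssB is easily evaluated in R. As a sketch, the following lines use the sample sizes and sample means that appear in Example 12.1 below; the result should agree with the value ssB ≈ 39.402 obtained there.

n    <- c(20, 25, 30)              # sample sizes n_i (from Example 12.1)
xbar <- c(1.489, 1.712, 3.082)     # individual sample means (from Example 12.1)
N    <- sum(n)
grand <- sum(n * xbar) / N                        # sample grand mean
ssB   <- sum(n * xbar^2) - sum(n * xbar)^2 / N    # computing formula for ss_B
ssB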
Theorem 12.1 suggests a way to determine whether or not SSB is "sufficiently larger than zero." Under H0,

    P(SS_B \geq q) = P\left(SS_B/\sigma^2 \geq q/\sigma^2\right) = P\left(Y \geq q/\sigma^2\right),

where Y ∼ χ²(k − 1); hence, we can use the chi-squared distribution to compute significance probabilities and/or critical values.

Example 12.1 Suppose that we draw samples of n1 = 20, n2 = 25, and n3 = 30 observations from normal populations with unknown means and common variance σ² = 9, obtaining sample means of x̄1 = 1.489, x̄2 = 1.712, and x̄3 = 3.082. To test the fundamental null hypothesis that the individual population means are identical, we first compute N = 20 + 25 + 30 = 75 and evaluate SSB, obtaining

    ss_B = \left(20 \cdot 1.489^2 + 25 \cdot 1.712^2 + 30 \cdot 3.082^2\right) - \left(20 \cdot 1.489 + 25 \cdot 1.712 + 30 \cdot 3.082\right)^2 / 75 \approx 39.402.

Now we use the R function pchisq to compute a significance probability p:

> 1-pchisq(39.402/9,df=2)
[1] 0.1120287

For conventional levels of significance, p > 0.10 is too large to warrant rejecting the null hypothesis.

Unknown Population Variance

Now we consider the more realistic case of an unknown population variance. Our development will mimic the case of a known population variance, but it is complicated by the need to estimate σ². Recall that, in Section 11.1.2, we estimated the unknown common population variance of k = 2 normal populations with the pooled sample variance,

    S_P^2 = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{(n_1 - 1) + (n_2 - 1)},

where S_i² is the sample variance for sample i. This procedure is easily extended to the present case of k ≥ 3 by defining the pooled sample variance
as

    S_P^2 = \frac{(n_1 - 1)S_1^2 + \cdots + (n_k - 1)S_k^2}{(n_1 - 1) + \cdots + (n_k - 1)} = \frac{1}{n_1 + \cdots + n_k - k} \sum_{i=1}^{k} (n_i - 1) S_i^2 = \frac{1}{N - k} \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(X_{ij} - \bar{X}_{i\cdot}\right)^2.

As in the case of k = 2,

    E S_P^2 = \frac{(n_1 - 1)E S_1^2 + \cdots + (n_k - 1)E S_k^2}{(n_1 - 1) + \cdots + (n_k - 1)} = \frac{(n_1 - 1)\sigma^2 + \cdots + (n_k - 1)\sigma^2}{(n_1 - 1) + \cdots + (n_k - 1)} = \sigma^2,

so the pooled sample variance is an unbiased estimator of a common population variance. It is also consistent and asymptotically efficient for estimating a common normal variance.

In the previous case of a known population variance, our statistic for testing the fundamental null hypothesis was SSB/σ². In the present case of an unknown population variance, we estimate σ² with S_P². Our test statistic will turn out to be SSB/S_P² multiplied by a constant. In order to simplify the formulas that follow, we multiply S_P² by N − k, obtaining the within-groups or error sum of squares

    SS_W = (N - k) S_P^2 = \sum_{i=1}^{k} (n_i - 1) S_i^2 = \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(X_{ij} - \bar{X}_{i\cdot}\right)^2.

In contrast to SSB, which measures the variation of the individual sample means about the sample grand mean, SSW measures the variation of the individual observations about the corresponding sample means. For completeness, we also define the total sum of squares,

    SS_T = \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(X_{ij} - \bar{X}_{\cdot\cdot}\right)^2,

which measures the variation of the individual observations about the sample grand mean. There is a beautiful relationship between SSB, SSW, and SST, viz.,
Theorem 12.2 SSB + SSW = SST.

This formula turns out to be a corollary of the Pythagorean Theorem in N-dimensional Euclidean space! (In Section 14.2, we will explore a similar formula in greater detail.) The reason that our method for testing the fundamental null hypothesis is called the analysis of variance is that the method relies on decomposing total squared error into squared error between groups and squared error within groups. This elegant, and extremely useful, decomposition is only possible when we use squared error.

The quantities SSB, SSW, and SST are random variables. The following facts, which subsume Theorem 12.1, summarize the statistical behavior of these random variables.

Theorem 12.3 The random variable

    SS_T/\sigma^2 \sim \chi^2(N - 1).

The quantity N − 1 is the total degrees of freedom. Under the fundamental null hypothesis (12.1), SSB and SSW are independent random variables and

    SS_B/\sigma^2 \sim \chi^2(k - 1), \qquad SS_W/\sigma^2 \sim \chi^2(N - k).

The quantity k − 1 is the between-groups degrees of freedom and the quantity N − k is the within-groups degrees of freedom.

We have already remarked that the random variable

    \frac{SS_B}{S_P^2} = \frac{SS_B}{SS_W/(N - k)}

would seem to be a natural statistic for testing the fundamental null hypothesis. Although sound in theory, this approach fails in practice because the distribution of SSB/S_P² is not tractable. Fortunately, this approach can be salvaged by a trivial modification. Applying the definition of Fisher's F distribution in Section 5.5 to the independent χ² random variables SSB/σ² and SSW/σ², we discover

Corollary 12.1 Under the fundamental null hypothesis (12.1),

    F = \frac{\left(SS_B/\sigma^2\right)/(k - 1)}{\left(SS_W/\sigma^2\right)/(N - k)} = \frac{SS_B/(k - 1)}{SS_W/(N - k)} \sim F(k - 1, N - k),
  • 292. 290 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS where F(ν1, ν2) denotes Fisher’s F distribution with ν1 and ν2 degrees of freedom. The random variable F is the desired test statistic; notice that F = SSB/(k − 1) SSW /(N − k) = 1 k − 1 SSB S2 P . Appealing to Corollary 12.1, we see that the ANOVA F-test of the fun- damental null hypothesis of equal population means is to reject H0 at sig- nificance level α if and only if the significance probability p = P(Y ≥ f) ≤ α, where f denotes the observed value of F and Y ∼ F(k−1, N −k). Of course, we can also formulate the test using critical values instead of significance probabilities, in which case we reject H0 at significance level α if and only if f ≥ q, where q is the 1 − α quantile of the F(k − 1, N − k) distribution. Example 12.2 Suppose that we draw samples of n1 = 25, n2 = 20, and n3 = 20 observations from normal populations with unknown means and unknown common variance, obtaining the following sample quantities: i = 1 i = 2 i = 3 ni 25 20 20 x̄i· 9.783685 10.908170 15.002820 s2 i 29.89214 18.75800 51.41654 To test the null hypothesis of equal population means at significance level α = 0.05, we begin by computing the observed values of SSB and SSW , obtaining ssB . = 322.4366 and ssW = (25−1)·29.89214+(20−1)·18.75800+(20−1)·51.41654 . = 2050.7280. It follows that the observed value of the test statistic is f = ssB/(k − 1) ssW /(N − k) . = 322.4366/2 2050.7280/62 . = 4.874141. Now we use the R function pf to compute a significance probability p: > 1-pf(4.874141,df1=2,df2=62) [1] 0.01081398
  • 293. 12.1. THE CASE OF A NORMAL SHIFT FAMILY 291 Because p < α, we reject the null hypothesis. Equivalently, we might use the R function qf to compute a critical value q: > qf(1-.05,df1=2,df2=62) [1] 3.145258 Because f > q, we reject the null hypothesis. The information related to an ANOVA F-test is usually collected in an ANOVA table: Source of Sum of Degrees of Mean Test Significance Variation Squares Freedom Squares Statistic Probability Between SSB k − 1 MSB F p Within SSW N − k MSW = S2 P Total SST N − 1 Note that we have introduced new notation for the mean squares, MSB = SSB/(k−1) and MSW = SSW /(N−k), allowing us to write F = MSB/MSW . It is also helpful to examine R2 = SSB/SST , the proportion of total varia- tion “explained” by differences in the sample means. Example 12.2 (continued) For the ANOVA performed in Example 12.2, the ANOVA table is Source SS df MS F p Between 322.4366 2 161.21830 4.874141 0.01081398 Within 2050.7280 62 33.07625 Total 2373.1640 64 The proportion of total variation explained by differences in the sample means is 322.4366/2373.1640 . = 0.1358678. Thus, although there is sufficient variation between the sample means for us to infer that the population means are not identical, this variation accounts for a fairly small proportion of the total variation in the data. 12.1.3 Planned Comparisons Rejecting the fundamental null hypothesis of equal population means leaves numerous alternatives. Typically, a scientist would like to say more than simply “H0 : µ1 = · · · = µk is false.” Concluding that the population
  • 294. 292 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS means are not identical naturally invites investigation of how they differ. Sections 12.1.3 and 12.1.4 describe several useful inferential procedures for performing more elaborate comparisons of population means. Section 12.1.3 describes two procedures that are appropriate when the scientist has deter- mined specific comparisons of interest in advance of the experiment. For reasons that will become apparent, this is the preferred case. However, it is often the case that a specific comparison occurs to a scientist after examin- ing the results of the experiment. Although statistical inference in such cases is rather tricky, a variety of procedures for a posteriori inference have been developed. Two such procedures are described in Section 12.1.4. Inspired by K.A. Brownlee’s classic statistics text,1 we motivate the con- cept of a planned comparison by considering a famous physics experiment. Example 12.3 Heyl (1930) attempted to determine the gravitational constant using k = 3 different materials—gold, platinum, and glass. It seems natural to ask not just if the three materials lead to identical determinations of the gravitational constant, by testing H0 : µ1 = µ2 = µ3, but also to ask: 1. If glass differs from the two heavy metals, by testing H0 : µ1 + µ2 2 = µ3 vs. H1 : µ1 + µ2 2 6= µ3, or, equivalently, H0 : µ1 + µ2 = 2µ3 vs. H1 : µ1 + µ2 6= 2µ3, or, equivalently, H0 : µ1 + µ2 − 2µ3 = 0 vs. H1 : µ1 + µ2 − 2µ3 6= 0, or, equivalently, H0 : θ1 = 0 vs. H1 : θ1 6= 0, where θ1 = µ1 + µ2 − 2µ3. 2. If the two heavy metals differ from each other, by testing H0 : µ1 = µ2 vs. H1 : µ1 6= µ2, 1 K.A. Brownlee, Statistical Theory and Methodology in Science and Engineering, Second Edition, John Wiley & Sons, 1965.
  • 295. 12.1. THE CASE OF A NORMAL SHIFT FAMILY 293 or, equivalently, H0 : µ1 − µ2 = 0 vs. H1 : µ1 − µ2 6= 0, or, equivalently, H0 : θ2 = 0 vs. H1 : θ2 6= 0, where θ2 = µ1 − µ2. Notice that both of the planned comparisons proposed in Example 12.3 have been massaged into testing a null hypothesis of the form θ = 0. For this construction to make sense, θ must have a special structure, which statisticians identify as a contrast. Definition 12.1 A contrast is a linear combination (weighted sum) of the k population means, θ = k X i=1 ciµi, for which Pk i=1 ci = 0. Example 12.3 (continued) In the contrasts suggested previously, 1. θ1 = 1 · µ1 + 1 · µ2 + (−2) · µ3 and 1 + 1 − 2 = 0; and 2. θ2 = 1 · µ1 + (−1) · µ2 + 0 · µ3 and 1 − 1 + 0 = 0. We usually identify different contrasts by their coefficients, e.g., c = (1, 1, −2) or c = (1, −1, 0). The methods of Section 12.1.2 are easily extended to the problem of testing a single contrast, H0 : θ = 0 versus H1 : θ 6= 0. In Definition 12.1, each population mean µi can be estimated by the unbiased estimator X̄i·; hence, an unbiased estimator of θ is θ̂ = k X i=1 ciX̄i·. We will reject H0 if θ̂ is observed sufficiently far from zero.
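As a small illustration (the sample means below are made up; Heyl's actual data appear in the continuation of Example 12.3), the point estimate of a contrast is simply the corresponding weighted sum of sample means:

xbar <- c(80, 65, 73)          # hypothetical sample means for k = 3 populations
cc   <- c(1, 1, -2)            # coefficients of theta = mu1 + mu2 - 2*mu3
theta.hat <- sum(cc * xbar)    # unbiased estimate of theta
theta.hat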
  • 296. 294 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS Once again, we rely on a squared error criterion and ask if the observed quantity (θ̂)2 is sufficiently far from zero. However, the quantity (θ̂)2 is not a satisfactory measure of departure from H0 : θ = 0 because its magnitude depends on the magnitude of the coefficients in the contrast. To remove this dependency, we form a ratio that does not depend on how the coefficients were scaled. The sum of squares associated with the contrast θ is the random variable SSθ = ³Pk i=1 ciX̄i· ´2 Pk i=1 c2 i /ni . The following facts about the distribution of SSθ lead to a test of H0 : θ = 0 versus H1 : θ 6= 0. Theorem 12.4 Under the fundamental null hypothesis H0 : µ1 = · · · = µk, SSθ is independent of SSW , SSθ/σ2 ∼ χ2(1), and F(θ) = SSθ σ2 /1 SSW σ2 /(N − k) = SSθ SSW /(N − k) ∼ F(1, N − k). The F-test of H0 : θ = 0 is to reject H0 if and only if p = PH0 (F(θ) ≥ f(θ)) ≤ α, i.e., if and only if f(θ) ≥ q = qf(1-α,df1=1,df2=N-k), where f(θ) denotes the observed value of F(θ). Example 12.3 (continued) Heyl (1930) collected the following data: Gold 83 81 76 78 79 72 Platinum 61 61 67 67 64 Glass 78 71 75 72 74 Applying the methods of Section 12.1.2, we obtain the following ANOVA table: Source SS df MS F p Between 565.1 2 282.6 26.1 0.000028 Within 140.8 13 10.8 Total 705.9 15
  • 297. 12.1. THE CASE OF A NORMAL SHIFT FAMILY 295 To test H0 : θ1 = 0 versus H1 : θ1 6= 0, we first compute ssθ1 = [1 · x̄1· + 1 · x̄2· + (−2) · x̄3·]2 12/6 + 12/5 + (−2)2/5 . = 29.16667, then f (θ1) = ssθ ssW /(N − k) . = 29.16667 140.8333/(16 − 3) . = 2.692308. Finally, we use the R function pf to compute a significance probability p: > 1-pf(2.692308,df1=1,df2=13) [1] 0.1247929 Because p > 0.05, we decline to reject the null hypothesis at significance level α = 0.05. Equivalently, we might use the R function qf to compute a critical value q: > qf(1-.05,df1=1,df2=13) [1] 4.667193 Because f < q, we decline to reject the null hypothesis. In practice, one rarely tests a single contrast. However, testing multiple contrasts involves more than testing each contrast as though it was the only contrast. Entire books have been devoted to the problem of multiple comparisons; the remainder of Section 12.1 describes four popular procedures for testing multiple contrasts. Orthogonal Contrasts When it can be used, the method of orthogonal contrasts is generally pre- ferred. It is quite elegant, but has certain limitations. We begin by explain- ing what it means for contrasts to be orthogonal. Definition 12.2 Two contrasts with coefficient vectors (c1, . . . , ck) and (d1, . . . , dk) are orthogonal if and only if k X i=1 cidi ni = 0. A collection of contrasts is mutually orthogonal if and only if each pair of contrasts in the collection is orthogonal.
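The calculations in Example 12.3, and the orthogonality condition in Definition 12.2, are easy to automate. The sketch below enters Heyl's (1930) measurements, defines a small function that computes ssθ and F(θ) for any contrast, and checks orthogonality for the coefficient vectors discussed in the continuation of the example below; for c = (1, 1, −2) it should reproduce ssθ1 ≈ 29.17 and f(θ1) ≈ 2.69.

gold     <- c(83, 81, 76, 78, 79, 72)
platinum <- c(61, 61, 67, 67, 64)
glass    <- c(78, 71, 75, 72, 74)
samples  <- list(gold, platinum, glass)

n    <- sapply(samples, length)     # sample sizes n_i
xbar <- sapply(samples, mean)       # sample means
ssW  <- sum(sapply(samples, function(s) sum((s - mean(s))^2)))
N <- sum(n); k <- length(samples)

contrast.F <- function(cc) {        # ss_theta, F(theta), and its significance probability
  ss <- sum(cc * xbar)^2 / sum(cc^2 / n)
  f  <- ss / (ssW / (N - k))
  c(ss = ss, f = f, p = 1 - pf(f, df1 = 1, df2 = N - k))
}
contrast.F(c(1, 1, -2))             # the contrast theta_1

orthogonal <- function(cc, dd) isTRUE(all.equal(sum(cc * dd / n), 0))
orthogonal(c(1, 1, -2), c(1, -1, 0))      # theta_1 and theta_2: not orthogonal here
orthogonal(c(1, 1, -2), c(18, -17, -1))   # theta_1 and theta_3: orthogonal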
Notice that, if n1 = · · · = nk, then the orthogonality condition simplifies to

    \sum_{i=1}^{k} c_i d_i = 0.

Students who know some linear algebra should recognize that this condition states that the dot product between the vectors c and d vanishes, i.e., that the vectors c and d are orthogonal (perpendicular) to each other.

Example 12.3 (continued) Whether or not two contrasts are orthogonal depends not only on their coefficient vectors, but also on the size of the samples drawn from each population.

• Suppose that Heyl (1930) had collected samples of equal size for each of the three materials that he used. If n1 = n2 = n3, then θ1 and θ2 are orthogonal because 1 · 1 + 1 · (−1) + (−2) · 0 = 0.

• In fact, Heyl (1930) collected samples with sizes n1 = 6 and n2 = n3 = 5. In this case, θ1 and θ2 are not orthogonal because

    \frac{1 \cdot 1}{6} + \frac{1 \cdot (-1)}{5} + \frac{(-2) \cdot 0}{5} = \frac{1}{6} - \frac{1}{5} \neq 0.

However, θ1 is orthogonal to θ3 = 18µ1 − 17µ2 − µ3 because

    \frac{1 \cdot 18}{6} + \frac{1 \cdot (-17)}{5} + \frac{(-2) \cdot (-1)}{5} = 3 - 3.4 + 0.4 = 0.

It turns out that the number of mutually orthogonal contrasts cannot exceed k − 1. Obviously, this fact limits the practical utility of the method; however, families of mutually orthogonal contrasts have two wonderful properties that commend their use.

First, any family of k − 1 mutually orthogonal contrasts partitions SSB into k − 1 separate components,

    SS_B = SS_{\theta_1} + \cdots + SS_{\theta_{k-1}},

each with one degree of freedom. This information is usually incorporated into an expanded ANOVA table, as in...
  • 299. 12.1. THE CASE OF A NORMAL SHIFT FAMILY 297 Example 12.3 (continued) In the case of Heyl’s (1930) data, the orthogonal contrasts θ1 and θ3 partition the between-groups sum-of-squares: Source SS df MS F p Between 565.1 2 282.6 26.1 0.000028 θ1 29.2 1 29.2 2.7 0.124793 θ3 535.9 1 535.9 49.5 0.000009 Within 140.8 13 10.8 Total 705.9 15 Testing the fundamental null hypothesis, H0 : µ1 = µ2 = µ3, results in a tiny signficance probability, leading us to conclude that the population means are not identical. The decomposition of the variation between groups into contrasts θ1 and θ3 provides insight into the differences between the population means. Testing the null hypothesis, H0 : θ1 = 0, results in a large signficance probability, leading us to conclude that the heavy metals do not, in tandem, differ from glass. However, testing the null hypothesis, H0 : θ3 = 0, results in a tiny signficance probability, leading us to conclude that the heavy metals do differ from each other. This is only possible if the glass mean lies between the gold and platinum means. For this simple example, our conclusions are easily checked by examining the raw data. The second wonderful property of mutually orthogonal contrasts is that tests of mutually orthogonal contrasts are mutually independent. As we shall demonstrate, this property provides us with a powerful way to address a crucial difficulty that arises whenever we test multiple hypotheses. The difficulty is as follows. When testing a single null hypothesis that is true, there is a small chance (α) that we will falsely reject the null hypothesis and commit a Type I error. When testing multiple null hypotheses, each of which are true, there is a much larger chance that we will falsely reject at least one of them. We desire control of this family-wide error rate, often abbreviated FWER. Definition 12.3 The family-wide error rate (FWER) of a family of con- trasts is the probability under the fundamental null hypothesis H0 : µ1 = · · · = µk of falsely rejecting at least one null hypothesis. The fact that tests of mutually orthogonal contrasts are mutually in- dependent allows us to deduce a precise relation between the significance level(s) of the individual tests and the FWER.
  • 300. 298 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS 1. Let Er denote the event that H0 : θr = 0 is falsely rejected. Then P(Er) = α is the rate of Type I error for an individual test. 2. Let E denote the event that at least one Type I error is committed, i.e., E = k−1 [ r=1 Er. The family-wide rate of Type I error is FWER = P(E). 3. The event that no Type I errors are committed is Ec = k−1 r=1 Ec r, and the probability of this event is P(Ec) = 1 − FWER. 4. By independence, 1 − FWER = P (Ec ) = P (Ec 1) × · · · × P ¡ Ec k−1 ¢ = (1 − α)k−1 ; hence, FWER = 1 − (1 − α)k−1 . Notice that FWER > α, i.e., the family rate of Type I error is greater than the error rate for an individual test. For example, if k = 3 and α = 0.05, then FWER = 1 − (1 − .05)2 = 0.0975. This phenomenon is sometimes called “alpha slippage.” To protect against alpha slippage, we usually prefer to specify the family rate of Type I error that will be tolerated, then compute a significance level that will ensure the specified family rate. For example, if k = 3 and we desire FWER = 0.05, then we solve 0.05 = 1 − (1 − α)2 to obtain a significance level of α = 1 − √ 0.95 . = 0.0253.
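Both directions of this calculation are one-liners in R. The sketch below computes the FWER implied by testing each of k − 1 mutually orthogonal contrasts at level α, and the per-test significance level required to achieve a specified FWER; with k = 3 it reproduces the values 0.0975 and 0.0253 obtained above.

k <- 3
alpha <- 0.05
FWER <- 1 - (1 - alpha)^(k - 1)    # family-wide error rate when each test uses level alpha
FWER                               # 0.0975 for k = 3

target <- 0.05
alpha.star <- 1 - (1 - target)^(1 / (k - 1))   # per-test level that yields FWER = 0.05
alpha.star                                     # approximately 0.0253 for k = 3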
  • 301. 12.1. THE CASE OF A NORMAL SHIFT FAMILY 299 Bonferroni t-Tests It is often the case that one desires to test contrasts that are not mutually orthogonal. This can happen with a small family of contrasts. For example, suppose that we want to compare a control mean µ1 to each of two treatment means, µ2 and µ3, in which case the natural contrasts have coefficient vectors c = (1, −1, 0) and d = (1, 0, −1). In this case, the orthogonality condition simplifies to 1/n1 = 0, which is impossible. Furthermore, as we have noted, families of more than k − 1 contrasts cannot be mutually orthogonal. Statisticians have devised a plethora of procedures for testing multiple contrasts that are not mutually orthogonal. Many of these procedures ad- dress the case of multiple pairwise contrasts, i.e., contrasts for which each coefficient vector has exactly two nonzero components. We describe one such procedure that relies on Bonferroni’s inequality. Suppose that we plan m pairwise comparisons. These comparisons are defined by contrasts θ1, . . . , θm, each of the form µi − µj, not necessarily mutually orthogonal. Notice that each H0 : θr = 0 versus H1 : θr 6= 0 is a normal 2-sample location problem with equal variances. From this observa- tion, the following facts can be deduced. Theorem 12.5 Under the fundamental null hypothesis H0 : µ1 = · · · = µk, Z = X̄i· − X̄j· r³ 1 ni + 1 nj ´ σ2 ∼ N(0, 1) and T (θr) = X̄i· − X̄j· r³ 1 ni + 1 nj ´ MSW ∼ t(N − k). From Theorem 12.5, the t-test of H0 : θr = 0 is to reject if and only if p = P (|T (θr)| ≥ |t(θr)|) ≤ α, i.e., if and only if |t (θr)| ≥ q = qt(1-α/2,df=N-k), where t(θr) denotes the observed value of T(θr). This t-test is virtually identical to Student’s 2-sample t-test, described in Section 11.1.2, except that it pools all k samples to estimate the common variance instead of only pooling the two samples that are being compared.
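R automates such pairwise comparisons with the function pairwise.t.test, which (with pool.sd = TRUE) pools all k samples to estimate the common variance, as in the statistic T(θr) of Theorem 12.5, and reports significance probabilities adjusted for multiple testing. Requesting the Bonferroni adjustment multiplies each p-value by the number of comparisons, which is equivalent to comparing the unadjusted p-values to FWER/m. A sketch, using Heyl's (1930) data purely for illustration:

x <- c(83, 81, 76, 78, 79, 72,    # gold
       61, 61, 67, 67, 64,        # platinum
       78, 71, 75, 72, 74)        # glass
material <- factor(rep(c("gold", "platinum", "glass"), times = c(6, 5, 5)))

pairwise.t.test(x, material, p.adjust.method = "bonferroni", pool.sd = TRUE)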
  • 302. 300 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS At this point, you may recall that Section 11.1 strongly discouraged the use of Student’s 2-sample t-test, which assumes a common population vari- ance. Instead, we recommended Welch’s approximate t-test. In the present case, our test of the fundamental null hypothesis H0 : µ1 = · · · = µk has already imposed the assumption of a common population variance, so our use of the T statistic in Theorem 12.5 is theoretically justified. But this justification is rather too glib, as it merely begs the question of why we as- sumed a common population variance in the first place. The general answer to this question is that the ANOVA methodology is extremely powerful and that comparable procedures in the case of unequal population variances may not exist. (Fortunately, ANOVA often provides useful insights even when its assumptions are violated. In such cases, however, one should interpret significance probabilities with extreme caution.) In the present case, a good procedure does exist, viz., the pairwise application of Welch’s approximate t-test. The following discussion of how to control the family-wide error rate in such cases applies equally to either type of pairwise t-test. Unless the pairwise contrasts are mutually orthogonal, we cannot use the multiplication rule for independent events to compute the family rate of Type I error. However, Bonferroni’s inequality states that FWER = P(E) = P Ã m [ r=1 Er ! ≤ m X r=1 P (Er) = mα; hence, we can ensure that the family rate of Type I error is no greater than a specified FWER by testing each contrast at significance level α = FWER/m. Example 12.3 (continued) Instead of planning θ1 and θ3, suppose that we had planned θ4 and θ5, defined by coefficient vectors c = (−1, 0, 1) and d = (0, −1, 1) respectively. To test θ4 and θ5 with a family-wide error rate of FWER ≤ 0.10, we first compute t (θ4) = x̄3· − x̄1· r³ 1 n3 + 1 n1 ´ msW . = −2.090605 and t (θ5) = x̄3· − x̄2· r³ 1 n3 + 1 n2 ´ msW . = 4.803845, resulting in the following significance probabilities:
  • 303. 12.1. THE CASE OF A NORMAL SHIFT FAMILY 301 > 2*pt(-2.090605,df=13) [1] 0.0567719 > 2*pt(-4.803845,df=13) [1] 0.0003444588 There are m = 2 pairwise comparisons. To ensure FWER ≤ 0.10, we com- pare the significance probabilities to α = 0.10/2 = 0.05, which leads us to reject H0 : θ5 = 0 and to decline to reject H0 : θ4 = 0. What do we lose by using Bonferroni’s inequality instead of the multipli- cation rule? Without the assumption of independence, we must be slightly more conservative in choosing a significance level that will ensure a specified family-wide rate of error. For the same FWER, Bonferroni’s inequality leads to a slightly smaller α than does the multiplication rule. The discrepancy grows as m increases. 12.1.4 Post Hoc Comparisons We now consider situations in which we determine that a comparison is of interest after inspecting the data. For example, suppose that we had decided to compare gold to platinum after inspecting Heyl’s (1930) data. This ought to strike you as a form of cheating. Almost every randomly generated data set will have an appealing pattern in it that may draw the attention of an interested observer. To allow such patterns to determine what the scientist will investigate is to invite abuse. Fortunately, statisticians have devised procedures that protect ethical scientists from the heightened risk of Type I error when the null hypothesis was constructed after the data were examined. The present section describes two such procedures. Bonferroni t-Tests To fully appreciate the distinction between planned and post hoc compar- isons, it is highly instructive to examine the method of Bonferroni t-tests. Suppose that only pairwise comparisons are of interest. Because we are test- ing after we have had the opportunity to inspect the data (and therefore to construct the contrasts that appear to be nonzero), we suppose that all pairwise contrasts were of interest a priori. Hence, whatever the number of pairwise contrasts actually tested a posteriori, we set m = Ã k 2 ! = k(k − 1) 2
  • 304. 302 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS and proceed as before. The difference between planned and post hoc comparisons is especially sobering when k is large. For example, suppose that we desire that the family-wide error rate does not exceed 0.10 when testing two pairwise con- trasts among k = 10 groups. If the comparisons were planned, then m = 2 and we can perform each test at signficance level α = 0.10/2 = 0.05. How- ever, if the comparisons were constructed after examining the data, then m = 45 and we must perform each test at signficance level α = 0.10/45 . = 0.0022. Obviously, much stronger evidence is required to reject the same null hypothesis when the comparison is chosen after examining the data. Scheffé F-Tests The reasoning that underlies Scheffé F-Tests for post hoc comparisons is analogous to the reasoning that underlies Bonferroni t-tests for post hoc comparisons. To accommodate the possibility that a general contrast was constructed after examining the data, Scheffé’s procedure is predicated on the assumption that all possible contrasts were of interest a priori. This makes Scheffé’s procedure the most conservative of all multiple comparison procedures. Scheffé’s F-test of H0 : θr = 0 versus H1 : θr 6= 0 is to reject H0 if and only if p = 1-pf(f(θ)/(k-1),df1=k-1,df2=N-k) ≤ α, i.e., if and only if f (θr) k − 1 ≥ q = qf(1-α,k-1,N-k), where f(θr) denotes the observed value of the F(θr) defined for the method of planned orthogonal contrasts. It can be shown that, no matter how many H0 : θr = 0 are tested by this procedure, the family-wide rate of Type I error is no greater than α. Example 12.3 (continued) Let θ6 = µ1 − µ3. Scheffé’s F-test pro- duces the following results: Source f(θr)/2 p θ1 1.3 0.294217 θ2 25.3 0.000033 θ3 24.7 0.000037 θ6 2.2 0.151995
  • 305. 12.2. THE CASE OF A GENERAL SHIFT FAMILY 303 For the first three comparisons, our conclusions are not appreciably affected by whether the contrasts were constructed before or after examining the data. However, if θ6 had been planned, we would have obtained f(θ6) = 4.4 and p = 0.056772, which might easily lead to a different conclusion. 12.2 The Case of a General Shift Family 12.2.1 The Kruskal-Wallis Test 12.3 The Behrens-Fisher Problem
  • 306. 304 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS 12.4 Exercises 1. Jean Kerr devoted an entire chapter of Please Don’t Eat the Daisies (1959) to the subject of dieting, observing that. . . “Today, with the science of nutrition advancing so rapidly, there is plenty of food for conversation, if for nothing else. We have the Rockefeller diet, the Mayo diet, high-protein diets, low-protein diets, “blitz” diets which feature cottage cheese and something that tastes like very thin sandpaper, and—finally—a liquid diet that duplicates all the rich, nour- ishing goodness of mother’s milk. I have no way of know- ing which of these is the most efficacious for losing weight, but there’s no question in my mind that as a conversation- stopper the “mother’s milk diet” is quite a ways out ahead.” For her master’s thesis, a nutrition student at the University of Arizona decides to compare several weight loss strategies. She recruits 140 moderately obese adult women and randomly assigns each woman to one of the following diets: Rockefeller, Mayo, Atkins (high-protein), a low-protein diet, a blitz diet, a liquid diet, and—as a control—Aunt Jean’s marshmallow fudge diet. Each woman is weighed before dieting, asked to follow the prescribed diet for eight weeks, then weighed again. The resulting data will be analyzed using the analysis of variance and related statistical techniques. (a) This is a k-sample problem. What is the value of k? (b) What null hypothesis is tested by an analysis of variance? (Your answer should specify relations between certain population pa- rameters. Be sure to define these parameters!) (c) How many pairwise comparisons are possible? (d) The student is especially interested in three pairwise comparisons: Atkins versus low-protein, low-protein versus fudge, and fudge versus liquid. Specify contrasts that correspond to each of these comparisons. (e) Are the preceding contrasts orthogonal? Why or why not? 2. As part of her senior thesis, a William & Mary physics major decides to repeat Heyl’s (1930) experiment for determining the gravitational
  • 307. 12.4. EXERCISES 305 constant using 4 different materials: silver, copper, topaz, and quartz. She plans to test 10 specimens of each material. (a) Three comparisons are planned: i. Metal (silver & copper) versus Gem (topaz & quartz) ii. Silver versus Copper iii. Topaz versus Quartz What contrasts correspond to these comparisons? Are they or- thogonal? Why or why not? If the desired family rate of Type I error is 0.05, then what significance level should be used for testing the null hypotheses H0 : θr = 0? (b) After analyzing the data, an ANOVA table is constructed. Com- plete the table from the information provided. Source SS df MS F p Between θ1 0.001399 θ2 0.815450 θ3 0.188776 Within 9.418349 Total (c) Referring to the above table, explain what conclusion the student should draw about each of her planned comparisons. (d) Assuming that the ANOVA assumption of homoscedasticity is warranted, use the above table to estimate the common popula- tion variance. 3. R. R. Sokal observed 25 females of each of three genetic lines (RS, SS, NS) of the fruitfly Drosophila melanogaster and recorded the number of eggs laid per day by each female for the first 14 days of her life. The lines labelled RS and SS were selectively bred for resistance and for susceptibility to the insecticide DDT. A nonselected control line is labelled NS. The purpose of the experiment was to investigate the following research questions: • Do the two selected lines (RS and SS) differ in fecundity from the nonselected line (NS)?
  • 308. 306 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS • Does the line selected for resistance (RS) differ in fecundity from the line selected for susceptibility (SS)? The data are presented in Table 12.1. RS 12.8 21.6 14.8 23.1 34.6 19.7 22.6 29.6 16.4 20.3 29.3 14.9 27.3 22.4 27.5 20.3 38.7 26.4 23.7 26.1 29.5 38.6 44.4 23.2 23.6 SS 38.4 32.9 48.5 20.9 11.6 22.3 30.2 33.4 26.7 39.0 12.8 14.6 12.2 23.1 29.4 16.0 20.1 23.3 22.9 22.5 15.1 31.0 16.9 16.1 10.8 NS 35.4 27.4 19.3 41.8 20.3 37.6 36.9 37.3 28.2 23.4 33.7 29.2 41.7 22.6 40.4 34.4 30.4 14.9 51.8 33.8 37.9 29.5 42.4 36.6 47.4 Table 12.1: Fecundity of Female Fruitflies (a) Use side-by-side boxplots and normal probability plots to investi- gate the ANOVA assumptions of normality and homoscedasticity. Do these assumptions seem plausible? Why or why not? (b) Construct constrasts that correspond to the research questions framed above. Verify that these constrasts are orthogonal. At what significance level should the contrasts be tested in order to maintain a family rate of Type I error equal to 5%? (c) Use ANOVA and the method of orthogonal contrasts to construct an ANOVA table. State the null and alternative hypotheses that are tested by these methods. For each null hypothesis, state whether or not it should be rejected. (Use α = 0.05 for the ANOVA hypothesis and the significance level calculated above for the contrast hypotheses.) 4. A number of Byzantine coins were discovered in Cyprus. These coins were minted during the reign of King Manuel I, Comnenus (1143–1180). It was determined that n1 = 9 of these coins were minted in an early coinage, n2 = 7 were minted several years later, n3 = 4 were minted in a third coinage, and n4 = 7 were minted in a fourth coinage. The silver content (percentage) of each coin was measured, with the results presented in Table 12.2.
  • 309. 12.4. EXERCISES 307 1 5.9 6.8 6.4 7.0 6.6 7.7 7.2 6.9 6.2 2 6.9 9.0 6.6 8.1 9.3 9.2 8.6 3 4.9 5.5 4.6 4.5 4 5.3 5.6 5.5 5.1 6.2 5.8 5.8 Table 12.2: Silver Content of Byzantine Coins (a) Investigate the ANOVA assumptions of normality and homoscedas- ticity. Do these assumptions seem plausible? Why or why not? (b) Construct an ANOVA table. State the null and alternative hy- potheses tested by this method. Should the null hypothesis be rejected at the α = 0.10 level? (c) Examining the data, it appears that coins minted early in King Manuel’s reign (the first two coinages) tended to contain more silver than coins minted later in his reign (the last two coinages). Construct a contrast that is suitable for investigating if this is the case. State appropriate null and alternative hypotheses and test them using Scheffé’s F-test for multiple comparisons with a significance level of 5%. 5. R. E. Dolkart and colleagues compared antibody responses in normal and alloxan diabetic mice. Three groups of mice were studied: normal, alloxan diabetic, and alloxan diabetic treated with insulin. Several comparisons are of interest: • Does the antibody response of alloxan diabetic mice differ from the antibody response of normal mice? • Does the antibody response of alloxan diabetic mice treated with insulin differ from the antibody response of normal mice? • Does treating alloxan diabetic mice with insulin affect their anti- body response? Table 12.3 contains the measured amounts of nitrogen-bound bovine serum albumen produced by the mice. (a) Using the above data, investigate the ANOVA assumptions of nor- mality and homoscedasticity. Do these assumptions seem plausi- ble for these data? Why or why not?
  • 310. 308 CHAPTER 12. K-SAMPLE LOCATION PROBLEMS Normal 156 282 197 297 116 127 119 29 253 122 349 110 143 64 26 86 122 455 655 14 Alloxan 391 46 469 86 174 133 13 499 168 62 127 276 176 146 108 276 50 73 Alloxan 82 100 98 150 243 68 228 131 73 18 +insulin 20 100 72 133 465 40 46 34 44 Table 12.3: Antibody Responses of Diabetic Mice (b) Now transform the data by taking the square root of each mea- surement. Using the transformed data, investigate the ANOVA assumptions of normality and homoscedasticity. Do these as- sumptions seem plausible for the transformed data? Why or why not? (c) Using the transformed data, construct an ANOVA table. State the null and alternative hypotheses tested by this method. Should the null hypothesis be rejected at the α = 0.05 level? (d) Using the transformed data, construct suitable contrasts for inves- tigating the research questions framed above. State appropriate null and alternative hypotheses and test them using the method of Bonferroni t-tests. At what significance level should these hy- potheses be tested in order to maintain a family rate of Type I error equal to 5%? Which null hypotheses should be rejected?
Chapter 13

Association

13.1 Categorical Random Variables

13.2 Normal Random Variables

The continuous random variables (X, Y) define a function that assigns a pair of real numbers to each experimental outcome. Let B = [a, b] × [c, d] ⊂ ℜ² be a rectangular set of such pairs and suppose that we want to compute

  P((X, Y) ∈ B) = P(X ∈ [a, b], Y ∈ [c, d]).

Just as we compute P(X ∈ [a, b]) using the pdf of X, so we compute P((X, Y) ∈ B) using the joint probability density function of (X, Y). To do so, we must extend the concept of area under the graph of a function of one variable to the concept of volume under the graph of a function of two variables.

Theorem 13.1 Let X be a continuous random variable with pdf fx and let Y be a continuous random variable with pdf fy. In this context, fx and fy are called the marginal pdfs of (X, Y). Then there exists a function f : ℜ² → ℜ, the joint pdf of (X, Y), such that

  P((X, Y) ∈ B) = VolumeB(f) = ∫[a,b] ∫[c,d] f(x, y) dy dx    (13.1)

for all rectangular subsets B. If X and Y are independent, then f(x, y) = fx(x)fy(y).
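For independent X and Y, the theorem lets rectangle probabilities be computed by multiplying marginal probabilities. A small R check of this, assuming for illustration only that X and Y are independent standard normals and B = [0, 1] × [0, 2]:

  # P((X,Y) in B) for independent standard normals: product of marginal probabilities.
  a <- 0; b <- 1        # x-interval [a, b]
  cc <- 0; dd <- 2      # y-interval [cc, dd]
  (pnorm(b) - pnorm(a)) * (pnorm(dd) - pnorm(cc))

  # The same volume obtained by numerically integrating the joint pdf
  # f(x, y) = fx(x) * fy(y) over the rectangle:
  inner <- function(x) sapply(x, function(xx)
    integrate(function(y) dnorm(xx) * dnorm(y), lower = cc, upper = dd)$value)
  integrate(inner, lower = a, upper = b)$value

Both computations return approximately 0.163.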
Remark: If (13.1) is true for all rectangular subsets of ℜ², then it is true for all subsets in the sigma-field generated by the rectangular subsets.

We can think of the joint pdf as a function that assigns an elevation to a point identified by two coordinates, longitude (x) and latitude (y). Noting that topographic maps display elevations via contours of constant elevation, we can describe a joint pdf by identifying certain of its contours, i.e., subsets of ℜ² on which f(x, y) is constant.

Definition 13.1 Let f denote the joint pdf of (X, Y) and fix c > 0. Then

  {(x, y) ∈ ℜ² : f(x, y) = c}

is a contour of f.

13.2.1 Bivariate Normal Distributions

Suppose that X ∼ Normal(0, 1) and Y ∼ Normal(0, 1), not necessarily independent. To measure the degree of dependence between X and Y, we consider the quantity E(XY).

• If there is a positive association between X and Y, then experimental outcomes that have...
  – positive values of X will tend to have positive values of Y, so XY will tend to be positive;
  – negative values of X will tend to have negative values of Y, so XY will tend to be positive.
  Hence, E(XY) > 0 indicates positive association.

• If there is a negative association between X and Y, then experimental outcomes that have...
  – positive values of X will tend to have negative values of Y, so XY will tend to be negative;
  – negative values of X will tend to have positive values of Y, so XY will tend to be negative.
  Hence, E(XY) < 0 indicates negative association.
If X ∼ Normal(µx, σx²) and Y ∼ Normal(µy, σy²), then we measure dependence after converting to standard units:

Definition 13.2 Let µx = EX and σx² = Var X < ∞. Let µy = EY and σy² = Var Y < ∞. The population product-moment correlation coefficient of X and Y is

  ρ = ρ(X, Y) = E[ ((X − µx)/σx) ((Y − µy)/σy) ].

The product-moment correlation coefficient has the following properties:

Theorem 13.2 If X and Y have finite variances, then

1. −1 ≤ ρ ≤ 1.
2. ρ = ±1 if and only if (Y − µy)/σy = ±(X − µx)/σx, in which case Y is completely determined by X.
3. If X and Y are independent, then ρ = 0.
4. If X and Y are normal random variables for which ρ = 0, then X and Y are independent.

If ρ = ±1, then the values of (X, Y) fall on a straight line. If |ρ| < 1, then the five population parameters (µx, µy, σx², σy², ρ) determine a unique bivariate normal pdf. The contours of this joint pdf are concentric ellipses centered at (µx, µy). We use one of these ellipses to display the basic features of the bivariate normal pdf in question.

Definition 13.3 Let f denote a nondegenerate (|ρ| < 1) bivariate normal pdf. The population concentration ellipse is the contour of f that contains the four points (µx ± σx, µy ± σy).

It is not difficult to create an R function that plots concentration ellipses. The function binorm.ellipse is described in Appendix R and/or can be obtained from the web page for this book/course.
Example 13.1 The following R commands produce the population concentration ellipse for a bivariate normal distribution with parameters µx = 10, µy = 20, σx² = 4, σy² = 16 and ρ = 0.5:

> pop <- c(10,20,4,16,.5)
> binorm.ellipse(pop)

The ellipse plotted by these commands is displayed in Figure 13.1.

Figure 13.1: The population concentration ellipse for a bivariate normal distribution with parameters (µx, µy, σx², σy², ρ) = (10, 20, 4, 16, 0.5).

Unless the population concentration ellipse is circular, it has a unique major axis. The line that coincides with this axis is the first principal component of the population and plays an important role in multivariate statistics. We will encounter this line again in Chapter 14.
13.2.2 Bivariate Normal Samples

A bivariate sample is a set of paired observations:

  (x1, y1), (x2, y2), . . . , (xn, yn).

We assume that each pair (xi, yi) was independently drawn from the same bivariate distribution. Bivariate samples are usually stored in an n × 2 data matrix,

  [ x1  y1 ]
  [ x2  y2 ]
  [  ...   ]
  [ xn  yn ]

and are often displayed by plotting each (xi, yi) in the Cartesian plane. The resulting figure is called a scatter diagram.

Example 13.2 Twenty students enrolled in Math 351 (Applied Statistics) at the College of William & Mary produced the following scores on two midterm tests:

   x   y
  87  87
  25  57
  76  91
  84  67
  91  67
  82  66
  94  86
  89  74
  92  92
  76  85
  84  75
  99  92
  92  55
  74  74
  84  74
  94  69
  99  98
  63  81
  82  80
  91  85
A scatter diagram of these data is displayed in Figure 13.2. Typically, it is easier to discern patterns by inspecting a scatter diagram than by inspecting a table of numbers. In particular, note the presence of an apparent outlier.

Figure 13.2: A scatter diagram of a bivariate sample (x = score on Test 1, y = score on Test 2). Each point corresponds to a student. The horizontal position of the point represents the student's score on the first midterm test; the vertical position of the point represents the student's score on the second midterm test.

The population from which the bivariate sample in Example 13.2 was drawn is not known, so this sample should not be interpreted as a typical example of a bivariate normal sample. However, it is not difficult to create an R function that simulates sampling from a specified bivariate normal population. The function binorm.sample is described in Appendix R and/or can be obtained from the web page for this book/course.
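The book's binorm.sample is provided in Appendix R; what follows is only a minimal sketch of how such a function might be written, using the standard construction X = µx + σx Z1, Y = µy + σy(ρZ1 + √(1 − ρ²) Z2) for independent standard normals Z1 and Z2. The function name and the layout of the pop argument mirror this chapter's conventions, but the code itself is an assumption, not the book's implementation.

  # pop = c(mu.x, mu.y, var.x, var.y, rho); returns an n-by-2 data matrix.
  my.binorm.sample <- function(pop, n) {
    mu.x <- pop[1]; mu.y <- pop[2]
    sd.x <- sqrt(pop[3]); sd.y <- sqrt(pop[4]); rho <- pop[5]
    z1 <- rnorm(n); z2 <- rnorm(n)
    x <- mu.x + sd.x * z1
    y <- mu.y + sd.y * (rho * z1 + sqrt(1 - rho^2) * z2)
    cbind(x, y)
  }
  # Usage, paralleling Example 13.1:
  pop <- c(10, 20, 4, 16, .5)
  Data <- my.binorm.sample(pop, 5)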
Example 13.1 (continued) The following R command draws n = 5 observations from the previously specified bivariate normal distribution:

> binorm.sample(pop,5)
          [,1]     [,2]
[1,] 12.293160 24.07643
[2,] 11.819520 24.13076
[3,] 11.529582 17.28637
[4,]  6.912459 23.39430
[5,] 11.043991 18.12538

Notice that binorm.sample returns the sample in the form of a data matrix.

Having observed a bivariate normal sample, we inquire how to estimate the five population parameters (µx, µy, σx², σy², ρ). We have already discussed how to estimate the population means (µx, µy) with the sample means (x̄, ȳ) and the population variances (σx², σy²) with the sample variances (sx², sy²). The plug-in estimate of ρ is

  ρ̂ = (1/n) Σ [ ((xi − µ̂x)/σ̂x) ((yi − µ̂y)/σ̂y) ]
     = (1/n) Σ [ ((xi − x̄)/√((n − 1)sx²/n)) ((yi − ȳ)/√((n − 1)sy²/n)) ]
     = (1/(n − 1)) Σ [ ((xi − x̄)/sx) ((yi − ȳ)/sy) ],

where σ̂x = √(σ̂x²) and σ̂y = √(σ̂y²). This quantity is Pearson's product-moment correlation coefficient, usually denoted r.

It is not difficult to create an R function that computes the estimates (x̄, ȳ, sx², sy², r) from a bivariate data matrix. The function binorm.estimate is described in Appendix R and/or can be obtained from the web page for this book/course.
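Again, binorm.estimate itself lives in Appendix R; the following one-liner is only a sketch of such a function, an assumption about its behavior rather than the book's code. It returns (x̄, ȳ, sx², sy², r) from an n × 2 data matrix; note that R's var and cor use the (n − 1)-denominator quantities, which is exactly what is wanted here.

  # Returns c(xbar, ybar, sx^2, sy^2, r) for an n-by-2 data matrix.
  my.binorm.estimate <- function(Data) {
    x <- Data[, 1]; y <- Data[, 2]
    c(mean(x), mean(y), var(x), var(y), cor(x, y))
  }
  # Equivalently, r can be computed from its definition:
  # r <- sum((x - mean(x)) / sd(x) * (y - mean(y)) / sd(y)) / (length(x) - 1)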
Example 13.1 (continued) The following R commands draw n = 100 observations from a bivariate normal distribution with parameters µx = 10, µy = 20, σx² = 4, σy² = 16 and ρ = 0.5, then estimate the parameters from the sample:

> Data <- binorm.sample(pop,100)
> binorm.estimate(Data)
[1]  9.8213430 20.3553502  4.2331147 16.7276819  0.5632622

Naturally, the estimates do not equal the estimands because of sampling variation.

Finally, it is not difficult to create an R function that plots a scatter diagram and overlays the sample concentration ellipse, i.e., the concentration ellipse constructed using the computed sample quantities (x̄, ȳ, sx², sy², r) instead of the unknown population quantities (µx, µy, σx², σy², ρ). The function binorm.scatter is described in Appendix R and/or can be obtained from the web page for this book/course.

Example 13.1 (continued) The following R command creates the overlaid scatter diagram displayed in Figure 13.3:

> binorm.scatter(Data)

When analyzing bivariate data, it is good practice to examine both the scatter diagram and the sample concentration ellipse in order to ascertain how well the latter summarizes the former. A poor summary suggests that the sample may not have been drawn from a bivariate normal distribution, as in Figure 13.4.

13.2.3 Inferences about Correlation

We have already observed that ρ̂ = r is the plug-in estimate of ρ. In this section, we consider how to test hypotheses about and construct confidence intervals for ρ. Given normal random variables X and Y, an obvious question is whether or not they are uncorrelated. To answer this question, we test the null hypothesis H0 : ρ = 0 against the alternative hypothesis H1 : ρ ≠ 0. (One might also be interested in one-sided hypotheses and ask, for example, whether or not there is convincing evidence of positive correlation.) We can derive a test from the following fact about the plug-in estimator of ρ.
Figure 13.3: A scatter diagram of a bivariate normal sample, with the sample concentration ellipse overlaid.

Theorem 13.3 Suppose that (Xi, Yi), i = 1, . . . , n, are independent pairs of random variables with a bivariate normal distribution. Let ρ̂ denote the plug-in estimator of ρ. If Xi and Yi are uncorrelated, i.e., ρ = 0, then

  ρ̂ √(n − 2) / √(1 − ρ̂²) ∼ t(n − 2).

Assuming that (Xi, Yi) have a bivariate normal distribution, Theorem 13.3 allows us to compute a significance probability for testing H0 : ρ = 0 versus H1 : ρ ≠ 0. Let T ∼ t(n − 2). Then the probability of observing |ρ̂| ≥ |r| under H0 is

  p = P( |T| ≥ r √(n − 2) / √(1 − r²) )
and we reject H0 if and only if p ≤ α. Equivalently, we reject H0 if and only if (iff)

  | r √(n − 2) / √(1 − r²) | ≥ qt   iff   r²(n − 2)/(1 − r²) ≥ qt²   iff   r² ≥ qt²/(n − 2 + qt²),

where qt = qt(1 − α/2, n − 2).

Figure 13.4: A scatter diagram for which the sample concentration ellipse is a poor summary. These data were not drawn from a bivariate normal distribution.

When testing hypotheses about correlation, it is important to appreciate the distinction between statistical significance and material significance. Strong evidence that an association exists is not the same as evidence of a strong association. The following examples illustrate the distinction.
Example 13.3 I used binorm.sample to draw a sample of n = 300 observations from a bivariate normal distribution with a population correlation coefficient of ρ = 0.1. This is a rather weak association. I then used binorm.estimate to compute a sample correlation coefficient of r = 0.16225689. The test statistic is

  r √(n − 2) / √(1 − r²) = 2.838604

and the significance probability is

  p = 2*pt(-2.838604, 298) = 0.004842441.

This is fairly decisive evidence that ρ ≠ 0, but concluding that X and Y are correlated does not warrant concluding that X and Y are strongly correlated.

Example 13.4 I used binorm.sample to draw a sample of n = 10 observations from a bivariate normal distribution with a population correlation coefficient of ρ = 0.8. This is a fairly strong association. I then used binorm.estimate to compute a sample correlation coefficient of r = 0.3759933. The test statistic is

  r √(n − 2) / √(1 − r²) = 1.147684

and the significance probability is

  p = 2*pt(-1.147684, 8) = 0.2842594.

There is scant evidence that ρ ≠ 0, despite the fact that X and Y are strongly correlated.

Although testing whether or not ρ = 0 is an important decision, it is not the only inference of interest. For example, if we want to construct confidence intervals for ρ, then we need to test H0 : ρ = ρ0 versus H1 : ρ ≠ ρ0. To do so, we rely on an approximation due to Ronald Fisher. Let

  ζ = (1/2) log((1 + ρ)/(1 − ρ))

and rewrite the hypotheses as H0 : ζ = ζ0 versus H1 : ζ ≠ ζ0. This is sometimes called Fisher's z-transformation. Fisher discovered that

  ζ̂ = (1/2) log((1 + ρ̂)/(1 − ρ̂))

is approximately distributed as Normal(ζ, 1/(n − 3)),
which allows us to compute an approximate significance probability. Let Z ∼ Normal(0, 1) and set

  z = (1/2) log((1 + r)/(1 − r)).

Then

  p ≈ P( |Z| ≥ |z − ζ0| √(n − 3) )

and we reject H0 : ζ = ζ0 if and only if p ≤ α. Equivalently, we reject H0 : ζ = ζ0 if and only if

  |z − ζ0| √(n − 3) ≥ qz,

where qz = qnorm(1 − α/2).

To construct an approximate (1 − α)-level confidence interval for ρ, we first observe that

  z ± qz/√(n − 3)    (13.2)

is an approximate (1 − α)-level confidence interval for ζ. We then use the inverse of Fisher's z-transformation,

  ρ = (e^(2ζ) − 1)/(e^(2ζ) + 1),

to transform (13.2) to a confidence interval for ρ.

Example 13.5 Suppose that we draw n = 100 observations from a bivariate normal distribution and observe r = 0.5. To construct a 0.95-level confidence interval, we use qz ≈ 1.96. First we compute

  z = (1/2) log((1 + 0.5)/(1 − 0.5)) = 0.5493061

and

  z ± qz/√(n − 3) ≈ 0.5493061 ± 1.96/√97 = (0.350302, 0.7483103)

to obtain a confidence interval (a, b) for ζ. The corresponding confidence interval for ρ is

  ( (e^(2a) − 1)/(e^(2a) + 1), (e^(2b) − 1)/(e^(2b) + 1) ) = (0.3366433, 0.6341398).

Notice that the plug-in estimate ρ̂ = r = 0.5 is not the midpoint of this interval.
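These calculations are easy to script. The sketch below is a hedged illustration (not code from the book's Appendix R): it computes the t-statistic and significance probability of Theorem 13.3 and the approximate Fisher-z confidence interval, using the numbers of Example 13.5. When the raw sample is available, R's built-in cor.test reports the same t-statistic and a Fisher-z-based confidence interval.

  # Test H0: rho = 0 from a sample correlation r and sample size n (Theorem 13.3):
  r <- 0.5; n <- 100
  t.stat <- r * sqrt(n - 2) / sqrt(1 - r^2)
  p <- 2 * pt(-abs(t.stat), df = n - 2)

  # Approximate 0.95-level confidence interval for rho via Fisher's z-transformation:
  alpha <- 0.05
  z <- 0.5 * log((1 + r) / (1 - r))            # equivalently atanh(r)
  ci.z <- z + c(-1, 1) * qnorm(1 - alpha/2) / sqrt(n - 3)
  (exp(2 * ci.z) - 1) / (exp(2 * ci.z) + 1)    # equivalently tanh(ci.z)
  # [1] 0.3366433 0.6341398   (reproduces Example 13.5)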
  • 323. 13.3. MONOTONIC ASSOCIATION 321 13.3 Monotonic Association 13.4 Spurious Association
  • 324. 322 CHAPTER 13. ASSOCIATION 13.5 Exercises 1. Consider the following data matrix: 4.81310497776088 5.50546805210632 3.20790912734096 3.23537831017746 2.03360531141548 1.57466192734915 3.80353555823225 4.0777212868518 3.44874039566775 3.57596515608872 4.02513467455476 4.39110976256498 4.18921274133904 4.62315118989928 1.57765999081644 0.929857871257454 2.55801286069007 2.31628619574412 3.30197349607145 3.36840541617217 3.49344457748324 3.63918641630698 3.84773963203205 4.14023528753161 1.6571339655711 1.04225104421118 2.01676932918443 1.55085225294214 3.26802020797819 3.32038821566353 3.21119453633111 3.24002458012926 3.98834405943784 4.33907997569859 3.39396169865743 3.49849637984759 3.98470335590536 4.33393124338638 2.92484672005844 2.83506761480053 3.24990948234283 2.98840952533401 4.48210022495756 1.24582866569767 2.49246311350902 4.05960045290903 2.5490793094774 3.97953306072058 3.56806772786439 2.53846581953658 2.58341332552653 3.93097742957316 3.00614070448958 3.33315063705718 3.59845899773574 2.49548607350678 3.24798603840268 2.99112968584062 3.27071210738312 2.95899017086906 3.61265049129421 2.47541627084607 3.98487089689919 1.94901712504748 2.92139406397179 3.453000485443 2.10733672639563 4.60425141279254 3.20304499253985 3.05468592240708 1.84295811639769 4.97813922865297 3.11571443259585 3.17818998468951 3.5505950180758 2.56317596269101 3.41454250084746 2.75558327775034 2.6505463184044 3.83603704056258
  • 325. 13.5. EXERCISES 323 (a) Do the x values appear to have been drawn from a normal distri- bution? Why or why not? (b) Do the y values appear to have been drawn from a normal distri- bution? Why or why not? (c) Do the (x, y) values appear to have been drawn from a bivariate normal distribution? Why or why not? (d) Suggest an explanation for the phenomena observed in (a)–(c). Is this a paradox? How do you think that these (x, y) pairs were obtained? Hint: Do not try to type these data into R! They are available elec- tronically. Assuming that the data matrix is stored in a text file named ex131.dat, located in the root directory of a diskette, the following command reads the data into the Windows version of R: > Data <- matrix(scan("a:ex131.dat"),byrow=T,ncol=2) The following R commands then create vectors of x and y values: > x <- Data[,1] > y <- Data[,2] 2. Consider the test score data reported in Example 13.2. (a) Quantify the association between midterm test scores by com- puting Pearson’s product-moment correlation coefficient. Is the association positive or negative? (b) Examining the scatter diagram displayed in Figure 13.2, one stu- dent appears to be an outlier. Omitting the corresponding row of the data matrix, re-compute Pearson’s product-moment correla- tion coefficient. How does the outlier affect the value of r? Hint: If Data is a complete data matrix, then Data[-17,] is the same data matrix without row 17.
3. Pearson and Lee reported the following heights (in inches) of eleven pairs of siblings:

  sister  brother
    69      71
    64      68
    65      66
    63      67
    65      70
    62      71
    65      70
    64      73
    66      72
    59      65
    62      66

Assuming that these pairs were drawn from a bivariate normal population, construct a confidence interval for ρ, the population product-moment correlation coefficient, that has a confidence level of approximately 0.90.

Hint: If x is the vector of sister heights and y is the vector of brother heights (in the same order), then the following R command creates the above data matrix:

> Data <- cbind(x,y)

4. Let α = 0.05.

(a) Suppose that we sample from a bivariate normal distribution with ρ = 0.5. Assuming that we observe r = 0.5, how large a sample will be needed to reject H0 : ρ = 0 in favor of H1 : ρ ≠ 0?

(b) Suppose that we sample from a bivariate normal distribution with ρ = 0.1. Assuming that we observe r = 0.1, how large a sample will be needed to reject H0 : ρ = 0 in favor of H1 : ρ ≠ 0?
  • 327. Chapter 14 Simple Linear Regression One way to quantify the association between two random variables, X and Y , is to quantify the extent to which knowledge of X allows one to predict values of Y . Notice that this approach to association is asymmetric: one variable (conventionally denoted X) is the predictor variable and the other variable (conventionally denoted Y ) is the predicted variable. The predictor variable is often called the independent variable and the predicted variable is often called the dependent variable. We will eschew this terminology, as it has nothing to do with the probabilistic (in)dependence of events and random variables. 14.1 The Regression Line Suppose that Y ∼ Normal(µy, σ2 y) and that we want to predict the outcome of an experiment in which we observe Y . If we know µy, then the obvious value of Y to predict is EY = µy. The expected value of the squared error of this prediction is E(Y − µy)2 = Var Y = σ2 y. Now suppose that X ∼ Normal(µx, σ2 x) and that we observe X = x. Again we want to predict Y . Does knowing X = x allow us to predict Y more accurately? The answer depends on the association between X and Y . If X and Y are independent, then knowing X = x will not help us predict Y . If X and Y are dependent, then knowing X = x should help us predict Y . Example 14.1 Suppose that we want to predict the adult height to which a male baby will grow. Knowing only that adult male heights are normally distributed, we would predict the average height of this population. 325
However, if we knew that the baby's father had attained a height of 6′11″, then we surely would be inclined to revise our prediction and predict that the baby will grow to a greater-than-average height.

When X and Y are normally distributed, the key to predicting Y from X = x is the following result.

Theorem 14.1 Suppose that (X, Y) have a bivariate normal distribution with parameters (µx, µy, σx², σy², ρ). Then the conditional distribution of Y given X = x is

  Y | X = x ∼ Normal( µy + ρ (σy/σx)(x − µx), (1 − ρ²) σy² ).

Because Y | X = x is normally distributed, the obvious value of Y to predict when X = x is

  ŷ(x) = E(Y | X = x) = µy + ρ (σy/σx)(x − µx).    (14.1)

Interpreting (14.1) as a function that assigns a predicted value of Y to each value of x, we see that the prediction function (14.1) corresponds to a line that passes through the point (µx, µy) with slope ρ σy/σx. The prediction function (14.1) is the population regression function and the corresponding line is the population regression line.

The expected squared error of the prediction (14.1) is Var(Y | X = x) = (1 − ρ²)σy². Notice that this quantity does not depend on the value of x. If X and Y are strongly correlated, then ρ ≈ ±1, (1 − ρ²)σy² ≈ 0, and prediction is extremely accurate. If X and Y are uncorrelated, then ρ = 0, (1 − ρ²)σy² = σy², and the accuracy of prediction is not improved by knowing X = x.

These remarks suggest a natural way of interpreting what ρ actually measures: the proportion by which the expected squared error of prediction is reduced by virtue of knowing X = x is

  (σy² − (1 − ρ²)σy²) / σy² = ρ²,

the population coefficient of determination. Statisticians often express this interpretation by saying that ρ² is "the proportion of variation explained by linear regression." Of course, as we emphasized in Section 13.4, this is not an explanation in the sense of articulating a causal mechanism.
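A small R sketch of the population regression function (14.1) follows; pop.regress is a hypothetical helper written here for illustration, using the same pop vector convention as Chapter 13. With the parameters of Example 14.2 below, it reproduces ŷ(12) = 22.

  # Population regression function: yhat(x) = mu.y + rho*(sd.y/sd.x)*(x - mu.x).
  # pop = c(mu.x, mu.y, var.x, var.y, rho), as elsewhere in this chapter.
  pop.regress <- function(x, pop) {
    pop[2] + pop[5] * sqrt(pop[4] / pop[3]) * (x - pop[1])
  }
  pop <- c(10, 20, 4, 16, .5)
  pop.regress(12, pop)      # 22, as in Example 14.2
  (1 - pop[5]^2) * pop[4]   # expected squared prediction error, (1 - rho^2)*sigma.y^2 = 12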
Example 14.2 Suppose that (µx, µy, σx², σy², ρ) = (10, 20, 2², 4², 0.5). Then

  ŷ(x) = 20 + 0.5 · (4/2)(x − 10) = x + 10

and ρ² = 0.25.

Rewriting (14.1), the equation for the population regression line, as

  (ŷ(x) − µy)/σy = ρ (x − µx)/σx,

we discern an important fact:

Corollary 14.1 Suppose that (x, y) lies on the population regression line. If x lies z standard deviations above µx, then y lies ρz standard deviations above µy.

Example 14.2 (continued) The value x = 12 lies (12 − 10)/2 = 1 standard deviation above the X-population mean, µx = 10. The predicted y-value that corresponds to x = 12, ŷ(12) = 12 + 10 = 22, lies (22 − 20)/4 = 0.5 standard deviations above the Y-population mean, µy = 20.

Example 14.2 (continued) The 0.90 quantile of X is

  x = qnorm(.9,mean=10,sd=2) = 12.5631.

The predicted y-value that corresponds to x = 12.5631 is ŷ(12.5631) = 22.5631. At what quantile of Y does the predicted y-value lie? The answer is

  P(Y ≤ ŷ(x)) = pnorm(22.5631,mean=20,sd=4) = 0.7391658.

At first, most students find the preceding example counterintuitive. If x lies at the 0.90 quantile of X, then should we not predict ŷ(x) to lie at the 0.90 quantile of Y? This is a natural first impression, but one that must be dispelled. We begin by considering two familiar situations:

1. Consider the case of a young boy whose father is extremely tall, at the 0.995 quantile of adult male heights. We surely would predict that the boy will grow to be quite tall. But precisely how tall? A father's height does not completely determine his son's height. Height is also
affected by myriad other factors, considered here as chance variation. Statistically speaking, it's more likely that the boy will grow to an adult height slightly shorter than his extremely tall father than that he will grow to be even taller.

2. Consider the case of two college freshmen, William and Mary, who are enrolled in an introductory chemistry class of 250 students. On the first midterm examination, Mary attains the 5th highest score and William obtains the 245th highest (5th lowest) score. How should we predict their respective performances on the second midterm examination? There is undoubtedly a strong, positive correlation between scores on the two tests. We surely will predict that Mary will do quite well on the second test and that William will do rather badly. But how well and how badly? One test score does not completely determine another; if it did, then computing semester grades would be easy! Mary can't do much better on the second test than she did on the first, but she might easily do worse. Statistically speaking, it's likely that she'll rank slightly below 5th on the second test. Likewise, William can't do much worse on the second test than he did on the first. Statistically speaking, it's likely that he'll rank slightly above 245th on the second test.

The phenomenon that we have just described, that experimental units with extreme X quantiles will tend to have less extreme Y quantiles, is purely statistical. It was first discerned by Sir Francis Galton, who called it "regression to mediocrity." Modern statisticians call it regression to the mean, or simply the regression effect.

Having refined our intuition, we can now explain the regression effect by examining the population concentration ellipse in Figure 14.1. For simplicity, we assume that X and Y have been converted to standard units. The bivariate normal population represented in Figure 14.1 has population parameters µx = µy = 0, σx² = σy² = 1, and ρ = 0.5. Recall that the line that coincides with the major axis of the ellipse is called the first principal component. In Figure 14.1, the first principal component is the line y = x and the regression line is the line y = x/2. Both lines pass through the point (µx, µy) = (0, 0), but their slopes differ by a factor of |ρ| = 0.5.

Let us explore the implications of the fact that, if |ρ| < 1, then the regression line does not coincide with the major axis of the concentration ellipse. Given X = x, it might seem tempting to predict Y = x. But this
would be a mistake! Here, x > µx and clearly

  P(Y > x | X = x) < 1/2,

so ŷ(x) = x overpredicts Y | X = x. Similarly, ŷ(−x) = −x underpredicts Y | X = −x. The population regression line is the line of conditional expected values, y = E(Y | X = x). Let (x, a) and (x, b) denote the lower and upper points at which the vertical line X = x intersects the population concentration ellipse. As one might guess, it turns out that

  ŷ(x) = E(Y | X = x) = (a + b)/2.

Figure 14.1: The Regression Effect.
However, the midpoint of the vertical line segment that connects (x, a) and (x, b) is not (x, x). The discrepancy between using the first principal component to predict ŷ(x) = x and using the regression line to predict ŷ(x) = (a + b)/2, indicated by an arrow in Figure 14.1, is the regression effect.

The correlation coefficient ρ mediates the strength of the regression effect. If ρ = ±1, then

  (Y − µy)/σy = ±(X − µx)/σx

and Y is completely determined by X. In this case there is no regression effect: if x lies z standard deviations above µx, then we know that y lies z standard deviations above µy. At the other extreme, if ρ = 0, then knowing X = x does not reduce the expected squared error of prediction at all. In this case, we regress all the way to the mean: regardless of where x lies, we predict ŷ = µy.

Thus far, we have focussed on predicting Y from X = x in the case that the population concentration ellipse is known. We have done so in order to emphasize that the regression effect is an inherent property of prediction, not a statistical anomaly caused by chance variation. In practice, however, the population concentration ellipse typically is not known and we must rely on the sample concentration ellipse, estimated from bivariate data. This means that we must substitute (x̄, ȳ, sx², sy², r) for (µx, µy, σx², σy², ρ). The sample regression function is

  ŷ(x) = ȳ + r (sy/sx)(x − x̄)    (14.2)

and the corresponding line is the sample regression line. Notice that the slope of the sample regression line does not depend on whether we use plug-in or unbiased estimates of the population variances. The variances affect the regression line through the (square root of) their ratio,

  σ̂y²/σ̂x² = [(1/n) Σ (yi − ȳ)²] / [(1/n) Σ (xi − x̄)²] = [(1/(n−1)) Σ (yi − ȳ)²] / [(1/(n−1)) Σ (xi − x̄)²] = sy²/sx²,

which is not affected by the choice of plug-in or unbiased.

Example 14.2 (continued) I used binorm.sample to draw a sample of n = 100 observations from a bivariate normal distribution with parameters

  pop = (µx, µy, σx², σy², ρ) = (10, 20, 2², 4², 0.5).
I then used binorm.estimate to compute sample estimates of pop, obtaining

  est = (x̄, ȳ, sx², sy², r) = (10.0006837, 19.3985929, 4.4512393, 14.1754248, 0.4707309).

The resulting formula for the sample regression line is

  ŷ(x) = ȳ + r (sy/sx)(x − x̄) = ȳ + 0.4707309 · 1.784545 · (x − x̄) = 10.9976 + 0.84004 x.

It is not difficult to create an R function that plots a scatter diagram of the sample and overlays both the sample concentration ellipse and the sample regression line. The function binorm.regress is described in Appendix R and/or can be obtained from the web page for this book/course. The commands used in this example are as follows:

> pop <- c(10,20,4,16,.5)
> Data <- binorm.sample(pop,100)
> est <- binorm.estimate(Data)
> binorm.regress(Data)

The scatter diagram created by binorm.regress is displayed in Figure 14.2.
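The same line can be obtained from R's built-in lm function, whose least-squares fit coincides with ŷ(x) = ȳ + r(sy/sx)(x − x̄). A quick sketch, with Data as in the example above:

  # Fit the sample regression line by least squares and compare with the formula.
  x <- Data[, 1]; y <- Data[, 2]
  b.star <- cor(x, y) * sd(y) / sd(x)    # slope r * sy / sx
  a.star <- mean(y) - b.star * mean(x)   # intercept ybar - b* xbar
  c(a.star, b.star)
  coef(lm(y ~ x))                        # same intercept and slope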
Figure 14.2: Scatter diagram, sample concentration ellipse, and sample regression line of n = 100 observations sampled from a bivariate normal distribution. Notice that the sample regression line is not the major axis of the sample concentration ellipse.

14.2 The Method of Least Squares

In Section 14.1 we derived the regression line from properties of bivariate normal distributions. Having derived it, we now note that the sample regression line can be computed from any set of n ≥ 2 points (xi, yi) ∈ ℜ² for which the xi assume more than one distinct value (and therefore sx > 0). In this section, we derive the regression line in this more general setting.

Given points (xi, yi) ∈ ℜ², i = 1, . . . , n, we ask two conceptually distinct questions:

1. What line best summarizes the (x, y) pairs?
2. What line best predicts values of y from values of x?

We will answer each of these questions by applying the method of least squares. The possible lines are of the form y = a + bx. Given a candidate line, we measure the error between the line and each (xi, yi), then sum the squared errors from i = 1, . . . , n. The best line is the one that minimizes this sum of squared errors:

  min over (a, b) of  Σ [ error( (xi, yi), y = a + bx ) ]²    (14.3)

The distinction between (1) summary and (2) prediction lies in how we define error.

To define the line that best summarizes the (x, y) pairs, it is natural to define the error between a point and a line as the Euclidean distance from the point to the line. This is found by measuring the length of the perpendicular
line segment that connects them, as in Figure 14.3. Thus,

  summary error( (xi, yi), y = a + bx ) = perpendicular distance( (xi, yi), y = a + bx ).

Using this definition of error, the solution of Problem 14.3 is the major axis of the sample concentration ellipse, the first principal component of the sample. We emphasize: the first principal component is used for summary, not prediction.

Figure 14.3: Perpendicular Errors for Summary.
In contrast, to define the line that best predicts y values from x values, it is natural to define the error between a point (xi, yi) and a line y = a + bx as the difference between the observed value y = yi and the predicted value y = ŷ (xi) = a + bxi. The difference yi − ŷ(xi) is a residual error and the absolute difference |yi − ŷ(xi)| is the length of the vertical line segment that connects (xi, yi) and y = a + bx, as in Figure 14.4. Using this definition of error, the solution of
Problem 14.3 is the sample regression line.

Figure 14.4: Vertical Errors for Prediction.
We emphasize: the regression line is used for prediction, not summary.

The remainder of this section provides a more detailed exposition of the squared error approach to prediction. Let

  SS(a, b) = Σ (yi − a − bxi)²,

the sum of the squared residual errors that result from the prediction function ŷ(x) = a + bx. The method of least squares chooses (a, b) to minimize SS(a, b). Before analyzing this problem, we first consider an easier problem. If we knew {y1, . . . , yn} but not the corresponding {x1, . . . , xn}, then it would be impossible to measure errors associated with prediction functions that involve x. In this situation we would be forced to restrict attention to prediction functions of the form ŷ = a, which corresponds to restricting attention to lines with zero slope. The method of least squares then chooses a to minimize

  Σ (yi − a)² = SS(a, 0).
Theorem 14.2 The value of a that minimizes SS(a, 0) is a = ȳ.

Proof We can conclude that SS(a, 0)/n is minimal when a = ȳ by applying part (2) of Theorem 6.1 to the empirical distribution of {y1, . . . , yn}; however, it is instructive to verify this conclusion by direct calculation:

  SS(a, 0) = Σ (yi − a)²
           = Σ (yi − ȳ + ȳ − a)²
           = Σ (yi − ȳ)² + Σ 2(yi − ȳ)(ȳ − a) + Σ (ȳ − a)²
           = (n − 1)sy² + 2(ȳ − a)[Σ yi − nȳ] + n(ȳ − a)²
           = (n − 1)sy² + n(ȳ − a)².

The second term in this expression is the only term that involves a. It achieves its minimal value of zero when a = ȳ. ✷

For future reference, we define the total sum of squares to be

  SST = SS(ȳ, 0) = Σ (yi − ȳ)² = (n − 1)sy².

This is the smallest squared error possible when predicting y without information about x.

Now we consider the problem of finding the line y = a + bx that best predicts values of y from values of x. The method of least squares chooses (a, b) to minimize SS(a, b). Let (a*, b*) denote the minimizing values of (a, b) and define the error sum of squares to be

  SSE = SS(a*, b*).

Because we have not restricted attention to b = 0, ŷ(x) = a* + b*x must predict at least as well as ŷ = ȳ. Thus,

  SSE = SS(a*, b*) ≤ SS(ȳ, 0) = SST.

We have already stated that y = a* + b*x is the sample regression line. We can verify that statement by a calculation that resembles the proof of Theorem 14.2.
Theorem 14.3 Let (xi, yi) ∈ ℜ², i = 1, . . . , n, be a set of (x, y) pairs with at least two distinct values of x. Let

  b* = r sy/sx  and  a* = ȳ − b* x̄.

Then SS(a*, b*) ≤ SS(a, b) for all choices of (a, b).

Proof First, write

  SS(a, b) = Σ (yi − a − bxi)²
           = Σ (yi − ȳ + ȳ − bx̄ + bx̄ − a − bxi)²
           = Σ [(yi − ȳ) + (ȳ − bx̄ − a) − b(xi − x̄)]².

Expanding the square in this expression results in six terms. The three squared terms are:

  Σ (yi − ȳ)² = (n − 1)sy²,
  Σ (ȳ − bx̄ − a)² = n(ȳ − bx̄ − a)²,
  Σ (−b)²(xi − x̄)² = b² Σ (xi − x̄)² = b²(n − 1)sx².

The three cross-product terms are:

  Σ 2(yi − ȳ)(ȳ − bx̄ − a) = 2(ȳ − bx̄ − a) Σ (yi − ȳ) = 2(ȳ − bx̄ − a)[Σ yi − nȳ] = 0,

  Σ 2(yi − ȳ)(−b)(xi − x̄) = −2b Σ (xi − x̄)(yi − ȳ)
    = −2b(n − 1)sx sy · [ (1/(n−1)) Σ (yi − ȳ)(xi − x̄) ] / (sx sy)
    = −2b(n − 1)sx sy r,

  Σ 2(ȳ − bx̄ − a)(−b)(xi − x̄) = −2b(ȳ − bx̄ − a) Σ (xi − x̄) = 0.
Hence,

  SS(a, b) = (n − 1)sy² + n(ȳ − bx̄ − a)² + b²(n − 1)sx² − 2b(n − 1)sx sy r
           = n(ȳ − bx̄ − a)² + (n − 1)[b²sx² − 2b sx r sy + r²sy²] − (n − 1)r²sy² + (n − 1)sy²
           = n(ȳ − bx̄ − a)² + (n − 1)[b sx − r sy]² + (1 − r²)(n − 1)sy².

The third term in this expression does not involve b or a. The second term achieves its minimal value of zero when b = r sy/sx = b*. The first term is the only term that involves a. Whatever the value of b, the first term achieves its minimal value of zero when a = ȳ − bx̄. Hence, for b = b*, the minimizing value of a is a = ȳ − b*x̄ = a*. ✷

The total sum of squares, SST, measures the prediction error from ŷ = ȳ. The error sum of squares,

  SSE = SS(a*, b*) = Σ [yi − (ȳ − b*x̄) − b*xi]²
      = Σ [yi − ȳ − b*(xi − x̄)]²
      = Σ [(yi − ȳ) − r (sy/sx)(xi − x̄)]²
      = Σ (yi − ȳ)² − 2r (sy/sx) Σ (xi − x̄)(yi − ȳ) + r² (sy²/sx²) Σ (xi − x̄)²
      = (n − 1)sy² − 2r sy²(n − 1) · [ (1/(n−1)) Σ (xi − x̄)(yi − ȳ) ] / (sx sy) + r² sy²(n − 1)
      = (n − 1)sy² − 2(n − 1)sy² r² + r²(n − 1)sy²
      = (n − 1)sy²(1 − r²) = (1 − r²) SST,

measures the prediction error from the sample regression line. Now we define the regression sum of squares to be the sum of the squared differences between the two predictions,

  SSR = Σ [ȳ − ŷ(xi)]² = Σ [ȳ − ȳ − r (sy/sx)(xi − x̄)]² = r² (sy²/sx²) Σ (xi − x̄)² = r² sy²(n − 1) = r² SST.
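A quick numerical sketch of these three sums of squares, assuming x and y are numeric vectors of equal length (for example, the columns of Data from Example 14.2):

  # Compute SST, SSE, and SSR for the least-squares line.
  r <- cor(x, y)
  b.star <- r * sd(y) / sd(x)
  yhat <- mean(y) + b.star * (x - mean(x))   # fitted values on the regression line
  SST <- sum((y - mean(y))^2)                # (n - 1) * s_y^2
  SSE <- sum((y - yhat)^2)                   # prediction error from the regression line
  SSR <- sum((mean(y) - yhat)^2)             # squared differences between the two predictions
  c(SSR + SSE, SST, r^2, SSR / SST)          # the first two entries agree, as do the last two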
The three sums of squares $(SSR, SSE, SST)$ are precisely analogous to the three sums of squares $(SSB, SSW, SST)$ that arise in the analysis of variance, and they enjoy an identical property:
\[
SSR + SSE = r^2\, SST + \left(1 - r^2\right) SST = SST.
\]
This is the Pythagorean Theorem in $n$-dimensional Euclidean space! The points
\[
A = \begin{bmatrix} \bar{y} \\ \vdots \\ \bar{y} \end{bmatrix}, \qquad
B = \begin{bmatrix} \bar{y} + r\dfrac{s_y}{s_x}(x_1 - \bar{x}) \\ \vdots \\ \bar{y} + r\dfrac{s_y}{s_x}(x_n - \bar{x}) \end{bmatrix}, \qquad
C = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}
\]
are the vertices of a right triangle in $\Re^n$. The right angle occurs at vertex $B$, the vector of fitted values $\hat{y}(x_i)$. The squared Euclidean distances of the sides that meet at $B$ are
\[
d^2(A, B) = SSR \quad\text{and}\quad d^2(B, C) = SSE,
\]
and the squared Euclidean distance of the hypotenuse is $d^2(A, C) = SST$, so
\[
d^2(A, B) + d^2(B, C) = SSR + SSE = SST = d^2(A, C).
\]
To quantify the extent to which knowledge of $x$ improves our ability to predict $y$, we measure the proportion by which the squared error of prediction is reduced when we use the sample regression line instead of the constant prediction $\hat{y} = \bar{y}$. This proportion is just
\[
\frac{SS(\bar{y}, 0) - SS(a^*, b^*)}{SS(\bar{y}, 0)} = \frac{SST - SSE}{SST} = \frac{SSR}{SST} = \frac{r^2\, SST}{SST} = r^2,
\]
the sample coefficient of determination. Again, we conclude that the square of Pearson's product-moment correlation coefficient measures the proportion of variation "explained" by simple linear regression.

Example 14.2 (continued) For the bivariate sample displayed in Figure 14.2, the total sum of squares is
\[
SST = (n-1)s_y^2 = 99 \cdot 14.1754248 = 1403.3671
\]
and the coefficient of determination is $r^2 = 0.4707309^2 = 0.2215876$. Hence, the regression sum of squares is
\[
SSR = r^2\, SST = 0.2215876 \cdot 1403.367 = 310.9688
\]
and the error sum of squares is
\[
SSE = SST - SSR = 1403.3671 - 310.9688 = 1092.3983.
\]

14.3 Computation

A bivariate sample consists of $2n$ numbers. However, all of the quantities used in the preceding sections can be computed from just six fundamental quantities:
\[
n, \quad \sum_{i=1}^n x_i, \quad \sum_{i=1}^n y_i, \quad \sum_{i=1}^n x_i^2, \quad \sum_{i=1}^n y_i^2, \quad \sum_{i=1}^n x_i y_i.
\]
These quantities are used by many calculators. One reason that they are so convenient is that they are easily incremented as new $(x, y)$ pairs are observed.

Example 14.2 (continued) For the bivariate sample displayed in Figure 14.2, the six fundamental quantities are as follows:
\[
n = 100, \quad \sum_{i=1}^n x_i = 1000.068, \quad \sum_{i=1}^n y_i = 1939.859,
\]
\[
\sum_{i=1}^n x_i^2 = 10442.04, \quad \sum_{i=1}^n y_i^2 = 39033.91, \quad \sum_{i=1}^n x_i y_i = 19770.1.
\]
Now suppose that we draw another $(x, y)$ pair from the same population, say $(8.9, 13.5)$. Then the new sample has the following fundamental quantities:
\[
n = 100 + 1, \quad \sum_{i=1}^n x_i = 1000.068 + 8.9, \quad \sum_{i=1}^n y_i = 1939.859 + 13.5,
\]
\[
\sum_{i=1}^n x_i^2 = 10442.04 + 8.9^2, \quad \sum_{i=1}^n y_i^2 = 39033.91 + 13.5^2, \quad \sum_{i=1}^n x_i y_i = 19770.1 + 8.9 \cdot 13.5.
\]
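In R, this incremental updating might be expressed as follows. The vector fq and the helper function update.fq are hypothetical names introduced here only for illustration; they are not built-in R functions.

   # The six fundamental quantities of Example 14.2: n, sum x, sum y,
   # sum x^2, sum y^2, and sum xy, stored in a named vector.
   fq <- c(n = 100, sx = 1000.068, sy = 1939.859,
           sxx = 10442.04, syy = 39033.91, sxy = 19770.1)

   update.fq <- function(fq, x, y) {
     # Increment each fundamental quantity when a new (x, y) pair is observed.
     fq + c(1, x, y, x^2, y^2, x * y)
   }

   fq.new <- update.fq(fq, 8.9, 13.5)
   fq.new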
Three useful quantities are easily computed from the six fundamental quantities:
\[
\begin{aligned}
t_{xx} &= \sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x}) = \sum_{i=1}^n \left(x_i^2 - 2\bar{x}x_i + \bar{x}^2\right) = \sum_{i=1}^n x_i^2 - 2\bar{x}\sum_{i=1}^n x_i + n\bar{x}^2 \\
&= \sum_{i=1}^n x_i^2 - 2n\bar{x}^2 + n\bar{x}^2 = \sum_{i=1}^n x_i^2 - \frac{1}{n}\left(\sum_{i=1}^n x_i\right)^2, \\
t_{yy} &= \sum_{i=1}^n (y_i - \bar{y})(y_i - \bar{y}) = \sum_{i=1}^n y_i^2 - \frac{1}{n}\left(\sum_{i=1}^n y_i\right)^2, \\
t_{xy} &= \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}) = \sum_{i=1}^n \left(x_i y_i - \bar{y}x_i - \bar{x}y_i + \bar{x}\bar{y}\right) = \sum_{i=1}^n x_i y_i - \bar{y}\sum_{i=1}^n x_i - \bar{x}\sum_{i=1}^n y_i + n\bar{x}\bar{y} \\
&= \sum_{i=1}^n x_i y_i - n\bar{x}\bar{y} = \sum_{i=1}^n x_i y_i - \frac{1}{n}\left(\sum_{i=1}^n x_i\right)\left(\sum_{i=1}^n y_i\right).
\end{aligned}
\]
These quantities are useful because all of the important quantities derived in the preceding sections are easily computed from them. Here are the formulas:

1. Sample variances:
\[
s_x^2 = \frac{t_{xx}}{n-1}, \qquad s_y^2 = \frac{t_{yy}}{n-1}.
\]

2. Pearson's correlation coefficient:
\[
r = \frac{\frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{s_x s_y} = \frac{t_{xy}}{\sqrt{t_{xx}}\sqrt{t_{yy}}}, \qquad r^2 = \frac{t_{xy}^2}{t_{xx} t_{yy}}.
\]

3. Sample regression coefficients:
\[
b^* = r\frac{s_y}{s_x} = \frac{\frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{s_x^2} = \frac{t_{xy}}{t_{xx}}, \qquad
a^* = \bar{y} - b^*\bar{x} = \frac{1}{n}\sum_{i=1}^n y_i - \frac{t_{xy}}{t_{xx}}\cdot\frac{1}{n}\sum_{i=1}^n x_i.
\]
4. Sums of squares:
\[
SST = \sum_{i=1}^n (y_i - \bar{y})^2 = t_{yy}, \qquad
SSR = r^2\, SST = \frac{t_{xy}^2}{t_{xx}t_{yy}}\,t_{yy} = \frac{t_{xy}^2}{t_{xx}}, \qquad
SSE = SST - SSR = t_{yy} - \frac{t_{xy}^2}{t_{xx}}.
\]

14.4 The Simple Linear Regression Model

Let $x_1, \ldots, x_n$ be a list of real numbers for which $s_x > 0$. Suppose that:

1. Associated with each $x_i$ is a random variable $Y_i \sim \text{Normal}\left(\mu_i, \sigma^2\right)$. Notice that the $Y_i$ have a common population variance $\sigma^2 > 0$. This is analogous to the homoscedasticity assumption of the analysis of variance.

2. The population means $\mu_i$ satisfy the linear relation
\[
\mu_i = \beta_0 + \beta_1 x_i
\]
for some $\beta_0, \beta_1 \in \Re$. The population parameters $(\beta_0, \beta_1)$ are called the population regression coefficients.

These assumptions define the simple linear regression model. Suppose that we sample from a bivariate normal distribution, then condition on the observed values $x_1, \ldots, x_n$. It follows from Theorem 14.1 that this is a special case of the simple linear regression model in which
\[
\beta_1 = \rho\frac{\sigma_y}{\sigma_x}, \qquad
\beta_0 = \mu_y - \rho\frac{\sigma_y}{\sigma_x}\mu_x = \mu_y - \beta_1\mu_x, \qquad
\sigma^2 = \left(1 - \rho^2\right)\sigma_y^2.
\]
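The two assumptions are easy to mimic in R. The following is a minimal sketch that simulates data satisfying the model and then recovers the population regression coefficients by least squares, using the formulas of Section 14.3; the parameter values beta0, beta1, and sigma are hypothetical, chosen only for illustration.

   x <- seq(from = 1, to = 10, length.out = 30)  # fixed x values with s_x > 0
   beta0 <- 2                                    # hypothetical intercept
   beta1 <- 0.5                                  # hypothetical slope
   sigma <- 1.5                                  # hypothetical common standard deviation

   mu <- beta0 + beta1 * x                       # assumption 2: the means lie on a line
   y <- rnorm(length(x), mean = mu, sd = sigma)  # assumption 1: Y_i ~ Normal(mu_i, sigma^2)

   # Least squares estimates of (beta0, beta1), as in Section 14.3:
   txx <- sum((x - mean(x))^2)
   txy <- sum((x - mean(x)) * (y - mean(y)))
   beta1.hat <- txy / txx
   beta0.hat <- mean(y) - beta1.hat * mean(x)
   c(beta0.hat, beta1.hat)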
The simple linear regression model has three unknown parameters. The method of least squares estimates $(\beta_0, \beta_1)$ by
\[
\hat{\beta}_1 = b^* = r\frac{s_y}{s_x} = \frac{t_{xy}}{t_{xx}}, \qquad
\hat{\beta}_0 = a^* = \bar{y} - \hat{\beta}_1\bar{x}.
\]
These are also the plug-in estimates of $(\beta_0, \beta_1)$, and the plug-in estimate of $\sigma^2$ is
\[
\widehat{\sigma^2} = \frac{1}{n}\sum_{i=1}^n \left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2 = \frac{1}{n}\, SSE.
\]
We proceed to explore some properties of the corresponding estimators. These properties are consequences of the following key facts:

Theorem 14.4 Under the assumptions of the simple linear regression model, the random variables $\hat{\beta}_1$ and $SSE$ are independent and satisfy
\[
\hat{\beta}_1 \sim \text{Normal}\!\left(\beta_1, \frac{\sigma^2}{t_{xx}}\right) \tag{14.4}
\]
\[
\frac{SSE}{\sigma^2} \sim \chi^2(n-2). \tag{14.5}
\]
It follows from (14.4) that $E\hat{\beta}_1 = \beta_1$, and consequently that
\[
E\hat{\beta}_0 = E\left(\frac{1}{n}\sum_{i=1}^n Y_i - \hat{\beta}_1\frac{1}{n}\sum_{i=1}^n x_i\right) = \frac{1}{n}\sum_{i=1}^n E\left(Y_i - \hat{\beta}_1 x_i\right) = \frac{1}{n}\sum_{i=1}^n (\beta_0 + \beta_1 x_i - \beta_1 x_i) = \frac{1}{n}\sum_{i=1}^n \beta_0 = \beta_0.
\]
Thus, $(\hat{\beta}_0, \hat{\beta}_1)$ are unbiased estimators of $(\beta_0, \beta_1)$. Furthermore, it follows from (14.5) and Corollary 5.1 that $E(SSE/\sigma^2) = n - 2$. Hence, $E[SSE/(n-2)] = \sigma^2$ and
\[
MSE = \frac{1}{n-2}\, SSE
\]
is an unbiased estimator of $\sigma^2$. Converting (14.4) to standard units results in
\[
\frac{\hat{\beta}_1 - \beta_1}{\sqrt{\sigma^2/t_{xx}}} \sim \text{Normal}(0, 1). \tag{14.6}
\]
Dividing (14.6) by the square root of (14.5) divided by its degrees of freedom, it follows from Definition 5.7 that
\[
\frac{\left(\hat{\beta}_1 - \beta_1\right)\big/\sqrt{\sigma^2/t_{xx}}}{\sqrt{\dfrac{SSE}{\sigma^2}\Big/(n-2)}} = \frac{\hat{\beta}_1 - \beta_1}{\sqrt{MSE/t_{xx}}} \sim t(n-2).
\]
This fact allows us to construct confidence intervals for $\beta_1$. Given $\alpha$, we first compute the critical value $q_t = \text{qt}(1-\alpha/2,\, n-2)$. Then
\[
\hat{\beta}_1 \pm q_t\sqrt{\frac{MSE}{t_{xx}}}
\]
is a $(1-\alpha)$-level confidence interval for $\beta_1$.

Remark: It may be helpful to write
\[
\frac{MSE}{t_{xx}} = \frac{\left(1 - r^2\right)SST/(n-2)}{(n-1)s_x^2} = \frac{\left(1 - r^2\right)(n-1)s_y^2/(n-2)}{(n-1)s_x^2} = \left(1 - r^2\right)\frac{s_y^2}{s_x^2}\Big/(n-2).
\]

Example 14.3 Suppose that $n = 100$ bivariate observations produce the following estimates:
\[
\bar{x} = 97.255564, \quad \bar{y} = 103.872210, \quad s_x^2 = 425.062476, \quad s_y^2 = 872.229230, \quad r = -0.485857.
\]
To construct a 0.95-level confidence interval for $\beta_1$, we first compute
\[
\hat{\beta}_1 = r\frac{s_y}{s_x} = -0.485857\cdot\sqrt{\frac{872.229230}{425.062476}} = -0.695981,
\]
\[
q_t = \text{qt}(.975, \text{df}=98) = 1.984467,
\]
and
\[
\frac{MSE}{t_{xx}} = \frac{1 - r^2}{n-2}\cdot\frac{s_y^2}{s_x^2} = \frac{1 - 0.485857^2}{98}\cdot\frac{872.229230}{425.062476} = 0.01599605.
\]
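The same three quantities can be obtained in R directly from the summary statistics; a minimal sketch, with the variable names (s2x, s2y, and so on) chosen here only for illustration:

   # Summary statistics from Example 14.3:
   n <- 100
   r <- -0.485857
   s2x <- 425.062476
   s2y <- 872.229230

   beta1.hat <- r * sqrt(s2y / s2x)             # estimated slope
   q.t <- qt(0.975, df = n - 2)                 # critical value for a 0.95-level interval
   mse.txx <- (1 - r^2) / (n - 2) * s2y / s2x   # MSE / txx, using the Remark above
   c(beta1.hat, q.t, mse.txx)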
The desired confidence interval is then
\[
\hat{\beta}_1 \pm q_t\sqrt{\frac{MSE}{t_{xx}}} = -0.695981 \pm 1.984467\cdot\sqrt{0.01599605} = (-0.9469675, -0.4449945).
\]

Next we consider how to test $H_0: \beta_1 = 0$ versus $H_1: \beta_1 \neq 0$. This is an important decision because rejecting $H_0: \beta_1 = 0$ means that we are convinced that values of $x$ help us to predict values of $y$. Furthermore, if we sampled from a bivariate normal population, then
\[
\beta_1 = \rho\frac{\sigma_y}{\sigma_x} = 0 \quad\text{if and only if}\quad \rho = 0.
\]
Because bivariate normal random variables $X$ and $Y$ are independent if and only if they are uncorrelated, the null hypothesis $H_0: \beta_1 = 0$ is equivalent to the null hypothesis that $X$ and $Y$ are independent.

If $\beta_1 = 0$, then
\[
\frac{\hat{\beta}_1}{\sqrt{MSE/t_{xx}}} \sim t(n-2).
\]
Hence, the significance probability for testing $H_0: \beta_1 = 0$ is
\[
p = P\left(|T| \geq \left|\frac{\hat{\beta}_1}{\sqrt{MSE/t_{xx}}}\right|\right),
\]
where the random variable $T \sim t(n-2)$, and we reject $H_0: \beta_1 = 0$ if and only if $p \leq \alpha$. Equivalently, we reject $H_0: \beta_1 = 0$ if and only if we observe
\[
\left|\frac{\hat{\beta}_1}{\sqrt{MSE/t_{xx}}}\right| \geq q_t,
\]
where $q_t$ is the critical value defined above. Notice that
\[
\frac{\hat{\beta}_1}{\sqrt{MSE/t_{xx}}} = \frac{t_{xy}/t_{xx}}{\sqrt{MSE/t_{xx}}} = \frac{t_{xy}}{\sqrt{t_{xx}}}\cdot\frac{1}{\sqrt{SSE/(n-2)}} = \frac{t_{xy}}{\sqrt{t_{xx}}\sqrt{t_{yy}}}\cdot\frac{\sqrt{t_{yy}}\sqrt{n-2}}{\sqrt{t_{yy} - t_{xy}^2/t_{xx}}} = \frac{r\sqrt{n-2}}{\sqrt{1 - t_{xy}^2/(t_{xx}t_{yy})}} = \frac{r\sqrt{n-2}}{\sqrt{1 - r^2}},
\]
so this is the same $t$-test that we described in Section 13.2.3 for testing $H_0: \rho = 0$ versus $H_1: \rho \neq 0$.

It follows from Theorem 5.5 that
\[
\left(\frac{\hat{\beta}_1}{\sqrt{MSE/t_{xx}}}\right)^2 \sim F(1, n-2).
\]
Hence, an $F$-test that is equivalent to the $t$-test derived in the preceding paragraph rejects $H_0: \beta_1 = 0$ if and only if we observe
\[
(n-2)\frac{r^2}{1-r^2} \geq q_F,
\]
where the critical value $q_F$ is defined by $q_F = \text{qf}(1-\alpha, 1, n-2)$. Equivalently, we reject $H_0: \beta_1 = 0$ if and only if the significance probability
\[
p = P\left(F \geq (n-2)\frac{r^2}{1-r^2}\right) \leq \alpha,
\]
where the random variable $F \sim F(1, n-2)$.

The results of the $F$-test of $H_0: \beta_1 = 0$ are traditionally presented in the form of an ANOVA table:

   Source of    Sum of        Degrees of   Mean                 F-Test               p-
   Variation    Squares       Freedom      Square               Statistic            Value
   Regression   r^2 SST        1           r^2 SST              (n-2) r^2/(1-r^2)    p
   Error        (1-r^2) SST    n-2         (1-r^2) SST/(n-2)
   Total        SST

Example 14.3 (continued) Let us now test $H_0: \beta_1 = 0$ versus $H_1: \beta_1 \neq 0$ at a significance level of $\alpha = 0.05$. Of course, we know that we will reject $H_0$ because the 0.95-level confidence interval constructed from these data did not contain the hypothesized slope $\beta_1 = 0$. The $t$-test statistic is
\[
t = \frac{\hat{\beta}_1}{\sqrt{MSE/t_{xx}}} = \frac{-0.695981}{\sqrt{0.01599605}} = -5.502893,
\]
which results in a significance probability of
\[
p = 2*\text{pt}(-5.502893, \text{df}=98) = 2.989589 \times 10^{-7}.
\]
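Continuing the R sketch begun above (and reusing its hypothetical variable names beta1.hat, mse.txx, n, and r), the $t$-test and the equivalent $F$-test might be computed as follows:

   t.stat <- beta1.hat / sqrt(mse.txx)        # t-test statistic
   p.t <- 2 * pt(-abs(t.stat), df = n - 2)    # two-sided significance probability

   F.stat <- (n - 2) * r^2 / (1 - r^2)        # equivalent F-test statistic (= t.stat^2)
   p.F <- pf(F.stat, 1, n - 2, lower.tail = FALSE)

   c(t.stat, p.t, F.stat, p.F)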
Because $p < \alpha$, we reject $H_0: \beta_1 = 0$. Equivalently, we can compute
\[
SST = (n-1)s_y^2 = 99 \cdot 872.229230 \doteq 86350.69
\]
and $r^2 = 0.236057$, then construct the following ANOVA table:

   Source of    Sum of       DF   Mean         F-Test      p-
   Variation    Squares           Square       Statistic   Value
   Regression   20383.689     1   20383.6885   30.28183    2.989589 × 10^-7
   Error        65967.005    98     673.1327
   Total        86350.694

Again, we reject $H_0: \beta_1 = 0$ because $p < \alpha$. Notice that we obtain the same significance probability with either test.

Although equivalent, the $t$-test and $F$-test of $H_0: \beta_1 = 0$ each enjoy certain advantages. The former is more flexible, as it is easily adapted to test 1-sided hypotheses. The $F$-test is more readily generalized to testing a variety of hypotheses that naturally arise when studying more complicated regression models.

14.5 Regression Diagnostics
14.6 Exercises

1. According to Stanford University Professor Claude M. Steele (Not just a test, The Nation, May 3, 2004, page 40),

   "The SAT, for example, correlates .42 with freshman grades. . . This means that it measures about 18 percent of the characteristics, whatever they are, that determine freshman grades."

   Comment on this passage. Do you agree with Professor Steele's interpretation of what r = 0.42 means?

2. Suppose that (X, Y) have a bivariate normal distribution with parameters (5, 3, 1, 4, 0.5). Compute the following quantities:

   (a) P(Y > 6)
   (b) E(Y | X = 6.5)
   (c) P(Y > 6 | X = 6.5)

3. Assume that the population of all sister-brother heights has a bivariate normal distribution and that the data in Exercise 13.5.3 were sampled from this population. Use these data in the following:

   (a) Consider the population of all sister-brother heights. Estimate the proportion of all brothers who are at least 5′10′′.
   (b) Suppose that Carol is 5′1′′. Predict her brother's height.
   (c) Consider the population of all sister-brother heights for which the sister is 5′1′′. Estimate the proportion of these brothers who are at least 5′10′′.

4. Assume that the population of all sister-brother heights has a bivariate normal distribution and that the data in Exercise 13.5.2 were sampled from this population. Use these data in the following:

   (a) Compute the sample coefficient of determination, the proportion of variation "explained" by simple linear regression.
   (b) Let α = 0.05. Do these data provide convincing evidence that knowing a sister's height (x) helps one predict her brother's height (y)?
   (c) Construct a 0.90-level confidence interval for the slope of the population regression line for predicting y from x.
   (d) Suppose that you are planning to conduct a more comprehensive study of sibling heights. Your goal is to better estimate the slope of the population regression line for predicting y from x. If you want to construct a 0.95-level confidence interval of length 0.1, then how many sister-brother pairs should you plan to observe? Hint:
\[
\frac{MSE}{t_{xx}} = \left(1 - r^2\right)\frac{s_y^2}{s_x^2}\Big/(n-2).
\]

5. A class of 35 students took two midterm tests. Jack missed the first test and Jill missed the second test. The 33 students who took both tests scored an average of 75 points on the first test, with a standard deviation of 10 points, and an average of 64 points on the second test, with a standard deviation of 12 points. The scatter diagram of their scores is roughly ellipsoidal, with a correlation coefficient of r = 0.5. Because Jack and Jill each missed one of the tests, their professor needs to guess how each would have performed on the missing test in order to compute their semester grades.

   (a) Jill scored 80 points on Test 1. She suggests that her missing score on Test 2 be replaced with her score on Test 1, 80 points. What do you think of this suggestion? What score would you advise the professor to assign?
   (b) Jack scored 76 points on Test 2, precisely one standard deviation above the Test 2 mean. He suggests that his missing score on Test 1 be replaced with a score of 85 points, precisely one standard deviation above the Test 1 mean. What do you think of this suggestion? What score would you advise the professor to assign?

6. In a study of "Heredity of head form in man" (Genetica, 3:193–384, 1921), G.P. Frets reported two head measurements (in millimeters) for each of the first two adult sons of 25 families. These data are reproduced as Data Set 111 in A Handbook of Small Data Sets, and
can also be downloaded from the web page for this course.

        First Son            Second Son
    Length   Breadth     Length   Breadth
      191      155         179      145
      195      149         201      152
      181      148         185      149
      183      153         188      149
      176      144         171      142
      208      157         192      152
      189      150         190      149
      197      159         189      152
      188      152         197      159
      192      150         187      151
      179      158         186      148
      183      147         174      147
      174      150         185      152
      190      159         195      157
      188      151         187      158
      163      137         161      130
      195      155         183      158
      186      153         173      148
      181      145         182      146
      175      140         165      137
      192      154         185      152
      174      143         178      147
      176      139         176      143
      197      167         200      158
      190      163         187      150

For each head, we will compute two variables:

   size <- length+breadth
   shape <- length-breadth

   (a) Consider head size. Investigate the relation between first son head size and second son head size. Can we reject the null hypothesis that these variables are uncorrelated? Of the variation in second son head size, what proportion is explained by variation in first son head size?
   (b) Consider head shape. Investigate the relation between first son head shape and second son head shape. Can we reject the null hypothesis that these variables are uncorrelated? Of the variation in second son head shape, what proportion is explained by variation in first son head shape?
   (c) In another family from the same era, the first adult son's head had a length of 195 millimeters and a breadth of 160 millimeters. Use this information to guess the size of the second adult son's head.

7. In the athletics event known as the shot put, male competitors "put" the "shot," a 16-pound metal ball. (Female competitors use a smaller shot.) In the United States, high school male competitors put a 12-pound shot, then graduate to the 16-pound shot used in NCAA, USATF, and IAAF competition. In its August 2002 "Stat Corner," the respected athletics periodical Track & Field News proclaimed an "Inverse Relationship Between 12 & 16lb Shots":

   "A look at the accompanying all-time Top 11 lists for high schoolers with the 12lb shot—11 because there have been 11 of them over 70 [feet]—and for U.S. men with the 16 sends two messages to aspiring prep putters:

   • If you're not very good in high school, don't worry about it; few of the big guys were either.
   • If you're great in high school, that may be about as good as you'll ever get.

   "The numbers are astounding. We'll leave it to a technical expert to figure out why. . . "

   The numbers follow.¹ Do you agree with T&FN's two messages?

   ¹Perhaps the most astounding number is Michael Carter's prodigious heave of 81-3.50, arguably the most formidable record in all of track and field. Carter broke an 11-year-old record by nine feet! He went on to a sensational college career at SMU, winning the NCAA championship and a silver medal at the 1984 Olympic Games. He then opted for a career in professional football, becoming an All-Pro defensive lineman for the NFL Champion San Francisco 49ers.
                 ALL-TIME HIGH SCHOOL 70-FOOTERS
                                 12         16         16–12
    1. Michael Carter    '79     81-3.5     71-4.75    –9-10.75
    2. Brent Noon        '90     76-2       70-5.75    –5-8.25
    3. Arnold Campbell   '84     74-10.5    64-3       –10-7.5
    4. Charles Moye      '87     72-8       57-1       –15-7
    5. Sam Walker        '68     72-3.25    66-9.5     –5-5.75
    6. Jesse Stuart      '70     71-11i     68-11.5i   –2-11.5
    7. Roger Roesler     '96     71-2       61-6.25    –11-7.75
    8. Kevin Bookout     '02     71-1.5     (too early still)
    9. Doug Lane         '68     70-11      66-11.25   –3-11.75
   10. Dennis Black      '91     70-7       68-10      –1-9
   11. Ron Semkiw        '72     70-1.75    70-0.5     –0-1.25

                     ALL-TIME U.S. TOP 11
                                 16         12         16–12
    1. Randy Barnes      '90     75-10.25   66-9.5     +9-0.75
    2. Brian Oldfield    '75     75-0       58-10      +16-2
    3. John Brenner      '87     73-10.75   64-5.5     +9-5.25
    4. Adam Nelson       '02     73-10.25   63-2.25    +10-8
    5. Kevin Toth        '02     72-9.75    58-11      +13-10.75
    6. George Woods      '74     72-3i      60-11      +11-4
    6. Dave Laut         '82     72-3       65-9       +6-6
    6. John Godina       '99     72-3       64-1.25    +8-1.75
    9. Gregg Trafalis    '92     72-1.5     57-0       +15-1.5
   10. Terry Albritton   '76     71-8.5     67-9       +3-11.5
   11. Andy Bloom        '00     71-7.25    64-2.5     +7-4.74
Chapter 15

Simulation-Based Inference

15.1 Termite Foraging Revisited
Appendix R

A Statistical Programming Language

R.1 Introduction

R.1.1 What is R?

In the 1970s, researchers at AT&T Bell Laboratories developed S, a high-level statistical programming language that became popular with academic statisticians. Bell Labs subsequently licensed S to a company that added a variety of capabilities, creating the commercial product S-Plus. R is yet another implementation of S. The R Project for Statistical Computing is an ongoing effort by a group of statisticians to extend and improve R. R is free, Open Source software that can be downloaded in compiled or source code form. It runs on a variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows, and MacOS. The primary web site for information about R is:

   http://www.r-project.org/

R.1.2 Why Use R?

This question encompasses several issues. First, there is the question of what role statistical software is to play in the course. Introductory statistics courses may use software in different ways. Once upon a time, many instructors (myself included) avoided using software in the first semester. The rationale for this approach is that one should begin one's study of statistics by focussing on basic concepts and learn what the computer is doing
before one uses the computer to do it. Unfortunately, this approach condemns one to analyzing fairly trivial data sets, and even then calculating by hand and/or calculator quickly becomes extremely tedious. As a result, this approach has fallen from favor.

At the other end of the spectrum, many introductory statistics courses use statistics packages like Minitab, SPSS, or SAS to analyze data. Such packages are extremely useful and every statistician should have some familiarity with at least one such package. However, if one begins to rely on such packages too quickly, the package may be viewed as a black box and the student may never really learn what that black box is doing.

There are many different ways to introduce the subject of statistics, and no one way is best for all students. This book is intended for students who want to understand what is going on inside the black box procedures available in so many statistics packages. This intention determines the use that we shall make of the computer. We will strive for an intermediate approach, in which the computer is used to relieve the tedium of calculation, but in which the student is obliged to tell the computer what intermediate steps need to be performed in order to obtain the desired output. Such an approach requires a high-level, interactive programming language. Several such languages are available, but S-Plus and R have achieved the greatest popularity within the statistics community. Acquiring some familiarity with S-Plus and/or R will benefit students who continue to study statistics and/or analyze data in the future.

Why R instead of S-Plus? For most of the examples in this book, R and S-Plus are interchangeable—the same commands work for both. But R has two compelling advantages. First, R is available for certain operating systems for which S-Plus is not, e.g., MacOS. Second, R is free! As a result, students who begin using R in this course can be confident that they will always have access to R.

R.1.3 Installing R

To efficiently download software, documentation, etc., you should use a nearby CRAN (Comprehensive R Archive Network) mirror site, e.g., Statlib at Carnegie Mellon University:

   http://lib.stat.cmu.edu/R/CRAN/

Most students will want to install R in compiled form by downloading executable binary files. On-line documentation and several manuals are
included, although you may find it easier to get started using the examples provided in this book.

R.1.4 Learning About R

R is far too complicated to learn in one (or even several) lessons. I doubt that any one person—including the R developers—knows everything about R! But don't be intimidated: the best way to learn R is to just start using R. And, the best time to use R is when you're trying to accomplish a specific task. Try to learn bits and pieces of R as they're introduced in the text and/or you develop an interest in a specific capability.

Of course, it's hard to learn anything without documentation. The material in this book, both the examples scattered throughout various chapters to illustrate various statistical methods and the tutorial material in this appendix, is a good way to get started. Once you know the name of one R function, you can learn more about it and discover related functions using various utilities included in your R installation. If you're using the Windows version of R then you can start by exploring the Help menu in RGui, which will lead you to manuals, search utilities, and web pages. I tend to use R functions (text) for help on specific functions.

R.2 Using R

R is an interpreted language, designed to be used interactively. The user is prompted to issue a command as follows:

   >

The cursor-up key allows the user to recall previous commands. Except for a few standard arithmetic operations, R accomplishes things by executing various functions. For example, to exit R one executes the quit function:

   > q()

When you quit, R will inquire if you want to "Save workspace image?" If you answer yes (y), then all of the objects in your current workspace, e.g., any data sets and functions that you created, will be saved and restored the next time that you start R.
R.2.1 Vectors

R can store and manipulate a variety of data objects, the most basic of which is a vector. In R a vector is an ordered list of numbers, i.e., a list of numbers with a designated first element, second element, etc. Vectors can be created in various ways. In each of the following examples, the created vector is assigned the name x. Note that R has a large number of built-in functions. Assigning their names to user-created objects will mask the built-in functions. For this reason certain simple names, e.g., c and t, should be avoided.

Example R.1 To enter a list of numbers from the keyboard, use the concatenate function:

   > x <- c(20,5,15,18,5,13,1)

Notice that this can be done recursively, e.g.,

   > x <- c(20,5,15)
   > x <- c(x,18,5)
   > x <- c(x,13,1)

To display the vector, type its name:

   > x

Just typing

   > c(20,5,15,18,5,13,1)

causes R to display the vector without saving it for future use.

Example R.2 To read a list of numbers from an ascii text file, say data.txt, use the scan function. In most situations, you will need to specify the complete path of data.txt. How one does this depends on which operating system your computer uses. For example, suppose that you are using the Windows version of R and data.txt resides in the directory c:\Courses\Math351. Then the following command will read the contents of data.txt into the vector x:

   > x <- scan("c:\\Courses\\Math351\\data.txt")

Notice that the single backslashes in the path name must be entered as double backslashes in R.
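Incidentally, the Windows version of R also accepts forward slashes in path names, which avoids the doubling; for example:

   > x <- scan("c:/Courses/Math351/data.txt")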
  • 361. R.2. USING R 359 Example R.3 Several functions are useful for creating sequences of numbers, e.g., > x <- seq(from=1,to=15,by=2) > x <- rep(1,times=10) Consecutive integers are especially easy, e.g., x <- 11:20 Example R.4 R has a variety of functions for generating pseudoran- dom samples.1 To draw 10 numbers from a uniform distribution on (0, π): > x <- runif(10,min=0,max=pi) To draw 20 numbers from a normal distribution with mean 5 and stan- dard deviation 1.5: > x <- rnorm(20,mean=5,sd=1.5) To simulate rolling a fair die 30 times: > die <- 1:6 > x <- sample(x=die,size=30,replace=T) A subset of a vector can be identified by a vector of index values. For example, to extract the 2nd, 3rd, and 5th elements of the vector x, one might type: > k <- c(2,3,5) > x[k] To extract the other elements, just type: > x[-k] One may wish to rearrange the elements, e.g., > y <- sort(x) The preceding command is equivalent to > y <- x[order(x)] 1 The precise meanings of the phrases that follow are explained in Chapters 3–5.
  • 362. 360 APPENDIX R. A STATISTICAL PROGRAMMING LANGUAGE R.2.2 R is a Calculator! R provides a variety of arithmetical operations and mathematical functions. These operations/functions have been vectorized, i.e., they work on entire vectors, not just individual numbers. Several examples follow. First, let’s create two vectors: > x <- 10:20 > y <- seq(from=1.8,to=2.2,length=length(x)) Now, each of the following is a valid R command: > x+100 > x-20 > x*10 > x/10 > x^2 > sqrt(x) > exp(x) > log(x) > x+y > x-y > x*y > x/y > x^y R.2.3 Some Statistics Functions R provides hundreds of functions that perform or facilitate a variety of statis- tical analyses. Most R functions are not used in this book. (You may enjoy discovering and using some of them on your own initiative.) Tables R.1 and R.2 list some of the R functions that are used. R.2.4 Creating New Functions The full power of R emerges when one writes one’s own functions. To il- lustrate, I’ve written a short function named Edist that computes the Eu- clidean distance between two vectors. When I type Edist, R displays the function: > Edist
   Function   Distribution     Section
   pgeom      Geometric        4.2
   phyper     Hypergeometric   4.2
   pbinom     Binomial         4.4
   punif      Uniform          5.3
   pnorm      Normal           5.4
   pchisq     Chi-Squared      5.5
   pt         Student's t      5.5
   pf         Fisher's F       5.5

Table R.1: Some R functions that evaluate the cumulative distribution function (cdf) for various families of probability distributions. The prefix p designates a cdf function; the remainder of the function name specifies the distribution. For the analogous quantile functions, use the prefix q, e.g., qnorm. To evaluate the analogous probability mass function (pmf) or probability density function (pdf), use the prefix d, e.g., dnorm. To generate a pseudorandom sample, use the prefix r, e.g., rnorm.

   function(u,v){
   return(sqrt(sum((u-v)^2)))
   }
   >

Edist has two arguments, u and v, which it interprets as vectors of equal length. Edist computes the vector of differences, squares each difference, sums the squares, then takes the square root of the sum to obtain the distance. Finally, it returns the computed distance. I could have written Edist as a sequence of intermediate steps, but there's no need to do so.

I might have created Edist in any of the following ways:

Example R.5

   > Edist <- function(u,v){
   return(sqrt(sum((u-v)^2)))
   }
   >

Example R.6

   > Edist <- function(u,v){
   Function        Used to Compute/Display
   sum             sample sum
   mean            sample mean
   median          sample median
   var             sample variance
   quantile        sample quantile(s)
   summary         several useful quantities
   plot.ecdf       empirical cdf
   boxplot         box plot(s)
   qqnorm          normal probability plot
   plot, density   kernel estimate of pdf

Table R.2: Some R functions that compute or display useful information about one or more univariate samples. See Chapter 7.

   + return(sqrt(sum((u-v)^2)))
   + }
   >

Notice that R recognizes that the command creating Edist is not complete and provides continuation prompts (+) until it is.

Examples R.5 and R.6 are useful for very short functions, but not for anything complicated. Be warned: if you mistype and R cannot interpret what you did type, then R ignores the command and you have to retype it. Using the cursor-up key to recall what you typed may help, but for anything complicated it is best to create a permanent file that you can edit. This can be done within R or outside of R.

Example R.7 To create moderately complicated functions in R, use the edit function. For example, I might start by typing

   > Edist <- function(u,v){u-v}
  • 365. R.2. USING R 363 This creates an R object called Edist, but not the Edist that we want—this Edist returns the vector of differences.2 So, I use edit to modify Edist.3 This process is initiated with the command > Edist <- edit(Edist) After making and saving the desired changes to Edist, I close the editor, thereby returning control to R. R checks the edited version of Edist: if R can interpret the edited version, then R replaces the previous version with the edited version; if R cannot interpret the edited version, e.g., because of typographical errors, then R issues an error message and retains the previous version. Fortunately, R also retains a temporary version of whatever modi- fications I attempted to make, so I have another chance at getting it right. To access the temporary version, I type > Edist <- edit() Note that I should not retype > Edist <- edit(Edist) as this command returns to the original unedited version and discards what- ever changes I attempted to make. Example R.8 Objects created in R can be lost, e.g., if one forgets to save one’s workspace image when one quits R. For this reason, I prefer to create my R functions outside of R. To accomplish this, I first use a text editor to create an ascii text file that contains whatever R commands I want to execute, e.g., the command that creates Edist. For example, I might use the Windows notepad editor to create an ascii text file that contains the following: Edist <- function(u,v) { return(sqrt(sum((u-v)^2))) } 2 Using the return function is good practice, but often unnecessary. An R function will automatically return the last quantity that it computes. 3 Each installation has a default editor. For the Windows operating system, the default editor is the Windows notepad editor.
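However Edist is created, it can then be called like any other R function. A quick sanity check, using the familiar 3-4-5 right triangle as a hypothetical test case:

   > Edist(c(0,0), c(3,4))
   [1] 5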
Let's suppose that I call this file myRfcns.txt and save it in the directory c:\Courses\Math351. Then, I can start R and use the source function to execute the commands in myRfcns.txt:

   > source("c:\\Courses\\Math351\\myRfcns.txt")

To check that I succeeded in creating Edist, I can produce a list of all the objects in my workspace by typing

   > objects()

R.2.5 Exploring Bivariate Normal Data

In Sections 13.2 and 14.1, we explored the structure of bivariate normal data using five R functions:

   binorm.ellipse
   binorm.sample
   binorm.estimate
   binorm.scatter
   binorm.regress

These functions are not part of your R installation—I created them for this book/course. To obtain them, download the ascii text file binorm.R from the web page for this book/course, then source its contents into your R workspace. For example, suppose that you have a Windows operating system and that you save binorm.R in the directory c:\Courses\Math351. Then the following command instructs R to execute the commands in binorm.R that create the five binorm functions:

   > source("c:\\Courses\\Math351\\binorm.R")

Tables R.3–R.7 reproduce the commands in binorm.R. Notice that the # symbol is used to insert comments, as R ignores lines that begin with #.

R.2.6 Simulating Termite Foraging

Sections 1.1.3 and 15.1 describe a study of termite foraging behavior. A test statistic, T, assumes a small value when each subsequently attacked roll is near a previously attacked roll. Thus, small values of T are evidence against a null hypothesis of random foraging, under which each unattacked roll is equally likely to be attacked next. To compute a significance probability
for a particular plot, e.g., Plot 20 depicted in Figure 1.1, we require the probability distribution of T. This discrete distribution cannot be calculated by the methods of Chapter 4; instead, we resort to computer simulation.

Dana Ranschaert (a former student) and I created an R function, forage, that approximates the pmf of T by simulation. To obtain forage, download the ascii text file termites.R from the web page for this book/course, then source its contents into your R workspace. Table R.8 reproduces the commands in termites.R.

To use forage, you must specify four arguments:

1. initial, a vector that contains the numbers of the initially attacked rolls. The 5 × 5 rolls are numbered as follows:

       1  2  3  4  5
       6  7  8  9 10
      11 12 13 14 15
      16 17 18 19 20
      21 22 23 24 25

   In Figure 1.1, the vector of initially attacked rolls is c(3,5).

2. nsubsequent, the number of subsequently attacked rolls. In Figure 1.1, there are 13 such rolls.

3. nsim, the number of simulated foraging histories. In the original study, each plot was simulated 1 million times.

4. maxT, the largest value of T to be tabulated.

For example, the command

   > pmf20 <- forage(c(3,5),13,10000,30)

computes a matrix with 30−13+1 rows and 2 columns. The first column of pmf20 contains values of T, from 13 to 30. Corresponding to each value of T, the corresponding number in the second column of pmf20 tabulates how many of the 10000 simulated foraging histories produced that value of T.
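Because small values of T are evidence against the null hypothesis of random foraging, one might estimate a significance probability from pmf20 by summing the simulated counts at or below the observed value of T and dividing by the number of simulations. A sketch of that calculation, in which the observed value 18 is hypothetical and used only to illustrate the idea:

   > t.obs <- 18                                  # hypothetical observed value of T
   > sum(pmf20[pmf20[,1] <= t.obs, 2]) / 10000    # estimated P(T <= t.obs) under H0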
  • 368. 366 APPENDIX R. A STATISTICAL PROGRAMMING LANGUAGE binorm.ellipse <- function(pop) { # # This function plots the concentration ellipse of a bivariate # normal distribution. The 5 bivariate normal parameters are # specified in the vector pop in the following order: # mean of X, mean of Y, variance of X, variance of Y, # correlation of (X,Y). # For example: pop <- c(0,0,1,4,.5) # n <- 628 m <- matrix(pop[1:2],nrow=2) off <- pop[5] * sqrt(pop[3]*pop[4]) C <- matrix(c(pop[3],off,off,pop[4]),nrow=2) E <- eigen(C,symmetric=T) a <- 0:n/100 X <- cbind(cos(a),sin(a)) X <- X %*% diag(sqrt(E$values)) %*% t(E$vectors) X <- X + matrix(rep(1,n+1),ncol=1) %*% t(m) xmin <- min(X[,1]) xmax <- max(X[,1]) ymin <- min(X[,2]) ymax <- max(X[,2]) dif <- max(xmax-xmin,ymax-ymin) xlim <- c(m[1]-dif,m[1]+dif) ylim <- c(m[2]-dif,m[2]+dif) par(pty="s") plot(X,type="l",xlab="x",ylab="y",xlim=xlim,ylim=ylim) title("Concentration Ellipse") } Table R.3: The command that creates the R function binorm.ellipse, de- scribed in Section 13.2. This command is included in the file binorm.R.
  • 369. R.2. USING R 367 binorm.sample <- function(pop,n) { # # This function returns a sample of n observations drawn from a # bivariate normal distribution. The 5 bivariate normal # parameters are specified in the vector pop in the following # order: mean of X, mean of Y, variance of X, variance of Y, # correlation of (X,Y). For example: pop <- c(0,0,1,4,.5) # The sample is returned in the form of an n-by-2 data matrix, # each row of which is an observed value of (X,Y). # m <- matrix(pop[1:2],nrow=2) off <- pop[5] * sqrt(pop[3]*pop[4]) C <- matrix(c(pop[3],off,off,pop[4]),nrow=2) E <- eigen(C,symmetric=T) Data <- matrix(rnorm(2*n),nrow=n) Data <- Data %*% diag(sqrt(E$values)) %*% t(E$vectors) Data + matrix(rep(1,n),nrow=n) %*% t(m) } Table R.4: The command that creates the R function binorm.sample, de- scribed in Section 13.2. This command is included in the file binorm.R.
  • 370. 368 APPENDIX R. A STATISTICAL PROGRAMMING LANGUAGE binorm.estimate <- function(Data) { # # This function estimates bivariate normal parameters from a # bivariate data matrix. Each row of the n-by-2 matrix Data # contains a single observation of (X,Y). The function returns # a vector of 5 estimated parameters: mean of X, mean of Y, # variance of X, variance of Y, correlation of (X,Y). # n <- nrow(Data) m <- c(sum(Data[,1]),sum(Data[,2]))/n v <- c(var(Data[,1]),var(Data[,2])) z1 <- (Data[,1]-m[1])/sqrt(v[1]) z2 <- (Data[,2]-m[2])/sqrt(v[2]) r <- sum(z1*z2)/(n-1) c(m,v,r) } Table R.5: The command that creates the R function binorm.estimate, described in Section 13.2. This command is included in the file binorm.R.
  • 371. R.2. USING R 369 binorm.scatter <- function(Data) { # # This function produces a scatter diagram of the bivariate data # contained in the n-by-2 data matrix Data. It also superimposes # the sample concentration ellipse. # n <- 628 xmin <- min(Data[,1]) xmax <- max(Data[,1]) xmid <- (xmin+xmax)/2 ymin <- min(Data[,2]) ymax <- max(Data[,2]) ymid <- (ymin+ymax)/2 dif <- max(xmax-xmin,ymax-ymin)/2 xlim <- c(xmid-dif,xmid+dif) ylim <- c(ymid-dif,ymid+dif) par(pty="s") plot(Data,xlab="x",ylab="y",xlim=xlim,ylim=ylim) title("Scatter Diagram") v <- binorm.estimate(Data) m <- matrix(v[1:2],nrow=2) off <- v[5] * sqrt(v[3]*v[4]) C <- matrix(c(v[3],off,off,v[4]),nrow=2) E <- eigen(C,symmetric=T) a <- 1:n/100 Y <- cbind(cos(a),sin(a)) Y <- Y %*% diag(sqrt(E$values)) %*% t(E$vectors) Y <- Y + matrix(rep(1,n),nrow=n) %*% t(m) lines(Y) } Table R.6: The command that creates the R function binorm.scatter, de- scribed in Section 13.2. This command is included in the file binorm.R.
  • 372. 370 APPENDIX R. A STATISTICAL PROGRAMMING LANGUAGE binorm.regress <- function(Data) { # # This function produces a scatter diagram of the bivariate data # contained in the n-by-2 data matrix Data. It also superimposes # the sample concentration ellipse and the regression line. # n <- 628 xmin <- min(Data[,1]) xmax <- max(Data[,1]) xmid <- (xmin+xmax)/2 ymin <- min(Data[,2]) ymax <- max(Data[,2]) ymid <- (ymin+ymax)/2 dif <- max(xmax-xmin,ymax-ymin)/2 xlim <- c(xmid-dif,xmid+dif) ylim <- c(ymid-dif,ymid+dif) par(pty="s") plot(Data,xlab="x",ylab="y",xlim=xlim,ylim=ylim) title("Regression Line") v <- binorm.estimate(Data) m <- matrix(v[1:2],nrow=2) off <- v[5] * sqrt(v[3]*v[4]) C <- matrix(c(v[3],off,off,v[4]),nrow=2) E <- eigen(C,symmetric=T) a <- 0:n/100 Y <- cbind(cos(a),sin(a)) Y <- Y %*% diag(sqrt(E$values)) %*% t(E$vectors) Y <- Y + matrix(rep(1,n+1),ncol=1) %*% t(m) lines(Y) x <- xlim[1] + (2*dif*(0:n))/n slope <- v[5] * sqrt(v[4]/v[3]) y <- v[2] + slope*(x-v[1]) Y <- cbind(x,y) Y <- Y[Y[,2] < ymax,] Y <- Y[Y[,2] > ymin,] lines(Y) } Table R.7: The command that creates the R function binorm.regress, de- scribed in Section 14.1. This command is included in the file binorm.R.
  • 373. R.2. USING R 371 forage <- function(initial,nsubsequent,nsim,maxT) { # # This function simulates nsim termite foraging histories. # initial is the vector of initially attacked rolls; # nsim is the number of subsequently attacked rolls. # The function returns a matrix in which the first column # contains values of the test statistic T (from nsubsequent # to maxT) and the second column contains the corresponding # number of histories that produced that value of T. # v <- rep(1:5,5) w <- rep(1:5,rep(5,5)) D <- cbind(v,w) D <- (diag(25) - matrix(1/25,25,25)) %*% D D <- D %*% t(D) v <- diag(D) H <- diag(v) %*% matrix(1,25,25) D <- H+t(H)-2*D D[D<0] <- 0 H <- matrix(100,25,25) for (rowi in 2:25) for (colj in 1:(rowi-1)) { H[rowi,colj] <- 0 } v <- 1:length(initial) w <- 1:(length(initial)+nsubsequent) pmf <- rep(0,maxT) for (isim in 1:nsim){ rolls <- c(initial, sample(x=(1:25)[-initial], size=nsubsequent, replace=F)) D0 <- D[rolls,rolls] + H[w,w] distance <- apply(D0,1,min) total <- round(sum(distance[-v])) if (total < maxT+0.5) { pmf[total] <- pmf[total]+1 } return(cbind(nsubsequent:maxT,pmf[-(1:(nsubsequent-1))])) } Table R.8: The command that creates the R function forage, described in Section 15.1. This command is included in the file termites.R.