Probability & Statistics
for Engineers & Scientists
Probability & Statistics for
Engineers & Scientists
NINTH EDITION
Ronald E. Walpole
Roanoke College
Raymond H. Myers
Virginia Tech
Sharon L. Myers
Radford University
Keying Ye
University of Texas at San Antonio
Prentice Hall
Editor in Chief: Deirdre Lynch
Acquisitions Editor: Christopher Cummings
Executive Content Editor: Christine O’Brien
Associate Editor: Christina Lepre
Senior Managing Editor: Karen Wernholm
Senior Production Project Manager: Tracy Patruno
Design Manager: Andrea Nix
Cover Designer: Heather Scott
Digital Assets Manager: Marianne Groth
Associate Media Producer: Vicki Dreyfus
Marketing Manager: Alex Gay
Marketing Assistant: Kathleen DeChavez
Senior Author Support/Technology Specialist: Joe Vetere
Rights and Permissions Advisor: Michael Joyce
Senior Manufacturing Buyer: Carol Melville
Production Coordination: Lifland et al. Bookmakers
Composition: Keying Ye
Cover photo: Marjory Dressler/Dressler Photo-Graphics
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as
trademarks. Where those designations appear in this book, and Pearson was aware of a trademark claim, the
designations have been printed in initial caps or all caps.
Library of Congress Cataloging-in-Publication Data
Probability & statistics for engineers & scientists/Ronald E. Walpole . . . [et al.] — 9th ed.
p. cm.
ISBN 978-0-321-62911-1
1. Engineering—Statistical methods. 2. Probabilities. I. Walpole, Ronald E.
TA340.P738 2011
519.02’462–dc22
2010004857
Copyright © 2012, 2007, 2002 Pearson Education, Inc. All rights reserved. No part of this publication may be
reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical,
photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the
United States of America. For information on obtaining permission for use of material in this work, please submit
a written request to Pearson Education, Inc., Rights and Contracts Department, 501 Boylston Street, Suite 900,
Boston, MA 02116, fax your request to 617-671-3447, or e-mail at http://www.pearsoned.com/legal/permissions.htm.
1 2 3 4 5 6 7 8 9 10—EB—14 13 12 11 10
ISBN 10: 0-321-62911-6
ISBN 13: 978-0-321-62911-1
This book is dedicated to
Billy and Julie
R.H.M. and S.L.M.
Limin, Carolyn and Emily
K.Y.
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
1 Introduction to Statistics and Data Analysis . . . . . . . . . . . 1
1.1 Overview: Statistical Inference, Samples, Populations, and the
Role of Probability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Sampling Procedures; Collection of Data . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Measures of Location: The Sample Mean and Median . . . . . . . . . . . 11
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4 Measures of Variability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Discrete and Continuous Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6 Statistical Modeling, Scientific Inspection, and Graphical Diagnostics . . . . . . . . . 18
1.7 General Types of Statistical Studies: Designed Experiment,
Observational Study, and Retrospective Study . . . . . . . . . . . . . . . . . . 27
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1 Sample Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.3 Counting Sample Points. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.4 Probability of an Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.5 Additive Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.6 Conditional Probability, Independence, and the Product Rule . . . 62
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.7 Bayes’ Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.8 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3 Random Variables and Probability Distributions . . . . . . 81
3.1 Concept of a Random Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.2 Discrete Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.3 Continuous Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.4 Joint Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.5 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4 Mathematical Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.1 Mean of a Random Variable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.2 Variance and Covariance of Random Variables. . . . . . . . . . . . . . . . . . . 119
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.3 Means and Variances of Linear Combinations of Random Variables 128
4.4 Chebyshev’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.5 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5 Some Discrete Probability Distributions . . . . . . . . . . . . . . . . 143
5.1 Introduction and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.2 Binomial and Multinomial Distributions. . . . . . . . . . . . . . . . . . . . . . . . . 143
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
5.3 Hypergeometric Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.4 Negative Binomial and Geometric Distributions . . . . . . . . . . . . . . . . . 158
5.5 Poisson Distribution and the Poisson Process. . . . . . . . . . . . . . . . . . . . 161
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.6 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6 Some Continuous Probability Distributions. . . . . . . . . . . . . 171
6.1 Continuous Uniform Distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.2 Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.3 Areas under the Normal Curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.4 Applications of the Normal Distribution. . . . . . . . . . . . . . . . . . . . . . . . . 182
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.5 Normal Approximation to the Binomial . . . . . . . . . . . . . . . . . . . . . . . . . 187
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
6.6 Gamma and Exponential Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.7 Chi-Squared Distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
6.8 Beta Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6.9 Lognormal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6.10 Weibull Distribution (Optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
6.11 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
7 Functions of Random Variables (Optional). . . . . . . . . . . . . . 211
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
7.2 Transformations of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
7.3 Moments and Moment-Generating Functions . . . . . . . . . . . . . . . . . . . . 218
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8 Fundamental Sampling Distributions and
Data Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
8.1 Random Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
8.2 Some Important Statistics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
8.3 Sampling Distributions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
8.4 Sampling Distribution of Means and the Central Limit Theorem. 233
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
8.5 Sampling Distribution of S^2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
8.6 t-Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
8.7 F-Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
8.8 Quantile and Probability Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
8.9 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
9 One- and Two-Sample Estimation Problems. . . . . . . . . . . . 265
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
9.2 Statistical Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
9.3 Classical Methods of Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
9.4 Single Sample: Estimating the Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
9.5 Standard Error of a Point Estimate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
9.6 Prediction Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
9.7 Tolerance Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
9.8 Two Samples: Estimating the Difference between Two Means . . . 285
9.9 Paired Observations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
9.10 Single Sample: Estimating a Proportion . . . . . . . . . . . . . . . . . . . . . . . . . 296
9.11 Two Samples: Estimating the Difference between Two Proportions 300
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
9.12 Single Sample: Estimating the Variance . . . . . . . . . . . . . . . . . . . . . . . . . 303
9.13 Two Samples: Estimating the Ratio of Two Variances . . . . . . . . . . . 305
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
9.14 Maximum Likelihood Estimation (Optional). . . . . . . . . . . . . . . . . . . . . 307
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
9.15 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
10 One- and Two-Sample Tests of Hypotheses . . . . . . . . . . . . . 319
10.1 Statistical Hypotheses: General Concepts . . . . . . . . . . . . . . . . . . . . . . . 319
10.2 Testing a Statistical Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
10.3 The Use of P-Values for Decision Making in Testing Hypotheses. 331
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
10.4 Single Sample: Tests Concerning a Single Mean . . . . . . . . . . . . . . . . . 336
10.5 Two Samples: Tests on Two Means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
10.6 Choice of Sample Size for Testing Means . . . . . . . . . . . . . . . . . . . . . . . . 349
10.7 Graphical Methods for Comparing Means . . . . . . . . . . . . . . . . . . . . . . . 354
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.8 One Sample: Test on a Single Proportion. . . . . . . . . . . . . . . . . . . . . . . . 360
10.9 Two Samples: Tests on Two Proportions . . . . . . . . . . . . . . . . . . . . . . . . 363
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
10.10 One- and Two-Sample Tests Concerning Variances . . . . . . . . . . . . . . 366
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
10.11 Goodness-of-Fit Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
10.12 Test for Independence (Categorical Data) . . . . . . . . . . . . . . . . . . . . . . . 373
10.13 Test for Homogeneity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
10.14 Two-Sample Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
10.15 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
11 Simple Linear Regression and Correlation . . . . . . . . . . . . . . 389
11.1 Introduction to Linear Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
11.2 The Simple Linear Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
11.3 Least Squares and the Fitted Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
11.4 Properties of the Least Squares Estimators . . . . . . . . . . . . . . . . . . . . . . 400
11.5 Inferences Concerning the Regression Coefficients. . . . . . . . . . . . . . . . 403
11.6 Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
11.7 Choice of a Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
11.8 Analysis-of-Variance Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
11.9 Test for Linearity of Regression: Data with Repeated Observations 416
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
11.10 Data Plots and Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
11.11 Simple Linear Regression Case Study. . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
11.12 Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
11.13 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
12 Multiple Linear Regression and Certain
Nonlinear Regression Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
12.2 Estimating the Coefficients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
12.3 Linear Regression Model Using Matrices . . . . . . . . . . . . . . . . . . . . . . . . 447
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
12.4 Properties of the Least Squares Estimators . . . . . . . . . . . . . . . . . . . . . . 453
12.5 Inferences in Multiple Linear Regression. . . . . . . . . . . . . . . . . . . . . . . . . 455
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
12.6 Choice of a Fitted Model through Hypothesis Testing . . . . . . . . . . . 462
12.7 Special Case of Orthogonality (Optional) . . . . . . . . . . . . . . . . . . . . . . . . 467
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
12.8 Categorical or Indicator Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
12.9 Sequential Methods for Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . 476
12.10 Study of Residuals and Violation of Assumptions (Model Checking) . . . . . . . . . 482
12.11 Cross Validation, Cp, and Other Criteria for Model Selection. . . . 487
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
12.12 Special Nonlinear Models for Nonideal Conditions . . . . . . . . . . . . . . . 496
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
12.13 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
13 One-Factor Experiments: General. . . . . . . . . . . . . . . . . . . . . . . . 507
13.1 Analysis-of-Variance Technique. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
13.2 The Strategy of Experimental Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
13.3 One-Way Analysis of Variance: Completely Randomized Design
(One-Way ANOVA). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
13.4 Tests for the Equality of Several Variances . . . . . . . . . . . . . . . . . . . . . . 516
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
13.5 Single-Degree-of-Freedom Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . 520
13.6 Multiple Comparisons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
13.7 Comparing a Set of Treatments in Blocks . . . . . . . . . . . . . . . . . . . . . . . 532
13.8 Randomized Complete Block Designs. . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
13.9 Graphical Methods and Model Checking . . . . . . . . . . . . . . . . . . . . . . . . 540
13.10 Data Transformations in Analysis of Variance . . . . . . . . . . . . . . . . . . . 543
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
13.11 Random Effects Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
13.12 Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
13.13 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
14 Factorial Experiments (Two or More Factors). . . . . . . . . . 561
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
14.2 Interaction in the Two-Factor Experiment. . . . . . . . . . . . . . . . . . . . . . . 562
14.3 Two-Factor Analysis of Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
14.4 Three-Factor Experiments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
14.5 Factorial Experiments for Random Effects and Mixed Models. . . . 588
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
14.6 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
15 2^k Factorial Experiments and Fractions . . . . . . . . . . . . . . . . . . . . . . 597
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
15.2 The 2^k Factorial: Calculation of Effects and Analysis of Variance 598
15.3 Nonreplicated 2^k Factorial Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . 604
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
15.4 Factorial Experiments in a Regression Setting . . . . . . . . . . . . . . . . . . . 612
15.5 The Orthogonal Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
15.6 Fractional Factorial Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
15.7 Analysis of Fractional Factorial Experiments . . . . . . . . . . . . . . . . . . . . 632
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
15.8 Higher Fractions and Screening Designs . . . . . . . . . . . . . . . . . . . . . . . . . 636
15.9 Construction of Resolution III and IV Designs with 8, 16, and 32
Design Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
15.10 Other Two-Level Resolution III Designs; The Plackett-Burman
Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
15.11 Introduction to Response Surface Methodology . . . . . . . . . . . . . . . . . . 639
15.12 Robust Parameter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
15.13 Potential Misconceptions and Hazards; Relationship to Material
in Other Chapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
16 Nonparametric Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
16.1 Nonparametric Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
16.2 Signed-Rank Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
16.3 Wilcoxon Rank-Sum Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
16.4 Kruskal-Wallis Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
16.5 Runs Test. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
16.6 Tolerance Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
16.7 Rank Correlation Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
17 Statistical Quality Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
17.2 Nature of the Control Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
17.3 Purposes of the Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
17.4 Control Charts for Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
17.5 Control Charts for Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
17.6 Cusum Control Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
Review Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
18 Bayesian Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
18.1 Bayesian Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
18.2 Bayesian Inferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
18.3 Bayes Estimates Using Decision Theory Framework . . . . . . . . . . . . . 717
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
Appendix A: Statistical Tables and Proofs. . . . . . . . . . . . . . . . . . 725
Appendix B: Answers to Odd-Numbered Non-Review
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
Preface
General Approach and Mathematical Level
Our emphasis in creating the ninth edition is less on adding new material and more on providing clarity and deeper understanding. This objective was accomplished in part by including new end-of-chapter material that adds connective tissue between chapters. We affectionately call these comments at the end of the chapter “Pot Holes.” They are very useful to remind students of the big picture and how each chapter fits into that picture, and they aid the student in learning about limitations and pitfalls that may result if procedures are misused. A deeper understanding of real-world use of statistics is made available through class projects, which were added in several chapters. These projects provide the opportunity for students, alone or in groups, to gather their own experimental data and draw inferences. In some cases, the work involves a problem whose solution will illustrate the meaning of a concept or provide an empirical understanding of an important statistical result. Some existing examples were expanded and new ones were introduced to create “case studies,” in which commentary is provided to give the student a clear understanding of a statistical concept in the context of a practical situation.
In this edition, we continue to emphasize a balance between theory and appli-
cations. Calculus and other types of mathematical support (e.g., linear algebra)
are used at about the same level as in previous editions. The coverage of an-
alytical tools in statistics is enhanced with the use of calculus when discussion
centers on rules and concepts in probability. Probability distributions and sta-
tistical inference are highlighted in Chapters 2 through 10. Linear algebra and
matrices are very lightly applied in Chapters 11 through 15, where linear regres-
sion and analysis of variance are covered. Students using this text should have
had the equivalent of one semester of differential and integral calculus. Linear
algebra is helpful but not necessary so long as the section in Chapter 12 on mul-
tiple linear regression using matrix algebra is not covered by the instructor. As
in previous editions, a large number of exercises that deal with real-life scientific
and engineering applications are available to challenge the student. The many
data sets associated with the exercises are available for download from the website
http://www.pearsonhighered.com/datasets.
Summary of the Changes in the Ninth Edition
• Class projects were added in several chapters to provide a deeper understand-
ing of the real-world use of statistics. Students are asked to produce or gather
their own experimental data and draw inferences from these data.
• More case studies were added and others expanded to help students under-
stand the statistical methods being presented in the context of a real-life situ-
ation. For example, the interpretation of confidence limits, prediction limits,
and tolerance limits is given using a real-life situation.
• “Pot Holes” were added at the end of some chapters and expanded in others.
These comments are intended to present each chapter in the context of the
big picture and discuss how the chapters relate to one another. They also
provide cautions about the possible misuse of statistical techniques presented
in the chapter.
• Chapter 1 has been enhanced to include more on single-number statistics as
well as graphical techniques. New fundamental material on sampling and
experimental design is presented.
• Examples added to Chapter 8 on sampling distributions are intended to moti-
vate P-values and hypothesis testing. This prepares the student for the more
challenging material on these topics that will be presented in Chapter 10.
• Chapter 12 contains additional development regarding the effect of a single
regression variable in a model in which collinearity with other variables is
severe.
• Chapter 15 now introduces material on the important topic of response surface
methodology (RSM). The use of noise variables in RSM allows the illustration
of mean and variance (dual response surface) modeling.
• The central composite design (CCD) is introduced in Chapter 15.
• More examples are given in Chapter 18, and the discussion of using Bayesian
methods for statistical decision making has been enhanced.
Content and Course Planning
This text is designed for either a one- or a two-semester course. A reasonable
plan for a one-semester course might include Chapters 1 through 10. This would
result in a curriculum that concluded with the fundamentals of both estimation
and hypothesis testing. Instructors who desire that students be exposed to simple
linear regression may wish to include a portion of Chapter 11. For instructors
who desire to have analysis of variance included rather than regression, the one-
semester course may include Chapter 13 rather than Chapters 11 and 12. Chapter
13 features one-factor analysis of variance. Another option is to eliminate portions
of Chapters 5 and/or 6 as well as Chapter 7. With this option, one or more of
the discrete or continuous distributions in Chapters 5 and 6 may be eliminated.
These distributions include the negative binomial, geometric, gamma, Weibull,
beta, and log normal distributions. Other features that one might consider re-
moving from a one-semester curriculum include maximum likelihood estimation,
prediction, and/or tolerance limits in Chapter 9. A one-semester curriculum has
built-in flexibility, depending on the relative interest of the instructor in regression,
analysis of variance, experimental design, and response surface methods (Chapter
15). There are several discrete and continuous distributions (Chapters 5 and 6)
that have applications in a variety of engineering and scientific areas.
Chapters 11 through 18 contain substantial material that can be added for the
second semester of a two-semester course. The material on simple and multiple
linear regression is in Chapters 11 and 12, respectively. Chapter 12 alone offers a
substantial amount of flexibility. Multiple linear regression includes such “special
topics” as categorical or indicator variables, sequential methods of model selection
such as stepwise regression, the study of residuals for the detection of violations
of assumptions, cross validation and the use of the PRESS statistic as well as
Cp, and logistic regression. The use of orthogonal regressors, a precursor to the
experimental design in Chapter 15, is highlighted. Chapters 13 and 14 offer a
relatively large amount of material on analysis of variance (ANOVA) with fixed,
random, and mixed models. Chapter 15 highlights the application of two-level
designs in the context of full and fractional factorial experiments (2^k). Special
screening designs are illustrated. Chapter 15 also features a new section on response
surface methodology (RSM) to illustrate the use of experimental design for finding
optimal process conditions. The fitting of a second-order model through the use of
a central composite design is discussed. RSM is expanded to cover the analysis of
robust parameter design type problems. Noise variables are used to accommodate
dual response surface models. Chapters 16, 17, and 18 contain a moderate amount
of material on nonparametric statistics, quality control, and Bayesian inference.
Chapter 1 is an overview of statistical inference presented on a mathematically
simple level. It has been expanded from the eighth edition to more thoroughly
cover single-number statistics and graphical techniques. It is designed to give
students a preliminary presentation of elementary concepts that will allow them to
understand more involved details that follow. Elementary concepts in sampling,
data collection, and experimental design are presented, and rudimentary aspects
of graphical tools are introduced, as well as a sense of what is garnered from a
data set. Stem-and-leaf plots and box-and-whisker plots have been added. Graphs
are better organized and labeled. The discussion of uncertainty and variation in
a system is thorough and well illustrated. There are examples of how to sort
out the important characteristics of a scientific process or system, and these ideas
are illustrated in practical settings such as manufacturing processes, biomedical
studies, and studies of biological and other scientific systems. A contrast is made
between the use of discrete and continuous data. Emphasis is placed on the use
of models and the information concerning statistical models that can be obtained
from graphical tools.
Chapters 2, 3, and 4 deal with basic probability as well as discrete and contin-
uous random variables. Chapters 5 and 6 focus on specific discrete and continuous
distributions as well as relationships among them. These chapters also highlight
examples of applications of the distributions in real-life scientific and engineering
studies. Examples, case studies, and a large number of exercises edify the student
concerning the use of these distributions. Projects bring the practical use of these
distributions to life through group work. Chapter 7 is the most theoretical chapter
in the text. It deals with transformation of random variables and will likely not be
used unless the instructor wishes to teach a relatively theoretical course. Chapter
8 contains graphical material, expanding on the more elementary set of graphi-
cal tools presented and illustrated in Chapter 1. Probability plotting is discussed
and illustrated with examples. The very important concept of sampling distribu-
tions is presented thoroughly, and illustrations are given that involve the central
limit theorem and the distribution of a sample variance under normal, independent
(i.i.d.) sampling. The t and F distributions are introduced to motivate their use
in chapters to follow. New material in Chapter 8 helps the student to visualize the
importance of hypothesis testing, motivating the concept of a P-value.
Chapter 9 contains material on one- and two-sample point and interval esti-
mation. A thorough discussion with examples points out the contrast between the
different types of intervals—confidence intervals, prediction intervals, and toler-
ance intervals. A case study illustrates the three types of statistical intervals in the
context of a manufacturing situation. This case study highlights the differences
among the intervals, their sources, and the assumptions made in their develop-
ment, as well as what type of scientific study or question requires the use of each
one. A new approximation method has been added for the inference concerning a
proportion. Chapter 10 begins with a basic presentation on the pragmatic mean-
ing of hypothesis testing, with emphasis on such fundamental concepts as null and
alternative hypotheses, the role of probability and the P-value, and the power of
a test. Following this, illustrations are given of tests concerning one and two sam-
ples under standard conditions. The two-sample t-test with paired observations
is also described. A case study helps the student to develop a clear picture of
what interaction among factors really means as well as the dangers that can arise
when interaction between treatments and experimental units exists. At the end of
Chapter 10 is a very important section that relates Chapters 9 and 10 (estimation
and hypothesis testing) to Chapters 11 through 16, where statistical modeling is
prominent. It is important that the student be aware of the strong connection.
Chapters 11 and 12 contain material on simple and multiple linear regression,
respectively. Considerably more attention is given in this edition to the effect that
collinearity among the regression variables plays. A situation is presented that
shows how the role of a single regression variable can depend in large part on what
regressors are in the model with it. The sequential model selection procedures (for-
ward, backward, stepwise, etc.) are then revisited in regard to this concept, and
the rationale for using certain P-values with these procedures is provided. Chap-
ter 12 offers material on nonlinear modeling with a special presentation of logistic
regression, which has applications in engineering and the biological sciences. The
material on multiple regression is quite extensive and thus provides considerable
flexibility for the instructor, as indicated earlier. At the end of Chapter 12 is com-
mentary relating that chapter to Chapters 14 and 15. Several features were added
that provide a better understanding of the material in general. For example, the
end-of-chapter material deals with cautions and difficulties one might encounter.
It is pointed out that there are types of responses that occur naturally in practice
(e.g. proportion responses, count responses, and several others) with which stan-
dard least squares regression should not be used because standard assumptions do
not hold and violation of assumptions may induce serious errors. The suggestion is
made that data transformation on the response may alleviate the problem in some
cases. Flexibility is again available in Chapters 13 and 14, on the topic of analysis
of variance. Chapter 13 covers one-factor ANOVA in the context of a completely
randomized design. Complementary topics include tests on variances and multiple
comparisons. Comparisons of treatments in blocks are highlighted, along with the
topic of randomized complete blocks. Graphical methods are extended to ANOVA
to aid the student in supplementing the formal inference with a pictorial type of in-
ference that can aid scientists and engineers in presenting material. A new project
is given in which students incorporate the appropriate randomization into each
plan and use graphical techniques and P-values in reporting the results. Chapter
14 extends the material in Chapter 13 to accommodate two or more factors that
are in a factorial structure. The ANOVA presentation in Chapter 14 includes work
in both random and fixed effects models. Chapter 15 offers material associated
with 2^k factorial designs; examples and case studies present the use of screening
designs and special higher fractions of the 2^k. Two new and special features are
the presentations of response surface methodology (RSM) and robust parameter
design. These topics are linked in a case study that describes and illustrates a
dual response surface design and analysis featuring the use of process mean and
variance response surfaces.
Computer Software
Case studies, beginning in Chapter 8, feature computer printout and graphical
material generated using both SAS and MINITAB. The inclusion of the computer
reflects our belief that students should have the experience of reading and inter-
preting computer printout and graphics, even if the software in the text is not that
which is used by the instructor. Exposure to more than one type of software can
broaden the experience base for the student. There is no reason to believe that
the software used in the course will be that which the student will be called upon
to use in practice following graduation. Examples and case studies in the text are
supplemented, where appropriate, by various types of residual plots, quantile plots,
normal probability plots, and other plots. Such plots are particularly prevalent in
Chapters 11 through 15.
Supplements
Instructor’s Solutions Manual. This resource contains worked-out solutions to all
text exercises and is available for download from Pearson Education’s Instructor
Resource Center.
Student Solutions Manual ISBN-10: 0-321-64013-6; ISBN-13: 978-0-321-64013-0.
Featuring complete solutions to selected exercises, this is a great tool for students
as they study and work through the problem material.
PowerPoint® Lecture Slides ISBN-10: 0-321-73731-8; ISBN-13: 978-0-321-73731-1.
These slides include most of the figures and tables from the text. Slides are
available to download from Pearson Education’s Instructor Resource Center.
StatCrunch eText. This interactive, online textbook includes StatCrunch, a pow-
erful, web-based statistical software. Embedded StatCrunch buttons allow users
to open all data sets and tables from the book with the click of a button and
immediately perform an analysis using StatCrunch.
StatCrunch™. StatCrunch is web-based statistical software that allows users to
perform complex analyses, share data sets, and generate compelling reports of
their data. Users can upload their own data to StatCrunch or search the library
of over twelve thousand publicly shared data sets, covering almost any topic of
interest. Interactive graphical outputs help users understand statistical concepts
and are available for export to enrich reports with visual representations of data.
Additional features include
• A full range of numerical and graphical methods that allow users to analyze
and gain insights from any data set.
• Reporting options that help users create a wide variety of visually appealing
representations of their data.
• An online survey tool that allows users to quickly build and administer surveys
via a web form.
StatCrunch is available to qualified adopters. For more information, visit our
website at www.statcrunch.com or contact your Pearson representative.
Acknowledgments
We are indebted to those colleagues who reviewed the previous editions of this book
and provided many helpful suggestions for this edition. They are David Groggel,
Miami University; Lance Hemlow, Raritan Valley Community College; Ying Ji,
University of Texas at San Antonio; Thomas Kline, University of Northern Iowa;
Sheila Lawrence, Rutgers University; Luis Moreno, Broome County Community
College; Donald Waldman, University of Colorado—Boulder; and Marlene Will,
Spalding University. We would also like to thank Delray Schulz, Millersville Uni-
versity; Roxane Burrows, Hocking College; and Frank Chmely for ensuring the
accuracy of this text.
We would like to thank the editorial and production services provided by nu-
merous people from Pearson/Prentice Hall, especially the editor in chief Deirdre
Lynch, acquisitions editor Christopher Cummings, executive content editor Chris-
tine O’Brien, production editor Tracy Patruno, and copyeditor Sally Lifland. Many
useful comments and suggestions by proofreader Gail Magin are greatly appreci-
ated. We thank the Virginia Tech Statistical Consulting Center, which was the
source of many real-life data sets.
R.H.M.
S.L.M.
K.Y.
Chapter 1
Introduction to Statistics
and Data Analysis
1.1 Overview: Statistical Inference, Samples, Populations,
and the Role of Probability
Beginning in the 1980s and continuing into the 21st century, an inordinate amount
of attention has been focused on improvement of quality in American industry.
Much has been said and written about the Japanese “industrial miracle,” which
began in the middle of the 20th century. The Japanese were able to succeed where
we and other countries had failed: namely, to create an atmosphere that allows
the production of high-quality products. Much of the success of the Japanese has
been attributed to the use of statistical methods and statistical thinking among
management personnel.
Use of Scientific Data
The use of statistical methods in manufacturing, development of food products,
computer software, energy sources, pharmaceuticals, and many other areas involves
the gathering of information or scientific data. Of course, the gathering of data
is nothing new. It has been done for well over a thousand years. Data have
been collected, summarized, reported, and stored for perusal. However, there is a
profound distinction between collection of scientific information and inferential
statistics. It is the latter that has received rightful attention in recent decades.
The offspring of inferential statistics has been a large “toolbox” of statistical
methods employed by statistical practitioners. These statistical methods are de-
signed to contribute to the process of making scientific judgments in the face of
uncertainty and variation. The product density of a particular material from a
manufacturing process will not always be the same. Indeed, if the process involved
is a batch process rather than continuous, there will be not only variation in ma-
terial density among the batches that come off the line (batch-to-batch variation),
but also within-batch variation. Statistical methods are used to analyze data from
a process such as this one in order to gain more sense of where in the process
changes may be made to improve the quality of the process. In this process, qual-
ity may well be defined in relation to closeness to a target density value in harmony
with what portion of the time this closeness criterion is met. An engineer may be
concerned with a specific instrument that is used to measure sulfur monoxide in
the air during pollution studies. If the engineer has doubts about the effectiveness
of the instrument, there are two sources of variation that must be dealt with.
The first is the variation in sulfur monoxide values that are found at the same
locale on the same day. The second is the variation between values observed and
the true amount of sulfur monoxide that is in the air at the time. If either of these
two sources of variation is exceedingly large (according to some standard set by
the engineer), the instrument may need to be replaced. In a biomedical study of a
new drug that reduces hypertension, 85% of patients experienced relief, while it is
generally recognized that the current drug, or “old” drug, brings relief to 80% of pa-
tients that have chronic hypertension. However, the new drug is more expensive to
make and may result in certain side effects. Should the new drug be adopted? This
is a problem that is encountered (often with much more complexity) frequently by
pharmaceutical firms in conjunction with the FDA (Food and Drug Administration).
Again, the consideration of variation needs to be taken into account. The “85%”
value is based on a certain number of patients chosen for the study. Perhaps if the
study were repeated with new patients the observed number of “successes” would
be 75%! It is the natural variation from study to study that must be taken into
account in the decision process. Clearly this variation is important, since variation
from patient to patient is endemic to the problem.
Variability in Scientific Data
In the problems discussed above the statistical methods used involve dealing with
variability, and in each case the variability to be studied is that encountered in
scientific data. If the observed product density in the process were always the
same and were always on target, there would be no need for statistical methods.
If the device for measuring sulfur monoxide always gives the same value and the
value is accurate (i.e., it is correct), no statistical analysis is needed. If there
were no patient-to-patient variability inherent in the response to the drug (i.e.,
it either always brings relief or not), life would be simple for scientists in the
pharmaceutical firms and FDA and no statistician would be needed in the decision
process. Statistics researchers have produced an enormous number of analytical
methods that allow for analysis of data from systems like those described above.
This reflects the true nature of the science that we call inferential statistics, namely,
using techniques that allow us to go beyond merely reporting data to drawing
conclusions (or inferences) about the scientific system. Statisticians make use of
fundamental laws of probability and statistical inference to draw conclusions about
scientific systems. Information is gathered in the form of samples, or collections
of observations. The process of sampling is introduced in Chapter 2, and the
discussion continues throughout the entire book.
Samples are collected from populations, which are collections of all individ-
uals or individual items of a particular type. At times a population signifies a
scientific system. For example, a manufacturer of computer boards may wish to
eliminate defects. A sampling process may involve collecting information on 50
computer boards sampled randomly from the process. Here, the population is all
computer boards manufactured by the firm over a specific period of time. If an
improvement is made in the computer board process and a second sample of boards
is collected, any conclusions drawn regarding the effectiveness of the change in pro-
cess should extend to the entire population of computer boards produced under
the “improved process.” In a drug experiment, a sample of patients is taken and
each is given a specific drug to reduce blood pressure. The interest is focused on
drawing conclusions about the population of those who suffer from hypertension.
Often, it is very important to collect scientific data in a systematic way, with
planning being high on the agenda. At times the planning is, by necessity, quite
limited. We often focus only on certain properties or characteristics of the items or
objects in the population. Each characteristic has particular engineering or, say,
biological importance to the “customer,” the scientist or engineer who seeks to learn
about the population. For example, in one of the illustrations above the quality
of the process had to do with the product density of the output of a process. An
engineer may need to study the effect of process conditions, temperature, humidity,
amount of a particular ingredient, and so on. He or she can systematically move
these factors to whatever levels are suggested according to whatever prescription
or experimental design is desired. However, a forest scientist who is interested
in a study of factors that influence wood density in a certain kind of tree cannot
necessarily design an experiment. This case may require an observational study
in which data are collected in the field but factor levels can not be preselected.
Both of these types of studies lend themselves to methods of statistical inference.
In the former, the quality of the inferences will depend on proper planning of the
experiment. In the latter, the scientist is at the mercy of what can be gathered.
For example, it is sad if an agronomist is interested in studying the effect of rainfall
on plant yield and the data are gathered during a drought.
The importance of statistical thinking by managers and the use of statistical
inference by scientific personnel is widely acknowledged. Research scientists gain
much from scientific data. Data provide understanding of scientific phenomena.
Product and process engineers learn a great deal in their off-line efforts to improve
the process. They also gain valuable insight by gathering production data (on-
line monitoring) on a regular basis. This allows them to determine necessary
modifications in order to keep the process at a desired level of quality.
There are times when a scientific practitioner wishes only to gain some sort of
summary of a set of data represented in the sample. In other words, inferential
statistics is not required. Rather, a set of single-number statistics or descriptive
statistics is helpful. These numbers give a sense of center of the location of
the data, variability in the data, and the general nature of the distribution of
observations in the sample. Though no specific statistical methods leading to
statistical inference are incorporated, much can be learned. At times, descriptive
statistics are accompanied by graphics. Modern statistical software packages allow
for computation of means, medians, standard deviations, and other single-
number statistics as well as production of graphs that show a “footprint” of the
nature of the sample. Definitions and illustrations of the single-number statistics
and graphs, including histograms, stem-and-leaf plots, scatter plots, dot plots, and
box plots, will be given in sections that follow.
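As a small illustration of such single-number statistics (using hypothetical density measurements, not data from the text), the mean, median, and standard deviation can be computed with Python's standard statistics module:

```python
import statistics

# Hypothetical sample of product densities (illustrative values only)
sample = [2.71, 2.84, 2.79, 2.66, 2.90, 2.75, 2.81, 2.73]

center = statistics.mean(sample)    # a sense of the center of the data
middle = statistics.median(sample)  # a robust alternative to the mean
spread = statistics.stdev(sample)   # sample standard deviation: variability in the data

print(f"mean = {center:.3f}, median = {middle:.3f}, std dev = {spread:.3f}")
```

Numbers like these give the "footprint" of a sample described above; they summarize without drawing any formal inference.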
The Role of Probability
In this book, Chapters 2 to 6 deal with fundamental notions of probability. A
thorough grounding in these concepts allows the reader to have a better under-
standing of statistical inference. Without some formalism of probability theory,
the student cannot appreciate the true interpretation from data analysis through
modern statistical methods. It is quite natural to study probability prior to study-
ing statistical inference. Elements of probability allow us to quantify the strength
or “confidence” in our conclusions. In this sense, concepts in probability form a
major component that supplements statistical methods and helps us gauge the
strength of the statistical inference. The discipline of probability, then, provides
the transition between descriptive statistics and inferential methods. Elements of
probability allow the conclusion to be put into the language that the science or
engineering practitioners require. An example follows that will enable the reader
to understand the notion of a P-value, which often provides the “bottom line” in
the interpretation of results from the use of statistical methods.
Example 1.1: Suppose that an engineer encounters data from a manufacturing process in which
100 items are sampled and 10 are found to be defective. It is expected and antic-
ipated that occasionally there will be defective items. Obviously these 100 items
represent the sample. However, it has been determined that in the long run, the
company can only tolerate 5% defective in the process. Now, the elements of prob-
ability allow the engineer to determine how conclusive the sample information is
regarding the nature of the process. In this case, the population conceptually
represents all possible items from the process. Suppose we learn that if the process
is acceptable, that is, if it does produce items no more than 5% of which are de-
fective, there is a probability of 0.0282 of obtaining 10 or more defective items in
a random sample of 100 items from the process. This small probability suggests
that the process does, indeed, have a long-run rate of defective items that exceeds
5%. In other words, under the condition of an acceptable process, the sample in-
formation obtained would rarely occur. However, it did occur! Clearly, though, it
would occur with a much higher probability if the process defective rate exceeded
5% by a significant amount.
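The 0.0282 figure can be checked directly from the binomial distribution: under an acceptable process (defective rate p = 0.05), the probability of seeing 10 or more defectives among 100 sampled items is the upper tail P(X ≥ 10). A minimal sketch using only the Python standard library:

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability of 10 or more defectives in a sample of 100 when the true rate is 5%
p_value = binom_tail(100, 0.05, 10)
print(round(p_value, 4))  # 0.0282, the probability quoted in the example
```

The binomial distribution itself is developed formally in Chapter 5; this computation simply anticipates it.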
From this example it becomes clear that the elements of probability aid in the
translation of sample information into something conclusive or inconclusive about
the scientific system. In fact, what was learned likely is alarming information to
the engineer or manager. Statistical methods, which we will actually detail in
Chapter 10, produced a P-value of 0.0282. The result suggests that the process
very likely is not acceptable. The concept of a P-value is dealt with at length
in succeeding chapters. The example that follows provides a second illustration.
Example 1.2: Often the nature of the scientific study will dictate the role that probability and
deductive reasoning play in statistical inference. Exercise 9.40 on page 294 provides
data associated with a study conducted at the Virginia Polytechnic Institute and
State University on the development of a relationship between the roots of trees and
the action of a fungus. Minerals are transferred from the fungus to the trees and
sugars from the trees to the fungus. Two samples of 10 northern red oak seedlings
were planted in a greenhouse, one containing seedlings treated with nitrogen and
the other containing seedlings with no nitrogen. All other environmental conditions
were held constant. All seedlings contained the fungus Pisolithus tinctorus. More
details are supplied in Chapter 9. The stem weights in grams were recorded after
the end of 140 days. The data are given in Table 1.1.
Table 1.1: Data Set for Example 1.2

    No Nitrogen    Nitrogen
       0.32          0.26
       0.53          0.43
       0.28          0.47
       0.37          0.49
       0.47          0.52
       0.43          0.75
       0.36          0.79
       0.42          0.86
       0.38          0.62
       0.43          0.46
Figure 1.1: A dot plot of stem weight data.
In this example there are two samples from two separate populations. The
purpose of the experiment is to determine if the use of nitrogen has an influence
on the growth of the roots. The study is a comparative study (i.e., we seek to
compare the two populations with regard to a certain important characteristic). It
is instructive to plot the data as shown in the dot plot of Figure 1.1. The ◦ values
represent the “nitrogen” data and the × values represent the “no-nitrogen” data.
Notice that the general appearance of the data might suggest to the reader
that, on average, the use of nitrogen increases the stem weight. Four nitrogen ob-
servations are considerably larger than any of the no-nitrogen observations. Most
of the no-nitrogen observations appear to be below the center of the data. The
appearance of the data set would seem to indicate that nitrogen is effective. But
how can this be quantified? How can all of the apparent visual evidence be summa-
rized in some sense? As in the preceding example, the fundamentals of probability
can be used. The conclusions may be summarized in a probability statement or
P-value. We will not show here the statistical inference that produces the summary
probability. As in Example 1.1, these methods will be discussed in Chapter 10.
The issue revolves around the “probability that data like these could be observed”
given that nitrogen has no effect, in other words, given that both samples were
generated from the same population. Suppose that this probability is small, say
0.03. That would certainly be strong evidence that the use of nitrogen does indeed
influence (apparently increases) average stem weight of the red oak seedlings.
6 Chapter 1 Introduction to Statistics and Data Analysis
How Do Probability and Statistical Inference Work Together?
It is important for the reader to understand the clear distinction between the
discipline of probability, a science in its own right, and the discipline of inferen-
tial statistics. As we have already indicated, the use or application of concepts in
probability allows real-life interpretation of the results of statistical inference. As a
result, it can be said that statistical inference makes use of concepts in probability.
One can glean from the two examples above that the sample information is made
available to the analyst and, with the aid of statistical methods and elements of
probability, conclusions are drawn about some feature of the population (the pro-
cess does not appear to be acceptable in Example 1.1, and nitrogen does appear
to influence average stem weights in Example 1.2). Thus for a statistical problem,
the sample along with inferential statistics allows us to draw conclu-
sions about the population, with inferential statistics making clear use
of elements of probability. This reasoning is inductive in nature. Now as we
move into Chapter 2 and beyond, the reader will note that, unlike what we do in
our two examples here, we will not focus on solving statistical problems. Many
examples will be given in which no sample is involved. There will be a population
clearly described with all features of the population known. Then questions of im-
portance will focus on the nature of data that might hypothetically be drawn from
the population. Thus, one can say that elements in probability allow us to
draw conclusions about characteristics of hypothetical data taken from
the population, based on known features of the population. This type of
reasoning is deductive in nature. Figure 1.2 shows the fundamental relationship
between probability and inferential statistics.
Population Sample
Probability
Statistical Inference
Figure 1.2: Fundamental relationship between probability and inferential statistics.
Now, in the grand scheme of things, which is more important, the field of
probability or the field of statistics? They are both very important and clearly are
complementary. The only certainty concerning the pedagogy of the two disciplines
lies in the fact that if statistics is to be taught at more than merely a “cookbook”
level, then the discipline of probability must be taught first. This rule stems from
the fact that nothing can be learned about a population from a sample until the
analyst learns the rudiments of uncertainty in that sample. For example, consider
Example 1.1. The question centers around whether or not the population, defined
by the process, is no more than 5% defective. In other words, the conjecture is that
on the average 5 out of 100 items are defective. Now, the sample contains 100
items and 10 are defective. Does this support the conjecture or refute it? On the
surface it would appear to be a refutation of the conjecture because 10 out of 100
seem to be “a bit much.” But without elements of probability, how do we know?
Only through the study of material in future chapters will we learn the conditions
under which the process is acceptable (5% defective). The probability of obtaining
10 or more defective items in a sample of 100 is 0.0282.
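The 0.0282 figure quoted above comes from the binomial distribution. A minimal sketch of the computation, assuming as in Example 1.1 a sample of n = 100 items and a conjectured 5% defective rate, using only the Python standard library:

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), summed term by term."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability of 10 or more defectives in a sample of 100
# when the true process defective rate is 5%.
p_value = binom_tail(100, 0.05, 10)
print(p_value)  # ≈ 0.0282, the P-value quoted in the text
```

The binomial distribution itself is developed in Chapter 5; this sketch only previews where such a probability comes from.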
We have given two examples where the elements of probability provide a sum-
mary that the scientist or engineer can use as evidence on which to build a decision.
The bridge between the data and the conclusion is, of course, based on foundations
of statistical inference, distribution theory, and sampling distributions discussed in
future chapters.
1.2 Sampling Procedures; Collection of Data
In Section 1.1 we discussed very briefly the notion of sampling and the sampling
process. While sampling appears to be a simple concept, the complexity of the
questions that must be answered about the population or populations necessitates
that the sampling process be very complex at times. While the notion of sampling
is discussed in a technical way in Chapter 8, we shall endeavor here to give some
common-sense notions of sampling. This is a natural transition to a discussion of
the concept of variability.
Simple Random Sampling
The importance of proper sampling revolves around the degree of confidence with
which the analyst is able to answer the questions being asked. Let us assume that
only a single population exists in the problem. Recall that in Example 1.2 two
populations were involved. Simple random sampling implies that any particular
sample of a specified sample size has the same chance of being selected as any
other sample of the same size. The term sample size simply means the number of
elements in the sample. Obviously, a table of random numbers can be utilized in
sample selection in many instances. The virtue of simple random sampling is that
it aids in the elimination of the problem of having the sample reflect a different
(possibly more confined) population than the one about which inferences need to be
made. For example, a sample is to be chosen to answer certain questions regarding
political preferences in a certain state in the United States. The sample involves
the choice of, say, 1000 families, and a survey is to be conducted. Now, suppose it
turns out that random sampling is not used. Rather, all or nearly all of the 1000
families chosen live in an urban setting. It is believed that political preferences
in rural areas differ from those in urban areas. In other words, the sample drawn
actually confined the population and thus the inferences need to be confined to the
“limited population,” and in this case confining may be undesirable. If, indeed,
the inferences need to be made about the state as a whole, the sample of size 1000
described here is often referred to as a biased sample.
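Simple random sampling of the kind described can be sketched with the standard library's `random.sample`, which selects every subset of the requested size with equal probability. The population of family IDs here is hypothetical:

```python
import random

# Hypothetical sampling frame: IDs for every family in the state.
population = list(range(10_000))

random.seed(1)  # fixed seed so the sketch is reproducible
# Every possible set of 1000 families is equally likely to be chosen.
sample = random.sample(population, k=1000)

print(len(sample), len(set(sample)))  # 1000 1000 (distinct families)
```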
As we hinted earlier, simple random sampling is not always appropriate. Which
alternative approach is used depends on the complexity of the problem. Often, for
example, the sampling units are not homogeneous and naturally divide themselves
into nonoverlapping groups that are homogeneous. These groups are called strata,
and a procedure called stratified random sampling involves random selection of a
sample within each stratum. The purpose is to be sure that each of the strata
is neither over- nor underrepresented. For example, suppose a sample survey is
conducted in order to gather preliminary opinions regarding a bond referendum
that is being considered in a certain city. The city is subdivided into several ethnic
groups which represent natural strata. In order not to disregard or overrepresent
any group, separate random samples of families could be chosen from each group.
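Stratified random sampling with proportional allocation can be sketched as follows; the stratum names and sizes are hypothetical, chosen only to illustrate the idea of sampling within each stratum:

```python
import random

# Hypothetical strata (e.g., natural groups within the city) with
# their unit lists; allocation is proportional to stratum size so
# no stratum is over- or underrepresented.
strata = {
    "group_A": list(range(6000)),
    "group_B": list(range(3000)),
    "group_C": list(range(1000)),
}
total = sum(len(units) for units in strata.values())
sample_size = 100

random.seed(2)
stratified_sample = {
    name: random.sample(units, k=round(sample_size * len(units) / total))
    for name, units in strata.items()
}
print({name: len(s) for name, s in stratified_sample.items()})
# {'group_A': 60, 'group_B': 30, 'group_C': 10}
```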
Experimental Design
The concept of randomness or random assignment plays a huge role in the area of
experimental design, which was introduced very briefly in Section 1.1 and is an
important staple in almost any area of engineering or experimental science. This
will be discussed at length in Chapters 13 through 15. However, it is instructive to
give a brief presentation here in the context of random sampling. A set of so-called
treatments or treatment combinations becomes the populations to be studied
or compared in some sense. An example is the nitrogen versus no-nitrogen treat-
ments in Example 1.2. Another simple example would be “placebo” versus “active
drug,” or in a corrosion fatigue study we might have treatment combinations that
involve specimens that are coated or uncoated as well as conditions of low or high
humidity to which the specimens are exposed. In fact, there are four treatment
or factor combinations (i.e., 4 populations), and many scientific questions may be
asked and answered through statistical and inferential methods. Consider first the
situation in Example 1.2. There are 20 diseased seedlings involved in the exper-
iment. It is easy to see from the data themselves that the seedlings are different
from each other. Within the nitrogen group (or the no-nitrogen group) there is
considerable variability in the stem weights. This variability is due to what is
generally called the experimental unit. This is a very important concept in in-
ferential statistics, in fact one whose description will not end in this chapter. The
nature of the variability is very important. If it is too large, stemming from a
condition of excessive nonhomogeneity in experimental units, the variability will
“wash out” any detectable difference between the two populations. Recall that in
this case that did not occur.
The dot plot in Figure 1.1 and P-value indicated a clear distinction between
these two conditions. What role do those experimental units play in the data-
taking process itself? The common-sense and, indeed, quite standard approach is
to assign the 20 seedlings or experimental units randomly to the two treat-
ments or conditions. In the drug study, we may decide to use a total of 200
available patients, patients that clearly will be different in some sense. They are
the experimental units. However, they all may have the same chronic condition
for which the drug is a potential treatment. Then in a so-called completely ran-
domized design, 100 patients are assigned randomly to the placebo and 100 to
the active drug. Again, it is these experimental units within a group or treatment
that produce the variability in data results (i.e., variability in the measured result),
say blood pressure, or whatever drug efficacy value is important. In the corrosion
fatigue study, the experimental units are the specimens that are the subjects of
the corrosion.
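The random-assignment step of a completely randomized design can be sketched for the 20 seedlings of Example 1.2; the seedling labels 1 through 20 are arbitrary:

```python
import random

# Twenty experimental units (seedlings), assigned at random:
# ten to the nitrogen treatment and ten to no-nitrogen.
seedlings = list(range(1, 21))

random.seed(3)  # fixed seed so the sketch is reproducible
random.shuffle(seedlings)
nitrogen, no_nitrogen = seedlings[:10], seedlings[10:]

print(sorted(nitrogen))
print(sorted(no_nitrogen))
```

The same shuffle-and-split idea assigns the 200 patients of the drug study, 100 to placebo and 100 to active drug.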
Why Assign Experimental Units Randomly?
What is the possible negative impact of not randomly assigning experimental units
to the treatments or treatment combinations? This is seen most clearly in the
case of the drug study. Among the characteristics of the patients that produce
variability in the results are age, gender, and weight. Suppose merely by chance
the placebo group contains a sample of people that are predominately heavier than
those in the treatment group. Perhaps heavier individuals have a tendency to have
a higher blood pressure. This clearly biases the result, and indeed, any result
obtained through the application of statistical inference may have little to do with
the drug and more to do with differences in weights among the two samples of
patients.
We should emphasize the attachment of importance to the term variability.
Excessive variability among experimental units “camouflages” scientific findings.
In future sections, we attempt to characterize and quantify measures of variability.
In sections that follow, we introduce and discuss specific quantities that can be
computed in samples; the quantities give a sense of the nature of the sample with
respect to center of location of the data and variability in the data. A discussion
of several of these single-number measures serves to provide a preview of what
statistical information will be important components of the statistical methods
that are used in future chapters. These measures that help characterize the nature
of the data set fall into the category of descriptive statistics. This material is
a prelude to a brief presentation of pictorial and graphical methods that go even
further in characterization of the data set. The reader should understand that the
statistical methods illustrated here will be used throughout the text. In order to
offer the reader a clearer picture of what is involved in experimental design studies,
we offer Example 1.3.
Example 1.3: A corrosion study was made in order to determine whether coating an aluminum
metal with a corrosion retardation substance reduced the amount of corrosion.
The coating is a protectant that is advertised to minimize fatigue damage in this
type of material. Also of interest is the influence of humidity on the amount of
corrosion. A corrosion measurement can be expressed in thousands of cycles to
failure. Two levels of coating, no coating and chemical corrosion coating, were
used. In addition, the two relative humidity levels are 20% relative humidity and
80% relative humidity.
The experiment involves four treatment combinations that are listed in the table
that follows. There are eight experimental units used, and they are aluminum
specimens prepared; two are assigned randomly to each of the four treatment
combinations. The data are presented in Table 1.2.
The corrosion data are averages of two specimens. A plot of the averages is
pictured in Figure 1.3. A relatively large value of cycles to failure represents a
small amount of corrosion. As one might expect, an increase in humidity appears
to make the corrosion worse. The use of the chemical corrosion coating procedure
appears to reduce corrosion.
In this experimental design illustration, the engineer has systematically selected
the four treatment combinations. In order to connect this situation to concepts
to which the reader has been exposed to this point, it should be assumed that the
Table 1.2: Data for Example 1.3
                                          Average Corrosion in
Coating              Humidity    Thousands of Cycles to Failure
Uncoated               20%                   975
                       80%                   350
Chemical Corrosion     20%                  1750
                       80%                  1550
Figure 1.3: Corrosion results for Example 1.3 (average corrosion, in thousands of
cycles to failure, plotted against humidity for the uncoated and chemically coated
specimens).
conditions representing the four treatment combinations are four separate popula-
tions and that the two corrosion values observed for each population are important
pieces of information. The importance of the average in capturing and summariz-
ing certain features in the population will be highlighted in Section 1.3. While we
might draw conclusions about the role of humidity and the impact of coating the
specimens from the figure, we cannot truly evaluate the results from an analyti-
cal point of view without taking into account the variability around the average.
Again, as we indicated earlier, if the two corrosion values for each treatment com-
bination are close together, the picture in Figure 1.3 may be an accurate depiction.
But if each corrosion value in the figure is an average of two values that are widely
dispersed, then this variability may, indeed, truly “wash away” any information
that appears to come through when one observes averages only. The foregoing
example illustrates these concepts:
(1) random assignment of treatment combinations (coating, humidity) to experi-
mental units (specimens)
(2) the use of sample averages (average corrosion values) in summarizing sample
information
(3) the need for consideration of measures of variability in the analysis of any
sample or sets of samples
This example suggests the need for what follows in Sections 1.3 and 1.4, namely,
descriptive statistics that indicate measures of center of location in a set of data,
and those that measure variability.
1.3 Measures of Location: The Sample Mean and Median
Measures of location are designed to provide the analyst with quantitative
values indicating where the center, or some other location, of the data lies. In Example
1.2, it appears as if the center of the nitrogen sample clearly exceeds that of the
no-nitrogen sample. One obvious and very useful measure is the sample mean.
The mean is simply a numerical average.
Definition 1.1: Suppose that the observations in a sample are x1, x2, . . . , xn. The sample mean,
denoted by x̄, is

x̄ = (1/n) ∑_{i=1}^{n} xi = (x1 + x2 + · · · + xn)/n.
There are other measures of central tendency that are discussed in detail in
future chapters. One important measure is the sample median. The purpose of
the sample median is to reflect the central tendency of the sample in such a way
that it is uninfluenced by extreme values or outliers.
Definition 1.2: Given that the observations in a sample are x1, x2, . . . , xn, arranged in increasing
order of magnitude, the sample median is

x̃ = x(n+1)/2 if n is odd, and x̃ = (xn/2 + xn/2+1)/2 if n is even.
As an example, suppose the data set is the following: 1.7, 2.2, 3.9, 3.11, and
14.7. The sample mean and median are, respectively,
x̄ = 5.12, x̃ = 3.9.
Clearly, the mean is influenced considerably by the presence of the extreme obser-
vation, 14.7, whereas the median places emphasis on the true “center” of the data
set. In the case of the two-sample data set of Example 1.2, the two measures of
central tendency for the individual samples are
x̄ (no nitrogen) = 0.399 gram,
x̃ (no nitrogen) = (0.38 + 0.42)/2 = 0.400 gram,
x̄ (nitrogen) = 0.565 gram,
x̃ (nitrogen) = (0.49 + 0.52)/2 = 0.505 gram.
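These values can be checked directly from the Table 1.1 data; a minimal sketch using Python's `statistics` module:

```python
from statistics import mean, median

# Stem weights (grams) from Table 1.1.
no_nitrogen = [0.32, 0.53, 0.28, 0.37, 0.47, 0.43, 0.36, 0.42, 0.38, 0.43]
nitrogen    = [0.26, 0.43, 0.47, 0.49, 0.52, 0.75, 0.79, 0.86, 0.62, 0.46]

# median() sorts internally and averages the two middle values (n = 10 is even).
print(round(mean(no_nitrogen), 3), round(median(no_nitrogen), 3))  # 0.399 0.4
print(round(mean(nitrogen), 3), round(median(nitrogen), 3))        # 0.565 0.505
```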
Clearly there is a difference in concept between the mean and median. It may
be of interest to the reader with an engineering background that the sample mean
is the centroid of the data in a sample. In a sense, it is the point at which a
fulcrum can be placed to balance a system of “weights” which are the locations of
the individual data. This is shown in Figure 1.4 with regard to the with-nitrogen
sample.
Figure 1.4: Sample mean (x̄ = 0.565) as a centroid of the with-nitrogen stem weight data.
In future chapters, the basis for the computation of x̄ is that of an estimate
of the population mean. As we indicated earlier, the purpose of statistical infer-
ence is to draw conclusions about population characteristics or parameters and
estimation is a very important feature of statistical inference.
The median and mean can be quite different from each other. Note, however,
that in the case of the stem weight data the sample mean value for no-nitrogen is
quite similar to the median value.
Other Measures of Location
There are several other methods of quantifying the center of location of the data
in the sample. We will not deal with them at this point. For the most part,
alternatives to the sample mean are designed to produce values that represent
compromises between the mean and the median. Rarely do we make use of these
other measures. However, it is instructive to discuss one class of estimators, namely
the class of trimmed means. A trimmed mean is computed by “trimming away”
a certain percent of both the largest and the smallest set of values. For example,
the 10% trimmed mean is found by eliminating the largest 10% and smallest 10%
and computing the average of the remaining values. For example, in the case of
the stem weight data, we would eliminate the largest and smallest since the sample
size is 10 for each sample. So for the without-nitrogen group the 10% trimmed
mean is given by
x̄tr(10) = (0.32 + 0.37 + 0.47 + 0.43 + 0.36 + 0.42 + 0.38 + 0.43)/8 = 0.39750,
and for the 10% trimmed mean for the with-nitrogen group we have
x̄tr(10) = (0.43 + 0.47 + 0.49 + 0.52 + 0.75 + 0.79 + 0.62 + 0.46)/8 = 0.56625.
Note that in this case, as expected, the trimmed means are close to both the mean
and the median for the individual samples. The trimmed mean is, of course, more
insensitive to outliers than the sample mean but not as insensitive as the median.
On the other hand, the trimmed mean approach makes use of more information
than the sample median. Note that the sample median is, indeed, a special case of
the trimmed mean in which all of the sample data are eliminated apart from the
middle one or two observations.
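A sketch of the trimmed-mean computation for the Table 1.1 data; the `trimmed_mean` helper below is our own illustration, not a library routine:

```python
def trimmed_mean(data, trim_fraction=0.10):
    """Drop the largest and smallest trim_fraction of the sorted data,
    then average what remains (a sketch of the idea in the text)."""
    xs = sorted(data)
    k = int(len(xs) * trim_fraction)   # observations trimmed from each end
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

# Stem weights (grams) from Table 1.1; with n = 10, the 10% trimmed
# mean drops exactly one value from each end.
no_nitrogen = [0.32, 0.53, 0.28, 0.37, 0.47, 0.43, 0.36, 0.42, 0.38, 0.43]
nitrogen    = [0.26, 0.43, 0.47, 0.49, 0.52, 0.75, 0.79, 0.86, 0.62, 0.46]

print(round(trimmed_mean(no_nitrogen), 5))  # 0.3975
print(round(trimmed_mean(nitrogen), 5))     # 0.56625
```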
Exercises
1.1 The following measurements were recorded for
the drying time, in hours, of a certain brand of latex
paint.
3.4 2.5 4.8 2.9 3.6
2.8 3.3 5.6 3.7 2.8
4.4 4.0 5.2 3.0 4.8
Assume that the measurements are a simple random
sample.
(a) What is the sample size for the above sample?
(b) Calculate the sample mean for these data.
(c) Calculate the sample median.
(d) Plot the data by way of a dot plot.
(e) Compute the 20% trimmed mean for the above
data set.
(f) Is the sample mean for these data more or less de-
scriptive as a center of location than the trimmed
mean?
1.2 According to the journal Chemical Engineering,
an important property of a fiber is its water ab-
sorbency. A random sample of 20 pieces of cotton fiber
was taken and the absorbency on each piece was mea-
sured. The following are the absorbency values:
18.71 21.41 20.72 21.81 19.29 22.43 20.17
23.71 19.44 20.50 18.92 20.33 23.00 22.85
19.25 21.77 22.11 19.77 18.04 21.12
(a) Calculate the sample mean and median for the
above sample values.
(b) Compute the 10% trimmed mean.
(c) Do a dot plot of the absorbency data.
(d) Using only the values of the mean, median, and
trimmed mean, do you have evidence of outliers in
the data?
1.3 A certain polymer is used for evacuation systems
for aircraft. It is important that the polymer be re-
sistant to the aging process. Twenty specimens of the
polymer were used in an experiment. Ten were as-
signed randomly to be exposed to an accelerated batch
aging process that involved exposure to high tempera-
tures for 10 days. Measurements of tensile strength of
the specimens were made, and the following data were
recorded on tensile strength in psi:
No aging: 227 222 218 217 225
218 216 229 228 221
Aging: 219 214 215 211 209
218 203 204 201 205
(a) Do a dot plot of the data.
(b) From your plot, does it appear as if the aging pro-
cess has had an effect on the tensile strength of this
polymer? Explain.
(c) Calculate the sample mean tensile strength of the
two samples.
(d) Calculate the median for both. Discuss the simi-
larity or lack of similarity between the mean and
median of each group.
1.4 In a study conducted by the Department of Me-
chanical Engineering at Virginia Tech, the steel rods
supplied by two different companies were compared.
Ten sample springs were made out of the steel rods
supplied by each company, and a measure of flexibility
was recorded for each. The data are as follows:
Company A: 9.3 8.8 6.8 8.7 8.5
6.7 8.0 6.5 9.2 7.0
Company B: 11.0 9.8 9.9 10.2 10.1
9.7 11.0 11.1 10.2 9.6
(a) Calculate the sample mean and median for the data
for the two companies.
(b) Plot the data for the two companies on the same
line and give your impression regarding any appar-
ent differences between the two companies.
1.5 Twenty adult males between the ages of 30 and
40 participated in a study to evaluate the effect of a
specific health regimen involving diet and exercise on
the blood cholesterol. Ten were randomly selected to
be a control group, and ten others were assigned to
take part in the regimen as the treatment group for a
period of 6 months. The following data show the re-
duction in cholesterol experienced for the time period
for the 20 subjects:
Control group: 7 3 −4 14 2
5 22 −7 9 5
Treatment group: −6 5 9 4 4
12 37 5 3 3
(a) Do a dot plot of the data for both groups on the
same graph.
(b) Compute the mean, median, and 10% trimmed
mean for both groups.
(c) Explain why the difference in means suggests one
conclusion about the effect of the regimen, while
the difference in medians or trimmed means sug-
gests a different conclusion.
1.6 The tensile strength of silicone rubber is thought
to be a function of curing temperature. A study was
carried out in which samples of 12 specimens of the rub-
ber were prepared using curing temperatures of 20◦C
and 45◦C. The data below show the tensile strength
values in megapascals.
20◦C: 2.07 2.14 2.22 2.03 2.21 2.03
      2.05 2.18 2.09 2.14 2.11 2.02
45◦C: 2.52 2.15 2.49 2.03 2.37 2.05
      1.99 2.42 2.08 2.42 2.29 2.01
(a) Show a dot plot of the data with both low and high
temperature tensile strength values.
(b) Compute sample mean tensile strength for both
samples.
(c) Does it appear as if curing temperature has an
influence on tensile strength, based on the plot?
Comment further.
(d) Does anything else appear to be influenced by an
increase in curing temperature? Explain.
1.4 Measures of Variability
Sample variability plays an important role in data analysis. Process and product
variability is a fact of life in engineering and scientific systems: The control or
reduction of process variability is often a source of major difficulty. More and
more process engineers and managers are learning that product quality and, as
a result, profits derived from manufactured products are very much a function
of process variability. As a result, much of Chapters 9 through 15 deals with
data analysis and modeling procedures in which sample variability plays a major
role. Even in small data analysis problems, the success of a particular statistical
method may depend on the magnitude of the variability among the observations in
the sample. Measures of location in a sample do not provide a proper summary of
the nature of a data set. For instance, in Example 1.2 we cannot conclude that the
use of nitrogen enhances growth without taking sample variability into account.
While the details of the analysis of this type of data set are deferred to Chap-
ter 9, it should be clear from Figure 1.1 that variability among the no-nitrogen
observations and variability among the nitrogen observations are certainly of some
consequence. In fact, it appears that the variability within the nitrogen sample
is larger than that of the no-nitrogen sample. Perhaps there is something about
the inclusion of nitrogen that not only increases the stem height (x̄ of 0.565 gram
compared to an x̄ of 0.399 gram for the no-nitrogen sample) but also increases the
variability in stem height (i.e., renders the stem height more inconsistent).
As another example, contrast the two data sets below. Each contains two
samples and the difference in the means is roughly the same for the two samples, but
data set B seems to provide a much sharper contrast between the two populations
from which the samples were taken. If the purpose of such an experiment is to
detect differences between the two populations, the task is accomplished in the case
of data set B. However, in data set A the large variability within the two samples
creates difficulty. In fact, it is not clear that there is a distinction between the two
populations.
Data set A: X X X X X X 0 X X 0 0 X X X 0 0 0 0 0 0 0 0
Data set B: X X X X X X X X X X X 0 0 0 0 0 0 0 0 0 0 0
Sample Range and Sample Standard Deviation
Just as there are many measures of central tendency or location, there are many
measures of spread or variability. Perhaps the simplest one is the sample range
Xmax − Xmin. The range can be very useful and is discussed at length in Chapter
17 on statistical quality control. The sample measure of spread that is used most
often is the sample standard deviation. We again let x1, x2, . . . , xn denote
sample values.
Definition 1.3: The sample variance, denoted by s², is given by

s² = ∑_{i=1}^{n} (xi − x̄)² / (n − 1).

The sample standard deviation, denoted by s, is the positive square root of s², that is,

s = √s².
It should be clear to the reader that the sample standard deviation is, in fact,
a measure of variability. Large variability in a data set produces relatively large
values of (xi − x̄)² and thus a large sample variance. The quantity n − 1 is often
called the degrees of freedom associated with the variance estimate. In this
simple example, the degrees of freedom depict the number of independent pieces
of information available for computing variability. For example, suppose that we
wish to compute the sample variance and standard deviation of the data set (5,
17, 6, 4). The sample average is x̄ = 8. The computation of the variance involves
(5 − 8)² + (17 − 8)² + (6 − 8)² + (4 − 8)² = (−3)² + 9² + (−2)² + (−4)².
The quantities inside parentheses sum to zero. In general, ∑_{i=1}^{n} (xi − x̄) = 0 (see
Exercise 1.16 on page 31). Then the computation of a sample variance does not
involve n independent squared deviations from the mean x̄. In fact, since the
last value of xi − x̄ is determined by the initial n − 1 of them, we say that these
are n − 1 “pieces of information” that produce s². Thus, there are n − 1 degrees
of freedom rather than n degrees of freedom for computing a sample variance.
Example 1.4: In an example discussed extensively in Chapter 10, an engineer is interested in
testing the “bias” in a pH meter. Data are collected on the meter by measuring
the pH of a neutral substance (pH = 7.0). A sample of size 10 is taken, with results
given by
7.07 7.00 7.10 6.97 7.00 7.03 7.01 7.01 6.98 7.08.
The sample mean x̄ is given by

x̄ = (7.07 + 7.00 + 7.10 + · · · + 7.08)/10 = 7.0250.
The sample variance s² is given by

s² = (1/9)[(7.07 − 7.025)² + (7.00 − 7.025)² + (7.10 − 7.025)²
        + · · · + (7.08 − 7.025)²] = 0.001939.

As a result, the sample standard deviation is given by

s = √0.001939 = 0.044.
So the sample standard deviation is 0.0440 with n − 1 = 9 degrees of freedom.
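Definition 1.3 and the computation above can be checked directly; a minimal Python sketch, which also verifies that the deviations from x̄ sum to zero:

```python
from math import sqrt

def sample_variance(data):
    """s² with the n − 1 (degrees of freedom) divisor of Definition 1.3."""
    n = len(data)
    xbar = sum(data) / n
    return sum((x - xbar) ** 2 for x in data) / (n - 1)

# pH meter readings from Example 1.4.
ph = [7.07, 7.00, 7.10, 6.97, 7.00, 7.03, 7.01, 7.01, 6.98, 7.08]

s2 = sample_variance(ph)
print(round(s2, 6), round(sqrt(s2), 4))  # 0.001939 0.044

# The deviations themselves always sum to zero, which is why only
# n − 1 of them carry independent information.
xbar = sum(ph) / len(ph)
print(round(abs(sum(x - xbar for x in ph)), 10))  # 0.0
```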
Units for Standard Deviation and Variance
It should be apparent from Definition 1.3 that the variance is a measure of the
average squared deviation from the mean x̄. We use the term average squared
deviation even though the definition makes use of a division by degrees of freedom
n − 1 rather than n. Of course, if n is large, the difference in the denominator
is inconsequential. As a result, the sample variance possesses units that are the
square of the units in the observed data whereas the sample standard deviation
is found in linear units. As an example, consider the data of Example 1.2. The
stem weights are measured in grams. As a result, the sample standard deviations
are in grams and the variances are measured in grams². In fact, the individual
standard deviations are 0.0728 gram for the no-nitrogen case and 0.1867 gram for
the nitrogen group. Note that the standard deviation does indicate considerably
larger variability in the nitrogen sample. This condition was displayed in Figure
1.1.
Which Variability Measure Is More Important?
As we indicated earlier, the sample range has applications in the area of statistical
quality control. It may appear to the reader that the use of both the sample
variance and the sample standard deviation is redundant. Both measures reflect the
same concept in measuring variability, but the sample standard deviation measures
variability in linear units whereas the sample variance is measured in squared
units. Both play huge roles in the use of statistical methods. Much of what is
accomplished in the context of statistical inference involves drawing conclusions
about characteristics of populations. Among these characteristics are constants
which are called population parameters. Two important parameters are the
population mean and the population variance. The sample variance plays an
explicit role in the statistical methods used to draw inferences about the population
variance. The sample standard deviation has an important role along with the
sample mean in inferences that are made about the population mean. In general,
the variance is considered more in inferential theory, while the standard deviation
is used more in applications.
Exercises
1.7 Consider the drying time data for Exercise 1.1
on page 13. Compute the sample variance and sample
standard deviation.
1.8 Compute the sample variance and standard devi-
ation for the water absorbency data of Exercise 1.2 on
page 13.
1.9 Exercise 1.3 on page 13 showed tensile strength
data for two samples, one in which specimens were ex-
posed to an aging process and one in which there was
no aging of the specimens.
(a) Calculate the sample variance as well as standard
deviation in tensile strength for both samples.
(b) Does there appear to be any evidence that aging
affects the variability in tensile strength? (See also
the plot for Exercise 1.3 on page 13.)
1.10 For the data of Exercise 1.4 on page 13, com-
pute both the mean and the variance in “flexibility”
for both company A and company B. Does there ap-
pear to be a difference in flexibility between company
A and company B?
1.11 Consider the data in Exercise 1.5 on page 13.
Compute the sample variance and the sample standard
deviation for both control and treatment groups.
1.12 For Exercise 1.6 on page 13, compute the sample
standard deviation in tensile strength for the samples
separately for the two temperatures. Does it appear as
if an increase in temperature influences the variability
in tensile strength? Explain.
1.5 Discrete and Continuous Data
Statistical inference through the analysis of observational studies or designed ex-
periments is used in many scientific areas. The data gathered may be discrete
or continuous, depending on the area of application. For example, a chemical
engineer may be interested in conducting an experiment that will lead to condi-
tions where yield is maximized. Here, of course, the yield may be in percent or
grams/pound, measured on a continuum. On the other hand, a toxicologist con-
ducting a combination drug experiment may encounter data that are binary in
nature (i.e., the patient either responds or does not).
Important distinctions are made between discrete and continuous data in the probability theory that allows us to draw statistical inferences. Often, applications of
statistical inference are found when the data are count data. For example, an en-
gineer may be interested in studying the number of radioactive particles passing
through a counter in, say, 1 millisecond. Personnel responsible for the efficiency
of a port facility may be interested in the properties of the number of oil tankers
arriving each day at a certain port city. In Chapter 5, several distinct scenarios,
leading to different ways of handling data, are discussed for situations with count
data.
Special attention should be paid, even at this early stage of the textbook, to some details associated with binary data. Applications requiring statistical analysis of
binary data are voluminous. Often the measure that is used in the analysis is
the sample proportion. Obviously the binary situation involves two categories.
If there are n units involved in the data and x is defined as the number that
fall into category 1, then n − x fall into category 2. Thus, x/n is the sample
proportion in category 1, and 1 − x/n is the sample proportion in category 2. In
the biomedical application, 50 patients may represent the sample units, and if 20
out of 50 experienced an improvement in a stomach ailment (common to all 50)
after all were given the drug, then 20/50 = 0.4 is the sample proportion for which the drug was a success and 1 − 0.4 = 0.6 is the sample proportion for which the
drug was not successful. Actually the basic numerical measurement for binary
data is generally denoted by either 0 or 1. For example, in our medical example,
a successful result is denoted by a 1 and a nonsuccess a 0. As a result, the sample
proportion is actually a sample mean of the ones and zeros. For the successful
category,
(x1 + x2 + · · · + x50)/50 = (1 + 1 + 0 + · · · + 0 + 1)/50 = 20/50 = 0.4.
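This 0/1 coding is easy to verify numerically. In the short sketch below (illustrative; the individual patient outcomes are hypothetical, arranged so that 20 of the 50 are successes), the sample proportion falls out as the ordinary sample mean of the zeros and ones:

```python
# Hypothetical 0/1 outcomes for n = 50 patients: 20 successes, 30 nonsuccesses.
outcomes = [1] * 20 + [0] * 30

n = len(outcomes)
x = sum(outcomes)        # number falling in category 1 (successes)
p_hat = x / n            # sample proportion = sample mean of the 0s and 1s
print(p_hat, 1 - p_hat)  # 0.4 0.6
```

The same arithmetic applies to any binary sample: x/n for category 1 and 1 − x/n for category 2.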
What Kinds of Problems Are Solved in Binary Data Situations?
The kinds of problems facing scientists and engineers dealing in binary data are not greatly unlike those seen where continuous measurements are of interest.
However, different techniques are used since the statistical properties of sample
proportions are quite different from those of the sample means that result from
averages taken from continuous populations. Consider the example data in Ex-
ercise 1.6 on page 13. The statistical problem underlying this illustration focuses
on whether an intervention, say, an increase in curing temperature, will alter the
population mean tensile strength associated with the silicone rubber process. On
the other hand, in a quality control area, suppose an automobile tire manufacturer
reports that a shipment of 5000 tires selected randomly from the process results
in 100 of them showing blemishes. Here the sample proportion is 100/5000 = 0.02.
Following a change in the process designed to reduce blemishes, a second sample of
5000 is taken and 90 tires are blemished. The sample proportion has been reduced
to 90/5000 = 0.018. The question arises, “Is the decrease in the sample proportion
from 0.02 to 0.018 substantial enough to suggest a real improvement in the pop-
ulation proportion?” Both of these illustrations require the use of the statistical
properties of sample averages—one from samples from a continuous population,
and the other from samples from a discrete (binary) population. In both cases,
the sample mean is an estimate of a population parameter, a population mean
in the first illustration (i.e., mean tensile strength), and a population proportion
in the second case (i.e., proportion of blemished tires in the population). So here
we have sample estimates used to draw scientific conclusions regarding population
parameters. As we indicated in Section 1.3, this is the general theme in many
practical problems using statistical inference.
1.6 Statistical Modeling, Scientific Inspection, and Graphical
Diagnostics
Often the end result of a statistical analysis is the estimation of parameters of a
postulated model. This is natural for scientists and engineers since they often
deal in modeling. A statistical model is not deterministic but, rather, must entail
some probabilistic aspects. A model form is often the foundation of assumptions
that are made by the analyst. For example, in Example 1.2 the scientist may wish
to draw some level of distinction between the nitrogen and no-nitrogen populations
through the sample information. The analysis may require a certain model for
the data, for example, that the two samples come from normal or Gaussian
distributions. See Chapter 6 for a discussion of the normal distribution.
Obviously, the user of statistical methods cannot generate sufficient informa-
tion or experimental data to characterize the population totally. But sets of data
are often used to learn about certain properties of the population. Scientists and
engineers are accustomed to dealing with data sets. The importance of character-
izing or summarizing the nature of collections of data should be obvious. Often a
summary of a collection of data via a graphical display can provide insight regard-
ing the system from which the data were taken. For instance, in Sections 1.1 and
1.3, we have shown dot plots.
In this section, the role of sampling and the display of data in enhancing statistical inference are explored. We introduce some simple but often effective displays that complement the study of statistical populations.
Scatter Plot
At times the model postulated may take on a somewhat complicated form. Con-
sider, for example, a textile manufacturer who designs an experiment in which cloth specimens that contain various percentages of cotton are produced. Consider the
data in Table 1.3.
Table 1.3: Tensile Strength
Cotton Percentage Tensile Strength
15 7, 7, 9, 8, 10
20 19, 20, 21, 20, 22
25 21, 21, 17, 19, 20
30 8, 7, 8, 9, 10
Five cloth specimens are manufactured for each of the four cotton percentages.
In this case, both the model for the experiment and the type of analysis used
should take into account the goal of the experiment and important input from
the textile scientist. Some simple graphics can shed important light on the clear
distinction between the samples. See Figure 1.5; the sample means and variability
are depicted nicely in the scatter plot. One possible goal of this experiment is
simply to determine which cotton percentages are truly distinct from the others.
In other words, as in the case of the nitrogen/no-nitrogen data, for which cotton
percentages are there clear distinctions between the populations or, more specifi-
cally, between the population means? In this case, perhaps a reasonable model is
that each sample comes from a normal distribution. Here the goal is very much
like that of the nitrogen/no-nitrogen data except that more samples are involved.
The formalism of the analysis involves notions of hypothesis testing discussed in
Chapter 10. Incidentally, this formality is perhaps not necessary in light of the
diagnostic plot. But does this describe the real goal of the experiment and hence
the proper approach to data analysis? It is likely that the scientist anticipates
the existence of a maximum population mean tensile strength in the range of cot-
ton concentration in the experiment. Here the analysis of the data should revolve
around a different type of model, one that postulates a type of structure relating
the population mean tensile strength to the cotton concentration. In other words,
a model may be written
μt,c = β0 + β1C + β2C²,
where μt,c is the population mean tensile strength, which varies with the amount
of cotton in the product C. The implication of this model is that for a fixed cotton
level, there is a population of tensile strength measurements and the population
mean is μt,c. This type of model, called a regression model, is discussed in
Chapters 11 and 12. The functional form is chosen by the scientist. At times
the data analysis may suggest that the model be changed. Then the data analyst
“entertains” a model that may be altered after some analysis is done. The use
of an empirical model is accompanied by estimation theory, where β0, β1, and
β2 are estimated by the data. Further, statistical inference can then be used to
determine model adequacy.
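To make the estimation step concrete, the sketch below fits the quadratic model to the Table 1.3 data by ordinary least squares, solving the normal equations directly in pure Python (a minimal illustration; in practice regression software would be used, and the fitting method itself is the subject of Chapters 11 and 12). The fitted parabola opens downward and peaks near 22.5% cotton, inside the experimental range.

```python
def fit_quadratic(x, y):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations."""
    s = [sum(xi ** k for xi in x) for k in range(5)]  # power sums; s[0] = n
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    b = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    # Solve the 3x3 system A * beta = b by Gaussian elimination with pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [arc - f * aic for arc, aic in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * 3
    for i in (2, 1, 0):  # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]
    return beta

# Table 1.3: five tensile strengths at each cotton percentage.
cotton = [15] * 5 + [20] * 5 + [25] * 5 + [30] * 5
strength = [7, 7, 9, 8, 10, 19, 20, 21, 20, 22,
            21, 21, 17, 19, 20, 8, 7, 8, 9, 10]

b0, b1, b2 = fit_quadratic(cotton, strength)
c_max = -b1 / (2 * b2)  # vertex of the fitted parabola
```

Here `c_max` estimates the cotton percentage at which the mean tensile strength is maximized, under the assumed quadratic model.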
Figure 1.5: Scatter plot of tensile strength (vertical axis) versus cotton percentage (horizontal axis).
Two points become evident from the two data illustrations here: (1) The type
of model used to describe the data often depends on the goal of the experiment;
and (2) the structure of the model should take advantage of nonstatistical scientific
input. A selection of a model represents a fundamental assumption upon which
the resulting statistical inference is based. It will become apparent throughout the
book how important graphics can be. Often, plots can illustrate information that
allows the results of the formal statistical inference to be better communicated to
the scientist or engineer. At times, plots or exploratory data analysis can teach
the analyst something not retrieved from the formal analysis. Almost any formal
analysis requires assumptions that evolve from the model of the data. Graphics can
nicely highlight violation of assumptions that would otherwise go unnoticed.
Throughout the book, graphics are used extensively to supplement formal data
analysis. The following sections reveal some graphical tools that are useful in
exploratory or descriptive data analysis.
Stem-and-Leaf Plot
Statistical data, generated in large masses, can be very useful for studying the
behavior of the distribution if presented in a combined tabular and graphic display
called a stem-and-leaf plot.
To illustrate the construction of a stem-and-leaf plot, consider the data of Table
1.4, which specifies the “life” of 40 similar car batteries recorded to the nearest tenth
of a year. The batteries are guaranteed to last 3 years. First, split each observation
into two parts consisting of a stem and a leaf such that the stem represents the
digit preceding the decimal and the leaf corresponds to the decimal part of the
number. In other words, for the number 3.7, the digit 3 is designated the stem and
the digit 7 is the leaf. The four stems 1, 2, 3, and 4 for our data are listed vertically
on the left side in Table 1.5; the leaves are recorded on the right side opposite the
appropriate stem value. Thus, the leaf 6 of the number 1.6 is recorded opposite
the stem 1; the leaf 5 of the number 2.5 is recorded opposite the stem 2; and so
forth. The number of leaves recorded opposite each stem is summarized under the
frequency column.
Table 1.4: Car Battery Life
2.2 4.1 3.5 4.5 3.2 3.7 3.0 2.6
3.4 1.6 3.1 3.3 3.8 3.1 4.7 3.7
2.5 4.3 3.4 3.6 2.9 3.3 3.9 3.1
3.3 3.1 3.7 4.4 3.2 4.1 1.9 3.4
4.7 3.8 3.2 2.6 3.9 3.0 4.2 3.5
Table 1.5: Stem-and-Leaf Plot of Battery Life

Stem   Leaf                        Frequency
1      69                          2
2      25669                       5
3      0011112223334445567778899   25
4      11234577                    8
The stem-and-leaf plot of Table 1.5 contains only four stems and consequently
does not provide an adequate picture of the distribution. To remedy this problem,
we need to increase the number of stems in our plot. One simple way to accomplish
this is to write each stem value twice and then record the leaves 0, 1, 2, 3, and 4
opposite the appropriate stem value where it appears for the first time, and the
leaves 5, 6, 7, 8, and 9 opposite this same stem value where it appears for the second
time. This modified double-stem-and-leaf plot is illustrated in Table 1.6, where the
stems corresponding to leaves 0 through 4 have been coded by the symbol ⋆ and the stems corresponding to leaves 5 through 9 by the symbol ·.
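The splitting rule just described is mechanical enough to automate. The following Python sketch (an illustration, not part of the text) rebuilds the stems and leaves of Table 1.5 from the Table 1.4 battery-life data:

```python
from collections import defaultdict

# Table 1.4: life of 40 car batteries, to the nearest tenth of a year.
battery_life = [
    2.2, 4.1, 3.5, 4.5, 3.2, 3.7, 3.0, 2.6,
    3.4, 1.6, 3.1, 3.3, 3.8, 3.1, 4.7, 3.7,
    2.5, 4.3, 3.4, 3.6, 2.9, 3.3, 3.9, 3.1,
    3.3, 3.1, 3.7, 4.4, 3.2, 4.1, 1.9, 3.4,
    4.7, 3.8, 3.2, 2.6, 3.9, 3.0, 4.2, 3.5,
]

def stem_and_leaf(data):
    """Stem = units digit, leaf = tenths digit (data recorded to 0.1)."""
    plot = defaultdict(list)
    for x in data:
        stem, leaf = divmod(round(x * 10), 10)
        plot[stem].append(leaf)
    return {stem: sorted(leaves) for stem, leaves in sorted(plot.items())}

for stem, leaves in stem_and_leaf(battery_life).items():
    print(stem, "".join(map(str, leaves)), len(leaves))
```

The printed stems, leaves, and frequencies match Table 1.5 (e.g., stem 1 with leaves 69 and frequency 2).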
In any given problem, we must decide on the appropriate stem values. This
decision is made somewhat arbitrarily, although we are guided by the size of our
sample. Usually, we choose between 5 and 20 stems. The smaller the number of
data available, the smaller is our choice for the number of stems. For example, if
the data consist of numbers from 1 to 21 representing the number of people in a
cafeteria line on 40 randomly selected workdays and we choose a double-stem-and-
leaf plot, the stems will be 0⋆, 0·, 1⋆, 1·, and 2⋆ so that the smallest observation 1 has stem 0⋆ and leaf 1, the number 18 has stem 1· and leaf 8, and the largest observation 21 has stem 2⋆ and leaf 1. On the other hand, if the data consist of
numbers from $18,800 to $19,600 representing the best possible deals on 100 new
automobiles from a certain dealership and we choose a single-stem-and-leaf plot,
the stems will be 188, 189, 190, . . . , 196 and the leaves will now each contain two
digits. A car that sold for $19,385 would have a stem value of 193 and the two-digit
leaf 85. Multiple-digit leaves belonging to the same stem are usually separated by
commas in the stem-and-leaf plot. Decimal points in the data are generally ignored
when all the digits to the right of the decimal represent the leaf. Such was the
case in Tables 1.5 and 1.6. However, if the data consist of numbers ranging from
21.8 to 74.9, we might choose the digits 2, 3, 4, 5, 6, and 7 as our stems so that a
number such as 48.3 would have a stem value of 4 and a leaf of 8.3.
Table 1.6: Double-Stem-and-Leaf Plot of Battery Life

Stem   Leaf              Frequency
1·     69                2
2⋆     2                 1
2·     5669              4
3⋆     001111222333444   15
3·     5567778899        10
4⋆     11234             5
4·     577               3
The stem-and-leaf plot represents an effective way to summarize data. Another way is the frequency distribution, in which the data are grouped into different classes or intervals. Such a distribution can be constructed by counting the leaves belonging to each stem and noting that each stem defines a class interval. In Table
1.5, the stem 1 with 2 leaves defines the interval 1.0–1.9 containing 2 observations;
the stem 2 with 5 leaves defines the interval 2.0–2.9 containing 5 observations; the
stem 3 with 25 leaves defines the interval 3.0–3.9 with 25 observations; and the
stem 4 with 8 leaves defines the interval 4.0–4.9 containing 8 observations. For the
double-stem-and-leaf plot of Table 1.6, the stems define the seven class intervals
1.5–1.9, 2.0–2.4, 2.5–2.9, 3.0–3.4, 3.5–3.9, 4.0–4.4, and 4.5–4.9 with frequencies 2,
1, 4, 15, 10, 5, and 3, respectively.
Histogram
Dividing each class frequency by the total number of observations, we obtain the
proportion of the set of observations in each of the classes. A table listing relative
frequencies is called a relative frequency distribution. The relative frequency
distribution for the data of Table 1.4, showing the midpoint of each class interval,
is given in Table 1.7.
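The grouping in Table 1.7 can be reproduced directly: count the observations falling in each class interval and divide by n = 40. A short, illustrative Python sketch:

```python
# Table 1.4: life of 40 car batteries, to the nearest tenth of a year.
battery_life = [
    2.2, 4.1, 3.5, 4.5, 3.2, 3.7, 3.0, 2.6,
    3.4, 1.6, 3.1, 3.3, 3.8, 3.1, 4.7, 3.7,
    2.5, 4.3, 3.4, 3.6, 2.9, 3.3, 3.9, 3.1,
    3.3, 3.1, 3.7, 4.4, 3.2, 4.1, 1.9, 3.4,
    4.7, 3.8, 3.2, 2.6, 3.9, 3.0, 4.2, 3.5,
]

# Class intervals of Table 1.7.
classes = [(1.5, 1.9), (2.0, 2.4), (2.5, 2.9), (3.0, 3.4),
           (3.5, 3.9), (4.0, 4.4), (4.5, 4.9)]

n = len(battery_life)
for lo, hi in classes:
    f = sum(1 for x in battery_life if lo <= x <= hi)
    print(f"{lo}-{hi}  midpoint {(lo + hi) / 2:.1f}  f = {f:2d}  rel. freq. = {f / n:.3f}")
```

The printed frequencies (2, 1, 4, 15, 10, 5, 3) and relative frequencies (0.050, 0.025, 0.100, 0.375, 0.250, 0.125, 0.075) agree with Table 1.7.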
The information provided by a relative frequency distribution in tabular form is
easier to grasp if presented graphically. Using the midpoint of each interval and the
Table 1.7: Relative Frequency Distribution of Battery Life
Class Interval   Class Midpoint   Frequency, f   Relative Frequency
1.5–1.9 1.7 2 0.050
2.0–2.4 2.2 1 0.025
2.5–2.9 2.7 4 0.100
3.0–3.4 3.2 15 0.375
3.5–3.9 3.7 10 0.250
4.0–4.4 4.2 5 0.125
4.5–4.9 4.7 3 0.075
Figure 1.6: Relative frequency histogram (relative frequency versus battery life in years).
corresponding relative frequency, we construct a relative frequency histogram
(Figure 1.6).
Many continuous frequency distributions can be represented graphically by the
characteristic bell-shaped curve of Figure 1.7. Graphical tools such as what we see
in Figures 1.6 and 1.7 aid in the characterization of the nature of the population. In
Chapters 5 and 6 we discuss a property of the population called its distribution.
While a more rigorous definition of a distribution or probability distribution
will be given later in the text, at this point one can view it as what would be seen
in Figure 1.7 in the limit as the size of the sample becomes larger.
A distribution is said to be symmetric if it can be folded along a vertical axis
so that the two sides coincide. A distribution that lacks symmetry with respect to
a vertical axis is said to be skewed. The distribution illustrated in Figure 1.8(a)
is said to be skewed to the right since it has a long right tail and a much shorter
left tail. In Figure 1.8(b) we see that the distribution is symmetric, while in Figure
1.8(c) it is skewed to the left.
If we rotate a stem-and-leaf plot counterclockwise through an angle of 90°, we observe that the resulting columns of leaves form a picture that is similar
to a histogram. Consequently, if our primary purpose in looking at the data is to
determine the general shape or form of the distribution, it will seldom be necessary
Figure 1.7: Estimating frequency distribution (f(x) versus battery life in years).
Figure 1.8: Skewness of data: (a) skewed to the right, (b) symmetric, (c) skewed to the left.
to construct a relative frequency histogram.
Box-and-Whisker Plot or Box Plot
Another display that is helpful for reflecting properties of a sample is the box-
and-whisker plot. This plot encloses the interquartile range of the data in a box
that has the median displayed within. The interquartile range has as its extremes
the 75th percentile (upper quartile) and the 25th percentile (lower quartile). In
addition to the box, “whiskers” extend, showing extreme observations in the sam-
ple. For reasonably large samples, the display shows center of location, variability,
and the degree of asymmetry.
In addition, a variation called a box plot can provide the viewer with infor-
mation regarding which observations may be outliers. Outliers are observations
that are considered to be unusually far from the bulk of the data. There are many
statistical tests that are designed to detect outliers. Technically, one may view
an outlier as being an observation that represents a “rare event” (there is a small
probability of obtaining a value that far from the bulk of the data). The concept
of outliers resurfaces in Chapter 12 in the context of regression analysis.
The visual information in the box-and-whisker plot or box plot is not intended
to be a formal test for outliers. Rather, it is viewed as a diagnostic tool. While the
determination of which observations are outliers varies with the type of software
that is used, one common procedure is to use a multiple of the interquartile
range. For example, if the distance from the box exceeds 1.5 times the interquartile
range (in either direction), the observation may be labeled an outlier.
Example 1.5: Nicotine content was measured in a random sample of 40 cigarettes. The data are
displayed in Table 1.8.
Table 1.8: Nicotine Data for Example 1.5
1.09 1.92 2.31 1.79 2.28 1.74 1.47 1.97
0.85 1.24 1.58 2.03 1.70 2.17 2.55 2.11
1.86 1.90 1.68 1.51 1.64 0.72 1.69 1.85
1.82 1.79 2.46 1.88 2.08 1.67 1.37 1.93
1.40 1.64 2.09 1.75 1.63 2.37 1.75 1.69
Figure 1.9: Box-and-whisker plot for Example 1.5 (nicotine content).
Figure 1.9 shows the box-and-whisker plot of the data, depicting the observa-
tions 0.72 and 0.85 as mild outliers in the lower tail, whereas the observation 2.55
is a mild outlier in the upper tail. In this example, the interquartile range is 0.365,
and 1.5 times the interquartile range is 0.5475. Figure 1.10, on the other hand,
provides a stem-and-leaf plot.
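The outlier labeling in Example 1.5 can be reproduced with the 1.5 × interquartile-range rule. In the sketch below (illustrative; software packages differ in their quartile conventions), the quartiles are taken as the medians of the lower and upper halves of the sorted sample, which reproduces the interquartile range of 0.365 quoted above:

```python
# Table 1.8: nicotine content of 40 cigarettes.
nicotine = [
    1.09, 1.92, 2.31, 1.79, 2.28, 1.74, 1.47, 1.97,
    0.85, 1.24, 1.58, 2.03, 1.70, 2.17, 2.55, 2.11,
    1.86, 1.90, 1.68, 1.51, 1.64, 0.72, 1.69, 1.85,
    1.82, 1.79, 2.46, 1.88, 2.08, 1.67, 1.37, 1.93,
    1.40, 1.64, 2.09, 1.75, 1.63, 2.37, 1.75, 1.69,
]

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

xs = sorted(nicotine)
n = len(xs)
q1 = median(xs[: n // 2])        # lower quartile: median of lower half
q3 = median(xs[(n + 1) // 2:])   # upper quartile: median of upper half
iqr = q3 - q1                    # 2.000 - 1.635 = 0.365
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in xs if x < low or x > high]
print(outliers)                  # [0.72, 0.85, 2.55]
```

The three flagged observations agree with those labeled as mild outliers in Figure 1.9.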
Example 1.6: Consider the data in Table 1.9, consisting of 30 samples measuring the thickness of
paint can “ears” (see the work by Hogg and Ledolter, 1992, in the Bibliography).
Figure 1.11 depicts a box-and-whisker plot for this asymmetric set of data. Notice
that the left block is considerably larger than the block on the right. The median
is 35. The lower quartile is 31, while the upper quartile is 36. Notice also that the
extreme observation on the right is farther away from the box than the extreme
observation on the left. There are no outliers in this data set.
The decimal point is 1 digit(s) to the left of the |
7 | 2
8 | 5
9 |
10 | 9
11 |
12 | 4
13 | 7
14 | 07
15 | 18
16 | 3447899
17 | 045599
18 | 2568
19 | 0237
20 | 389
21 | 17
22 | 8
23 | 17
24 | 6
25 | 5
Figure 1.10: Stem-and-leaf plot for the nicotine data.
Table 1.9: Data for Example 1.6
Sample Measurements Sample Measurements
1 29 36 39 34 34 16 35 30 35 29 37
2 29 29 28 32 31 17 40 31 38 35 31
3 34 34 39 38 37 18 35 36 30 33 32
4 35 37 33 38 41 19 35 34 35 30 36
5 30 29 31 38 29 20 35 35 31 38 36
6 34 31 37 39 36 21 32 36 36 32 36
7 30 35 33 40 36 22 36 37 32 34 34
8 28 28 31 34 30 23 29 34 33 37 35
9 32 36 38 38 35 24 36 36 35 37 37
10 35 30 37 35 31 25 36 30 35 33 31
11 35 30 35 38 35 26 35 30 29 38 35
12 38 34 35 35 31 27 35 36 30 34 36
13 34 35 33 30 34 28 35 30 36 29 35
14 40 35 34 33 35 29 38 36 35 31 31
15 34 35 38 35 30 30 30 34 40 28 30
There are additional ways that box-and-whisker plots and other graphical dis-
plays can aid the analyst. Multiple samples can be compared graphically. Plots of
data can suggest relationships between variables. Graphs can aid in the detection
of anomalies or outlying observations in samples.
There are other types of graphical tools and plots that are used. These are
discussed in Chapter 8 after we introduce additional theoretical details.
Figure 1.11: Box-and-whisker plot for thickness of paint can “ears.”
Other Distinguishing Features of a Sample
There are features of the distribution or sample other than measures of center
of location and variability that further define its nature. For example, while the
median divides the data (or distribution) into two parts, there are other measures
that divide parts or pieces of the distribution that can be very useful. Separation
is made into four parts by quartiles, with the third quartile separating the upper
quarter of the data from the rest, the second quartile being the median, and the first
quartile separating the lower quarter of the data from the rest. The distribution can
be even more finely divided by computing percentiles of the distribution. These
quantities give the analyst a sense of the so-called tails of the distribution (i.e.,
values that are relatively extreme, either small or large). For example, the 95th
percentile separates the highest 5% from the bottom 95%. Similar definitions
prevail for extremes on the lower side or lower tail of the distribution. The 1st
percentile separates the bottom 1% from the rest of the distribution. The concept
of percentiles will play a major role in much that will be covered in future chapters.
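As a small illustration of the idea, a percentile can be computed by sorting the data and indexing; the sketch below uses the simple nearest-rank convention (software packages often interpolate instead), on illustrative data:

```python
import math

def percentile(data, p):
    """p-th percentile by the nearest-rank rule: the smallest value with at
    least p percent of the sample at or below it."""
    xs = sorted(data)
    rank = math.ceil(p * len(xs) / 100)  # 1-based rank
    return xs[max(rank, 1) - 1]

scores = list(range(1, 101))  # hypothetical data: 1, 2, ..., 100
print(percentile(scores, 95))  # 95: separates the highest 5% from the rest
print(percentile(scores, 50))  # 50: the median (second quartile)
```

With this convention the 95th percentile of the illustrative data is 95, leaving the top 5% (96 through 100) above it.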
1.7 General Types of Statistical Studies: Designed
Experiment, Observational Study, and Retrospective Study
In the foregoing sections we have emphasized the notion of sampling from a pop-
ulation and the use of statistical methods to learn or perhaps affirm important
information about the population. The information sought and learned through
the use of these statistical methods can often be influential in decision making and
problem solving in many important scientific and engineering areas. As an illustra-
tion, Example 1.3 describes a simple experiment in which the results may provide
an aid in determining the kinds of conditions under which it is not advisable to use
a particular aluminum alloy that may have a dangerous vulnerability to corrosion.
The results may be of use not only to those who produce the alloy, but also to the
customer who may consider using it. This illustration, as well as many more that
appear in Chapters 13 through 15, highlights the concept of designing or control-
ling experimental conditions (combinations of coating conditions and humidity) of
interest to learn about some characteristic or measurement (level of corrosion) that
results from these conditions. Statistical methods that make use of measures of
central tendency in the corrosion measure, as well as measures of variability, are
employed. As the reader will observe later in the text, these methods often lead to
a statistical model like that discussed in Section 1.6. In this case, the model may be
used to estimate (or predict) the corrosion measure as a function of humidity and
the type of coating employed. Again, in developing this kind of model, descriptive
statistics that highlight central tendency and variability become very useful.
The information supplied in Example 1.3 illustrates nicely the types of engi-
neering questions asked and answered by the use of statistical methods that are
employed through a designed experiment and presented in this text. They are
(i) What is the nature of the impact of relative humidity on the corrosion of the
aluminum alloy within the range of relative humidity in this experiment?
(ii) Does the chemical corrosion coating reduce corrosion levels and can the effect
be quantified in some fashion?
(iii) Is there interaction between coating type and relative humidity that impacts
their influence on corrosion of the alloy? If so, what is its interpretation?
What Is Interaction?
The importance of questions (i) and (ii) should be clear to the reader, as they
deal with issues important to both producers and users of the alloy. But what
about question (iii)? The concept of interaction will be discussed at length in
Chapters 14 and 15. Consider the plot in Figure 1.3. This is an illustration of
the detection of interaction between two factors in a simple designed experiment.
Note that the lines connecting the sample means are not parallel. Parallelism
would have indicated that the effect (seen as a result of the slope of the lines)
of relative humidity is the same, namely a negative effect, for both an uncoated
condition and the chemical corrosion coating. Recall that the negative slope implies
that corrosion becomes more pronounced as humidity rises. Lack of parallelism
implies an interaction between coating type and relative humidity. The nearly
“flat” line for the corrosion coating as opposed to a steeper slope for the uncoated
condition suggests that not only is the chemical corrosion coating beneficial (note
the displacement between the lines), but the presence of the coating renders the
effect of humidity negligible. Clearly all these questions are very important to the
effect of the two individual factors and to the interpretation of the interaction, if
it is present.
Statistical models are extremely useful in answering questions such as those
listed in (i), (ii), and (iii), where the data come from a designed experiment. But
one does not always have the luxury or resources to employ a designed experiment.
For example, there are many instances in which the conditions of interest to the
scientist or engineer cannot be implemented simply because the important factors
cannot be controlled. In Example 1.3, the relative humidity and coating type (or
lack of coating) are quite easy to control. This of course is the defining feature of
a designed experiment. In many fields, factors that need to be studied cannot be
controlled for any one of various reasons. Tight control as in Example 1.3 allows the
analyst to be confident that any differences found (for example, in corrosion levels)
are due to the factors under control. As a second illustration, consider Exercise
1.6 on page 13. Suppose in this case 24 specimens of silicone rubber are selected
and 12 assigned to each of the curing temperature levels. The temperatures are
controlled carefully, and thus this is an example of a designed experiment with a
single factor being curing temperature. Differences found in the mean tensile
strength would be assumed to be attributed to the different curing temperatures.
What If Factors Are Not Controlled?
Suppose there are no factors controlled and no random assignment of fixed treat-
ments to experimental units and yet there is a need to glean information from a
data set. As an illustration, consider a study in which interest centers around the
relationship between blood cholesterol levels and the amount of sodium measured
in the blood. A group of individuals were monitored over time for both blood
cholesterol and sodium. Certainly some useful information can be gathered from
such a data set. However, it should be clear that there certainly can be no strict
control of blood sodium levels. Ideally, the subjects should be divided randomly
into two groups, with one group assigned a specific high level of blood sodium and
the other a specific low level of blood sodium. Obviously this cannot be done.
Clearly changes in cholesterol can be experienced because of changes in one of
a number of other factors that were not controlled. This kind of study, without
factor control, is called an observational study. Much of the time it involves a
situation in which subjects are observed across time.
Biological and biomedical studies are often by necessity observational studies.
However, observational studies are not confined to those areas. For example, con-
sider a study that is designed to determine the influence of ambient temperature on
the electric power consumed by a chemical plant. Clearly, levels of ambient temper-
ature cannot be controlled, and thus the data structure can only be a monitoring
of the data from the plant over time.
It should be apparent that the striking difference between a well-designed
experiment and an observational study is the difficulty of determining true cause
and effect with the latter. Also, differences found in the fundamental response
(e.g., corrosion levels, blood cholesterol, plant electric power consumption) may
be due to other underlying factors that were not controlled. Ideally, in a designed
experiment the nuisance factors would be equalized via the randomization process.
Certainly changes in blood cholesterol could be due to fat intake, exercise activity,
and so on. Electric power consumption could be affected by the amount of product
produced or even the purity of the product produced.
Another often ignored disadvantage of an observational study, compared with a
carefully designed experiment, is that the former is at the mercy of nature and of
environmental or other uncontrolled circumstances that affect the ranges of the
factors of interest. For example, in the biomedical study
regarding the influence of blood sodium levels on blood cholesterol, it is possible
that there is indeed a strong influence but the particular data set used did not
involve enough observed variation in sodium levels because of the nature of the
subjects chosen. Of course, in a designed experiment, the analyst chooses and
controls ranges of factors.
30 Chapter 1 Introduction to Statistics and Data Analysis
A third type of statistical study which can be very useful but has clear dis-
advantages when compared to a designed experiment is a retrospective study.
This type of study uses strictly historical data, data taken over a specific period
of time. One obvious advantage of retrospective data is that there is reduced cost
in collecting the data. However, as one might expect, there are clear disadvantages.
(i) Validity and reliability of historical data are often in doubt.
(ii) If time is an important aspect of the structure of the data, there may be data
missing.
(iii) There may be errors in collection of the data that are not known.
(iv) Again, as in the case of observational data, there is no control on the ranges
of the measured variables (the factors in a study). Indeed, the ranges found
in historical data may not be relevant for current studies.
In Section 1.6, some attention was given to modeling of relationships among vari-
ables. We introduced the notion of regression analysis, which is covered in Chapters
11 and 12 and is illustrated as a form of data analysis for designed experiments
discussed in Chapters 14 and 15. In Section 1.6, a model relating population mean
tensile strength of cloth to percentages of cotton was used for illustration, where
20 specimens of cloth represented the experimental units. In that case, the data
came from a simple designed experiment where the individual cotton percentages
were selected by the scientist.
Often both observational data and retrospective data are used for the purpose
of observing relationships among variables through model-building procedures dis-
cussed in Chapters 11 and 12. While the advantages of designed experiments
certainly apply when the goal is statistical model building, there are many areas
in which designing of experiments is not possible. Thus, observational or historical
data must be used. We refer here to a historical data set that is found in Exercise
12.5 on page 450. The goal is to build a model that will result in an equation
or relationship that relates monthly electric power consumed to average ambient
temperature x1, the number of days in the month x2, the average product purity
x3, and the tons of product produced x4. The data are the past year’s historical
data.
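A model of this kind is fit by least squares, as developed in Chapters 11 and 12. As a minimal sketch (in Python, with made-up numbers, not the historical data of Exercise 12.5), here is a one-predictor fit of power consumed against average ambient temperature:

```python
# Fit a straight line y = b0 + b1*x by least squares.  The (x, y)
# pairs below are hypothetical illustrative values: x = average
# ambient temperature, y = monthly electric power consumed.
def least_squares_line(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sxy / sxx           # slope
    b0 = ybar - b1 * xbar    # intercept
    return b0, b1

temps = [30.0, 40.0, 50.0, 60.0, 70.0]
power = [250.0, 262.0, 278.0, 288.0, 302.0]
b0, b1 = least_squares_line(temps, power)
print(round(b0, 2), round(b1, 3))  # intercept ≈ 211, slope ≈ 1.3
```

With several predictors (x1 through x4 above), the same least-squares idea extends to the multiple regression models of Chapter 12.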
Exercises
1.13 A manufacturer of electronic components is in-
terested in determining the lifetime of a certain type
of battery. A sample, in hours of life, is as follows:
123, 116, 122, 110, 175, 126, 125, 111, 118, 117.
(a) Find the sample mean and median.
(b) What feature in this data set is responsible for the
substantial difference between the two?
1.14 A tire manufacturer wants to determine the in-
ner diameter of a certain grade of tire. Ideally, the
diameter would be 570 mm. The data are as follows:
572, 572, 573, 568, 569, 575, 565, 570.
(a) Find the sample mean and median.
(b) Find the sample variance, standard deviation, and
range.
(c) Using the calculated statistics in parts (a) and (b),
can you comment on the quality of the tires?
1.15 Five independent coin tosses result in
HHHHH. It turns out that if the coin is fair the
probability of this outcome is (1/2)⁵ = 0.03125. Does
this produce strong evidence that the coin is not fair?
Comment and use the concept of P-value discussed in
Section 1.1.
1.16 Show that the n pieces of information in
∑ⁿᵢ₌₁ (xᵢ − x̄)² are not independent; that is, show that
∑ⁿᵢ₌₁ (xᵢ − x̄) = 0.
1.17 A study of the effects of smoking on sleep pat-
terns is conducted. The measure observed is the time,
in minutes, that it takes to fall asleep. These data are
obtained:
Smokers: 69.3 56.0 22.1 47.6
53.2 48.1 52.7 34.4
60.2 43.8 23.2 13.8
Nonsmokers: 28.6 25.1 26.4 34.9
29.8 28.4 38.5 30.2
30.6 31.8 41.6 21.1
36.0 37.9 13.9
(a) Find the sample mean for each group.
(b) Find the sample standard deviation for each group.
(c) Make a dot plot of the two data sets (smokers and
nonsmokers) on the same line.
(d) Comment on what kind of impact smoking appears
to have on the time required to fall asleep.
1.18 The following scores represent the final exami-
nation grades for an elementary statistics course:
23 60 79 32 57 74 52 70 82
36 80 77 81 95 41 65 92 85
55 76 52 10 64 75 78 25 80
98 81 67 41 71 83 54 64 72
88 62 74 43 60 78 89 76 84
48 84 90 15 79 34 67 17 82
69 74 63 80 85 61
(a) Construct a stem-and-leaf plot for the examination
grades in which the stems are 1, 2, 3, . . . , 9.
(b) Construct a relative frequency histogram, draw an
estimate of the graph of the distribution, and dis-
cuss the skewness of the distribution.
(c) Compute the sample mean, sample median, and
sample standard deviation.
1.19 The following data represent the length of life in
years, measured to the nearest tenth, of 30 similar fuel
pumps:
2.0 3.0 0.3 3.3 1.3 0.4
0.2 6.0 5.5 6.5 0.2 2.3
1.5 4.0 5.9 1.8 4.7 0.7
4.5 0.3 1.5 0.5 2.5 5.0
1.0 6.0 5.6 6.0 1.2 0.2
(a) Construct a stem-and-leaf plot for the life in years
of the fuel pumps, using the digit to the left of the
decimal point as the stem for each observation.
(b) Set up a relative frequency distribution.
(c) Compute the sample mean, sample range, and sam-
ple standard deviation.
1.20 The following data represent the length of life,
in seconds, of 50 fruit flies subject to a new spray in a
controlled laboratory experiment:
17 20 10 9 23 13 12 19 18 24
12 14 6 9 13 6 7 10 13 7
16 18 8 13 3 32 9 7 10 11
13 7 18 7 10 4 27 19 16 8
7 10 5 14 15 10 9 6 7 15
(a) Construct a double-stem-and-leaf plot for the life
span of the fruit flies using the stems 0⋆, 0·, 1⋆, 1·,
2⋆, 2·, and 3⋆ such that stems coded by the symbols
⋆ and · are associated, respectively, with leaves 0
through 4 and 5 through 9.
(b) Set up a relative frequency distribution.
(c) Construct a relative frequency histogram.
(d) Find the median.
1.21 The lengths of power failures, in minutes, are
recorded in the following table.
22 18 135 15 90 78 69 98 102
83 55 28 121 120 13 22 124 112
70 66 74 89 103 24 21 112 21
40 98 87 132 115 21 28 43 37
50 96 118 158 74 78 83 93 95
(a) Find the sample mean and sample median of the
power-failure times.
(b) Find the sample standard deviation of the power-
failure times.
1.22 The following data are the measures of the di-
ameters of 36 rivet heads in 1/100 of an inch.
6.72 6.77 6.82 6.70 6.78 6.70 6.62 6.75
6.66 6.66 6.64 6.76 6.73 6.80 6.72 6.76
6.76 6.68 6.66 6.62 6.72 6.76 6.70 6.78
6.76 6.67 6.70 6.72 6.74 6.81 6.79 6.78
6.66 6.76 6.76 6.72
(a) Compute the sample mean and sample standard
deviation.
(b) Construct a relative frequency histogram of the
data.
(c) Comment on whether or not there is any clear in-
dication that the sample came from a population
that has a bell-shaped distribution.
1.23 The hydrocarbon emissions at idling speed in
parts per million (ppm) for automobiles of 1980 and
1990 model years are given for 20 randomly selected
cars.
1980 models:
141 359 247 940 882 494 306 210 105 880
200 223 188 940 241 190 300 435 241 380
1990 models:
140 160 20 20 223 60 20 95 360 70
220 400 217 58 235 380 200 175 85 65
(a) Construct a dot plot as in Figure 1.1.
(b) Compute the sample means for the two years and
superimpose the two means on the plots.
(c) Comment on what the dot plot indicates regarding
whether or not the population emissions changed
from 1980 to 1990. Use the concept of variability
in your comments.
1.24 The following are historical data on staff salaries
(dollars per pupil) for 30 schools sampled in the eastern
part of the United States in the early 1970s.
3.79 2.99 2.77 2.91 3.10 1.84 2.52 3.22
2.45 2.14 2.67 2.52 2.71 2.75 3.57 3.85
3.36 2.05 2.89 2.83 3.13 2.44 2.10 3.71
3.14 3.54 2.37 2.68 3.51 3.37
(a) Compute the sample mean and sample standard
deviation.
(b) Construct a relative frequency histogram of the
data.
(c) Construct a stem-and-leaf display of the data.
1.25 The following data set is related to that in Ex-
ercise 1.24. It gives the percentages of the families that
are in the upper income level, for the same individual
schools in the same order as in Exercise 1.24.
72.2 31.9 26.5 29.1 27.3 8.6 22.3 26.5
20.4 12.8 25.1 19.2 24.1 58.2 68.1 89.2
55.1 9.4 14.5 13.9 20.7 17.9 8.5 55.4
38.1 54.2 21.5 26.2 59.1 43.3
(a) Calculate the sample mean.
(b) Calculate the sample median.
(c) Construct a relative frequency histogram of the
data.
(d) Compute the 10% trimmed mean. Compare with
the results in (a) and (b) and comment.
1.26 Suppose it is of interest to use the data sets in
Exercises 1.24 and 1.25 to derive a model that would
predict staff salaries as a function of percentage of fam-
ilies in a high income level for current school systems.
Comment on any disadvantage in carrying out this type
of analysis.
1.27 A study is done to determine the influence of
the wear, y, of a bearing as a function of the load, x,
on the bearing. A designed experiment is used for this
study. Three levels of load were used, 700 lb, 1000 lb,
and 1300 lb. Four specimens were used at each level,
and the sample means were, respectively, 210, 325, and
375.
(a) Plot average wear against load.
(b) From the plot in (a), does it appear as if a relation-
ship exists between wear and load?
(c) Suppose we look at the individual wear values for
each of the four specimens at each load level (see
the data that follow). Plot the wear results for all
specimens against the three load values.
(d) From your plot in (c), does it appear as if a clear
relationship exists? If your answer is different from
that in (b), explain why.
         x
     700   1000   1300
y1   145    250    150
y2   105    195    180
y3   260    375    420
y4   330    480    750
ȳ    210    325    375
1.28 Many manufacturing companies in the United
States and abroad use molded parts as components of
a process. Shrinkage is often a major problem. Thus, a
molded die for a part is built larger than nominal size
to allow for part shrinkage. In an injection molding
study it is known that the shrinkage is influenced by
many factors, among which are the injection velocity
in ft/sec and mold temperature in °C. The following
two data sets show the results of a designed experiment
in which injection velocity was held at two levels (low
and high) and mold temperature was held constant at
a low level. The shrinkage is measured in cm × 10⁴.
Shrinkage values at low injection velocity:
72.68 72.62 72.58 72.48 73.07
72.55 72.42 72.84 72.58 72.92
Shrinkage values at high injection velocity:
71.62 71.68 71.74 71.48 71.55
71.52 71.71 71.56 71.70 71.50
(a) Construct a dot plot of both data sets on the same
graph. Indicate on the plot both shrinkage means,
that for low injection velocity and high injection
velocity.
(b) Based on the graphical results in (a), using the lo-
cation of the two means and your sense of variabil-
ity, what do you conclude regarding the effect of
injection velocity on shrinkage at low mold tem-
perature?
1.29 Use the data in Exercise 1.24 to construct a box
plot.
1.30 Below are the lifetimes, in hours, of fifty 40-watt,
110-volt internally frosted incandescent lamps, taken
from forced life tests:
919 1196 785 1126 936 918
1156 920 948 1067 1092 1162
1170 929 950 905 972 1035
1045 855 1195 1195 1340 1122
938 970 1237 956 1102 1157
978 832 1009 1157 1151 1009
765 958 902 1022 1333 811
1217 1085 896 958 1311 1037
702 923
Construct a box plot for these data.
1.31 Consider the situation of Exercise 1.28. But now
use the following data set, in which shrinkage is mea-
sured once again at low injection velocity and high in-
jection velocity. However, this time the mold temper-
ature is raised to a high level and held constant.
Shrinkage values at low injection velocity:
76.20 76.09 75.98 76.15 76.17
75.94 76.12 76.18 76.25 75.82
Shrinkage values at high injection velocity:
93.25 93.19 92.87 93.29 93.37
92.98 93.47 93.75 93.89 91.62
(a) As in Exercise 1.28, construct a dot plot with both
data sets on the same graph and identify both
means (i.e., mean shrinkage for low injection ve-
locity and for high injection velocity).
(b) As in Exercise 1.28, comment on the influence of
injection velocity on shrinkage for high mold tem-
perature. Take into account the position of the two
means and the variability around each mean.
(c) Compare your conclusion in (b) with that in (b)
of Exercise 1.28 in which mold temperature was
held at a low level. Would you say that there is
an interaction between injection velocity and mold
temperature? Explain.
1.32 Use the results of Exercises 1.28 and 1.31 to cre-
ate a plot that illustrates the interaction evident from
the data. Use the plot in Figure 1.3 in Example 1.3 as
a guide. Could the type of information found in Exer-
cises 1.28 and 1.31 have been found in an observational
study in which there was no control on injection veloc-
ity and mold temperature by the analyst? Explain why
or why not.
1.33 Group Project: Collect the shoe size of every-
one in the class. Use the sample means and variances
and the types of plots presented in this chapter to sum-
marize any features that draw a distinction between the
distributions of shoe sizes for males and females. Do
the same for the height of everyone in the class.
Chapter 2
Probability
2.1 Sample Space
In the study of statistics, we are concerned basically with the presentation and
interpretation of chance outcomes that occur in a planned study or scientific
investigation. For example, we may record the number of accidents that occur
monthly at the intersection of Driftwood Lane and Royal Oak Drive, hoping to
justify the installation of a traffic light; we might classify items coming off an as-
sembly line as “defective” or “nondefective”; or we may be interested in the volume
of gas released in a chemical reaction when the concentration of an acid is varied.
Hence, the statistician is often dealing with either numerical data, representing
counts or measurements, or categorical data, which can be classified according
to some criterion.
We shall refer to any recording of information, whether it be numerical or
categorical, as an observation. Thus, the numbers 2, 0, 1, and 2, representing
the number of accidents that occurred for each month from January through April
during the past year at the intersection of Driftwood Lane and Royal Oak Drive,
constitute a set of observations. Similarly, the categorical data N, D, N, N, and
D, representing the items found to be defective or nondefective when five items are
inspected, are recorded as observations.
Statisticians use the word experiment to describe any process that generates
a set of data. A simple example of a statistical experiment is the tossing of a coin.
In this experiment, there are only two possible outcomes, heads or tails. Another
experiment might be the launching of a missile and observing of its velocity at
specified times. The opinions of voters concerning a new sales tax can also be
considered as observations of an experiment. We are particularly interested in the
observations obtained by repeating the experiment several times. In most cases, the
outcomes will depend on chance and, therefore, cannot be predicted with certainty.
If a chemist runs an analysis several times under the same conditions, he or she will
obtain different measurements, indicating an element of chance in the experimental
procedure. Even when a coin is tossed repeatedly, we cannot be certain that a given
toss will result in a head. However, we know the entire set of possibilities for each
toss.
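The role of chance in a repeated experiment is easy to see by simulation. A small Python sketch (the seed and the number of repetitions are arbitrary choices): each toss is unpredictable, yet every outcome lies in the known set {H, T}, and the proportion of heads settles near 1/2 for a fair coin:

```python
import random

# Simulate repeating the coin-toss experiment many times.  Each
# repetition has an unpredictable outcome, but the set of possible
# outcomes {H, T} is known in advance.
random.seed(1)
tosses = [random.choice("HT") for _ in range(10_000)]
prop_heads = tosses.count("H") / len(tosses)
print(prop_heads)  # close to 0.5 for a fair coin
```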
Given the discussion in Section 1.7, we should deal with the breadth of the term
experiment. Three types of statistical studies were reviewed, and several examples
were given of each. In each of the three cases, designed experiments, observational
studies, and retrospective studies, the end result was a set of data that of course is
subject to uncertainty. Though only one of these has the word experiment in its
description, the process of generating the data or the process of observing the data
is part of an experiment. The corrosion study discussed in Section 1.2 certainly
involves an experiment, with measures of corrosion representing the data. The ex-
ample given in Section 1.7 in which blood cholesterol and sodium were observed on
a group of individuals represented an observational study (as opposed to a designed
experiment), and yet the process generated data and the outcome is subject to un-
certainty. Thus, it is an experiment. A third example in Section 1.7 represented
a retrospective study in which historical data on monthly electric power consump-
tion and average monthly ambient temperature were observed. Even though the
data may have been in the files for decades, the process is still referred to as an
experiment.
Definition 2.1: The set of all possible outcomes of a statistical experiment is called the sample
space and is represented by the symbol S.
Each outcome in a sample space is called an element or a member of the
sample space, or simply a sample point. If the sample space has a finite number
of elements, we may list the members separated by commas and enclosed in braces.
Thus, the sample space S, of possible outcomes when a coin is flipped, may be
written
S = {H, T},
where H and T correspond to heads and tails, respectively.
Example 2.1: Consider the experiment of tossing a die. If we are interested in the number that
shows on the top face, the sample space is
S1 = {1, 2, 3, 4, 5, 6}.
If we are interested only in whether the number is even or odd, the sample space
is simply
S2 = {even, odd}.
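The relationship between the two sample spaces can be made concrete with Python sets (an illustrative sketch, not part of the text): each element of S1 determines an element of S2, but not conversely:

```python
# Two sample spaces for the die-tossing experiment of Example 2.1.
S1 = {1, 2, 3, 4, 5, 6}      # number showing on the top face
S2 = {"even", "odd"}         # parity only

# Knowing the element of S1 determines the element of S2 ...
def parity(outcome):
    return "even" if outcome % 2 == 0 else "odd"

# ... but not conversely: several elements of S1 correspond to each
# element of S2.
preimage = {s2: {x for x in S1 if parity(x) == s2} for s2 in S2}
print(preimage["even"], preimage["odd"])
```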
Example 2.1 illustrates the fact that more than one sample space can be used to
describe the outcomes of an experiment. In this case, S1 provides more information
than S2. If we know which element in S1 occurs, we can tell which outcome in S2
occurs; however, a knowledge of what happens in S2 is of little help in determining
which element in S1 occurs. In general, it is desirable to use the sample space that
gives the most information concerning the outcomes of the experiment. In some
experiments, it is helpful to list the elements of the sample space systematically by
means of a tree diagram.
Example 2.2: An experiment consists of flipping a coin and then flipping it a second time if a
head occurs. If a tail occurs on the first flip, then a die is tossed once. To list
the elements of the sample space providing the most information, we construct the
tree diagram of Figure 2.1. The various paths along the branches of the tree give
the distinct sample points. Starting with the top left branch and moving to the
right along the first path, we get the sample point HH, indicating the possibility
that heads occurs on two successive flips of the coin. Likewise, the sample point
T3 indicates the possibility that the coin will show a tail followed by a 3 on the
toss of the die. By proceeding along all paths, we see that the sample space is
S = {HH, HT, T1, T2, T3, T4, T5, T6}.
Figure 2.1: Tree diagram for Example 2.2.
Many of the concepts in this chapter are best illustrated with examples involving
the use of dice and cards. These are particularly important applications to use early
in the learning process, to facilitate the flow of these new concepts into scientific
and engineering examples such as the following.
Example 2.3: Suppose that three items are selected at random from a manufacturing process.
Each item is inspected and classified defective, D, or nondefective, N. To list the
elements of the sample space providing the most information, we construct the tree
diagram of Figure 2.2. Now, the various paths along the branches of the tree give
the distinct sample points. Starting with the first path, we get the sample point
DDD, indicating the possibility that all three items inspected are defective. As we
proceed along the other paths, we see that the sample space is
S = {DDD, DDN, DND, DNN, NDD, NDN, NND, NNN}.
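Walking the branches of such a tree level by level is exactly a Cartesian product, so the sample space can be enumerated mechanically; a short Python sketch (not from the text):

```python
from itertools import product

# Enumerate the sample space of Example 2.3: each of three inspected
# items is classified defective (D) or nondefective (N).  The tree of
# Figure 2.2 corresponds to the Cartesian product {D, N}^3.
S = {"".join(path) for path in product("DN", repeat=3)}
print(sorted(S))
```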
Sample spaces with a large or infinite number of sample points are best de-
scribed by a statement or rule method. For example, if the possible outcomes
of an experiment are the set of cities in the world with a population over 1 million,
our sample space is written
S = {x | x is a city with a population over 1 million},
which reads “S is the set of all x such that x is a city with a population over 1
million.” The vertical bar is read “such that.” Similarly, if S is the set of all points
(x, y) on the boundary or the interior of a circle of radius 2 with center at the
origin, we write the rule
S = {(x, y) | x² + y² ≤ 4}.
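A rule is just a membership condition, so in code it translates directly into a predicate function; a small illustrative sketch for the circle example:

```python
# The rule method describes a sample space by a membership condition
# rather than a list.  For S = {(x, y) | x^2 + y^2 <= 4}, the rule
# becomes a predicate that tests whether a point lies in S.
def in_S(x, y):
    return x**2 + y**2 <= 4

print(in_S(1, 1), in_S(2, 2))  # True False
```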
Figure 2.2: Tree diagram for Example 2.3.
Whether we describe the sample space by the rule method or by listing the
elements will depend on the specific problem at hand. The rule method has practi-
cal advantages, particularly for many experiments where listing becomes a tedious
chore.
Consider the situation of Example 2.3 in which items from a manufacturing
process are either D, defective, or N, nondefective. There are many important
statistical procedures called sampling plans that determine whether or not a “lot”
of items is considered satisfactory. One such plan involves sampling until k defec-
tives are observed. Suppose the experiment is to sample items randomly until one
defective item is observed. The sample space for this case is
S = {D, ND, NND, NNND, . . . }.
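This sampling plan is easy to simulate; a Python sketch (the defect probability 0.2 and the seed are arbitrary illustrative choices), in which every simulated outcome is an element of S = {D, ND, NND, . . .}:

```python
import random

# Simulate the sampling plan whose sample space is
# S = {D, ND, NND, NNND, ...}: inspect randomly chosen items until
# the first defective appears.
def sample_until_defective(p_defect, rng):
    outcome = ""
    while True:
        item = "D" if rng.random() < p_defect else "N"
        outcome += item
        if item == "D":          # stop at the first defective
            return outcome

rng = random.Random(7)
runs = [sample_until_defective(0.2, rng) for _ in range(5)]
print(runs)  # each element ends in D, preceded only by Ns
```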
2.2 Events
For any given experiment, we may be interested in the occurrence of certain events
rather than in the occurrence of a specific element in the sample space. For in-
stance, we may be interested in the event A that the outcome when a die is tossed is
divisible by 3. This will occur if the outcome is an element of the subset A = {3, 6}
of the sample space S1 in Example 2.1. As a further illustration, we may be inter-
ested in the event B that the number of defectives is greater than 1 in Example
2.3. This will occur if the outcome is an element of the subset
B = {DDN, DND, NDD, DDD}
of the sample space S.
To each event we assign a collection of sample points, which constitute a subset
of the sample space. That subset represents all of the elements for which the event
is true.
Definition 2.2: An event is a subset of a sample space.
Example 2.4: Given the sample space S = {t | t ≥ 0}, where t is the life in years of a certain
electronic component, then the event A that the component fails before the end of
the fifth year is the subset A = {t | 0 ≤ t < 5}.
It is conceivable that an event may be a subset that includes the entire sample
space S or a subset of S called the null set and denoted by the symbol φ, which
contains no elements at all. For instance, if we let A be the event of detecting a
microscopic organism by the naked eye in a biological experiment, then A = φ.
Also, if
B = {x | x is an even factor of 7},
then B must be the null set, since the only possible factors of 7 are the odd numbers
1 and 7.
Consider an experiment where the smoking habits of the employees of a man-
ufacturing firm are recorded. A possible sample space might classify an individual
as a nonsmoker, a light smoker, a moderate smoker, or a heavy smoker. Let the
subset of smokers be some event. Then all the nonsmokers correspond to a different
event, also a subset of S, which is called the complement of the set of smokers.
Definition 2.3: The complement of an event A with respect to S is the subset of all elements
of S that are not in A. We denote the complement of A by the symbol A′.
Example 2.5: Let R be the event that a red card is selected from an ordinary deck of 52 playing
cards, and let S be the entire deck. Then R′ is the event that the card selected
from the deck is not a red card but a black card.
Example 2.6: Consider the sample space
S = {book, cell phone, mp3, paper, stationery, laptop}.
Let A = {book, stationery, laptop, paper}. Then the complement of A is
A′ = {cell phone, mp3}.
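With finite sample spaces, the complement is simply a set difference; a sketch using the sets of Example 2.6:

```python
# Complements as set differences, using the sample space of
# Example 2.6.
S = {"book", "cell phone", "mp3", "paper", "stationery", "laptop"}
A = {"book", "stationery", "laptop", "paper"}
A_prime = S - A    # complement of A with respect to S
print(A_prime)     # {'cell phone', 'mp3'} in some order
```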
We now consider certain operations with events that will result in the formation
of new events. These new events will be subsets of the same sample space as the
given events. Suppose that A and B are two events associated with an experiment.
In other words, A and B are subsets of the same sample space S. For example, in
the tossing of a die we might let A be the event that an even number occurs and
B the event that a number greater than 3 shows. Then the subsets A = {2, 4, 6}
and B = {4, 5, 6} are subsets of the same sample space
S = {1, 2, 3, 4, 5, 6}.
Note that both A and B will occur on a given toss if the outcome is an element of
the subset {4, 6}, which is just the intersection of A and B.
Definition 2.4: The intersection of two events A and B, denoted by the symbol A ∩ B, is the
event containing all elements that are common to A and B.
Example 2.7: Let E be the event that a person selected at random in a classroom is majoring in
engineering, and let F be the event that the person is female. Then E ∩ F is the
event of all female engineering students in the classroom.
Example 2.8: Let V = {a, e, i, o, u} and C = {l, r, s, t}; then it follows that V ∩ C = φ. That is,
V and C have no elements in common and, therefore, cannot both simultaneously
occur.
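Both examples can be checked with Python's built-in set intersection (an illustrative sketch):

```python
# Intersections with Python sets: the die events A and B share the
# elements {4, 6}, while the vowels V and consonants C of Example 2.8
# have no elements in common.
A = {2, 4, 6}          # an even number occurs
B = {4, 5, 6}          # a number greater than 3 occurs
V = {"a", "e", "i", "o", "u"}
C = {"l", "r", "s", "t"}
print(A & B)           # {4, 6}
print(V & C == set())  # True: V and C cannot occur simultaneously
```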
For certain statistical experiments it is by no means unusual to define two
events, A and B, that cannot both occur simultaneously. The events A and B are
then said to be mutually exclusive. Stated more formally, we have the following
definition:
Definition 2.5: Two events A and B are mutually exclusive, or disjoint, if A ∩ B = φ, that
is, if A and B have no elements in common.
Example 2.9: A cable television company offers programs on eight different channels, three of
which are affiliated with ABC, two with NBC, and one with CBS. The other
two are an educational channel and the ESPN sports channel. Suppose that a
person subscribing to this service turns on a television set without first selecting
the channel. Let A be the event that the program belongs to the NBC network and
B the event that it belongs to the CBS network. Since a television program cannot
belong to more than one network, the events A and B have no programs in common.
Therefore, the intersection A ∩ B contains no programs, and consequently the
events A and B are mutually exclusive.
Often one is interested in the occurrence of at least one of two events associated
with an experiment. Thus, in the die-tossing experiment, if
A = {2, 4, 6} and B = {4, 5, 6},
we might be interested in either A or B occurring or both A and B occurring. Such
an event, called the union of A and B, will occur if the outcome is an element of
the subset {2, 4, 5, 6}.
Definition 2.6: The union of the two events A and B, denoted by the symbol A∪B, is the event
containing all the elements that belong to A or B or both.
Example 2.10: Let A = {a, b, c} and B = {b, c, d, e}; then A ∪ B = {a, b, c, d, e}.
Example 2.11: Let P be the event that an employee selected at random from an oil drilling com-
pany smokes cigarettes. Let Q be the event that the employee selected drinks
alcoholic beverages. Then the event P ∪ Q is the set of all employees who either
drink or smoke or do both.
Example 2.12: If M = {x | 3 < x < 9} and N = {y | 5 < y < 12}, then
M ∪ N = {z | 3 < z < 12}.
The relationship between events and the corresponding sample space can be
illustrated graphically by means of Venn diagrams. In a Venn diagram we let
the sample space be a rectangle and represent events by circles drawn inside the
rectangle. Thus, in Figure 2.3, we see that
A ∩ B = regions 1 and 2,
B ∩ C = regions 1 and 3,
Figure 2.3: Events represented by various regions.
A ∪ C = regions 1, 2, 3, 4, 5, and 7,
B′ ∩ A = regions 4 and 7,
A ∩ B ∩ C = region 1,
(A ∪ B) ∩ C′ = regions 2, 6, and 7,
and so forth.
Figure 2.4: Events of the sample space S.
In Figure 2.4, we see that events A, B, and C are all subsets of the sample
space S. It is also clear that event B is a subset of event A; event B ∩ C has no
elements and hence B and C are mutually exclusive; event A ∩ C has at least one
element; and event A ∪ B = A. Figure 2.4 might, therefore, depict a situation
where we select a card at random from an ordinary deck of 52 playing cards and
observe whether the following events occur:
A: the card is red,
B: the card is the jack, queen, or king of diamonds,
C: the card is an ace.
Clearly, the event A ∩ C consists of only the two red aces.
Several results that follow from the foregoing definitions, which may easily be
verified by means of Venn diagrams, are as follows:
1. A ∩ φ = φ.
2. A ∪ φ = A.
3. A ∩ A′ = φ.
4. A ∪ A′ = S.
5. S′ = φ.
6. φ′ = S.
7. (A′)′ = A.
8. (A ∩ B)′ = A′ ∪ B′.
9. (A ∪ B)′ = A′ ∩ B′.
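All nine identities can be verified mechanically on a small concrete sample space; a Python sketch using set difference from S as the complement (the particular sets A and B are arbitrary):

```python
# Check the nine identities on a concrete sample space with Python
# sets, writing the complement of X as S - X.
S = {1, 2, 3, 4, 5, 6}
A, B = {2, 4, 6}, {4, 5, 6}
phi = set()
comp = lambda X: S - X

assert A & phi == phi and A | phi == A            # 1, 2
assert A & comp(A) == phi and A | comp(A) == S    # 3, 4
assert comp(S) == phi and comp(phi) == S          # 5, 6
assert comp(comp(A)) == A                         # 7
assert comp(A & B) == comp(A) | comp(B)           # 8 (De Morgan)
assert comp(A | B) == comp(A) & comp(B)           # 9 (De Morgan)
print("all nine identities hold")
```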
Exercises
2.1 List the elements of each of the following sample
spaces:
(a) the set of integers between 1 and 50 divisible by 8;
(b) the set S = {x | x² + 4x − 5 = 0};
(c) the set of outcomes when a coin is tossed until a
tail or three heads appear;
(d) the set S = {x | x is a continent};
(e) the set S = {x | 2x − 4 ≥ 0 and x < 1}.
2.2 Use the rule method to describe the sample space
S consisting of all points in the first quadrant inside a
circle of radius 3 with center at the origin.
2.3 Which of the following events are equal?
(a) A = {1, 3};
(b) B = {x | x is a number on a die};
(c) C = {x | x² − 4x + 3 = 0};
(d) D = {x | x is the number of heads when six coins
are tossed}.
2.4 An experiment involves tossing a pair of dice, one
green and one red, and recording the numbers that
come up. If x equals the outcome on the green die
and y the outcome on the red die, describe the sample
space S
(a) by listing the elements (x, y);
(b) by using the rule method.
2.5 An experiment consists of tossing a die and then
flipping a coin once if the number on the die is even. If
the number on the die is odd, the coin is flipped twice.
Using the notation 4H, for example, to denote the out-
come that the die comes up 4 and then the coin comes
up heads, and 3HT to denote the outcome that the die
comes up 3 followed by a head and then a tail on the
coin, construct a tree diagram to show the 18 elements
of the sample space S.
2.6 Two jurors are selected from 4 alternates to serve
at a murder trial. Using the notation A1A3, for exam-
ple, to denote the simple event that alternates 1 and 3
are selected, list the 6 elements of the sample space S.
2.7 Four students are selected at random from a
chemistry class and classified as male or female. List
the elements of the sample space S1, using the letter
M for male and F for female. Define a second sample
space S2 where the elements represent the number of
females selected.
2.8 For the sample space of Exercise 2.4,
(a) list the elements corresponding to the event A that
the sum is greater than 8;
(b) list the elements corresponding to the event B that
a 2 occurs on either die;
(c) list the elements corresponding to the event C that
a number greater than 4 comes up on the green die;
(d) list the elements corresponding to the event A ∩ C;
(e) list the elements corresponding to the event A ∩ B;
(f) list the elements corresponding to the event B ∩ C;
(g) construct a Venn diagram to illustrate the intersec-
tions and unions of the events A, B, and C.
2.9 For the sample space of Exercise 2.5,
(a) list the elements corresponding to the event A that
a number less than 3 occurs on the die;
(b) list the elements corresponding to the event B that
two tails occur;
(c) list the elements corresponding to the event A′;
(d) list the elements corresponding to the event A′ ∩ B;
(e) list the elements corresponding to the event A ∪ B.
2.10 An engineering firm is hired to determine if cer-
tain waterways in Virginia are safe for fishing. Samples
are taken from three rivers.
(a) List the elements of a sample space S, using the
letters F for safe to fish and N for not safe to fish.
(b) List the elements of S corresponding to event E
that at least two of the rivers are safe for fishing.
(c) Define an event that has as its elements the points
{FFF, NFF, FFN, NFN}.
2.11 The resumés of two male applicants for a college
teaching position in chemistry are placed in the same
file as the resumés of two female applicants. Two po-
sitions become available, and the first, at the rank of
assistant professor, is filled by selecting one of the four
applicants at random. The second position, at the rank
of instructor, is then filled by selecting at random one
of the remaining three applicants. Using the notation
M2F1, for example, to denote the simple event that
the first position is filled by the second male applicant
and the second position is then filled by the first female
applicant,
(a) list the elements of a sample space S;
(b) list the elements of S corresponding to event A that
the position of assistant professor is filled by a male
applicant;
(c) list the elements of S corresponding to event B that
exactly one of the two positions is filled by a male
applicant;
(d) list the elements of S corresponding to event C that
neither position is filled by a male applicant;
(e) list the elements of S corresponding to the event
A ∩ B;
(f) list the elements of S corresponding to the event
A ∪ C;
(g) construct a Venn diagram to illustrate the intersec-
tions and unions of the events A, B, and C.
2.12 Exercise and diet are being studied as possi-
ble substitutes for medication to lower blood pressure.
Three groups of subjects will be used to study the ef-
fect of exercise. Group 1 is sedentary, while group 2
walks and group 3 swims for 1 hour a day. Half of each
of the three exercise groups will be on a salt-free diet.
An additional group of subjects will not exercise or re-
strict their salt, but will take the standard medication.
Use Z for sedentary, W for walker, S for swimmer, Y
for salt, N for no salt, M for medication, and F for
medication free.
(a) Show all of the elements of the sample space S.
(b) Given that A is the set of nonmedicated subjects
and B is the set of walkers, list the elements of
A ∪ B.
(c) List the elements of A ∩ B.
2.13 Construct a Venn diagram to illustrate the pos-
sible intersections and unions for the following events
relative to the sample space consisting of all automo-
biles made in the United States.
F : Four door, S : Sun roof, P : Power steering.
2.14 If S = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} and A =
{0, 2, 4, 6, 8}, B = {1, 3, 5, 7, 9}, C = {2, 3, 4, 5}, and
D = {1, 6, 7}, list the elements of the sets correspond-
ing to the following events:
(a) A ∪ C;
(b) A ∩ B;
(c) C′;
(d) (C′ ∩ D) ∪ B;
(e) (S ∩ C)′;
(f) A ∩ C ∩ D′.
2.15 Consider the sample space S = {copper, sodium,
nitrogen, potassium, uranium, oxygen, zinc} and the
events
A = {copper, sodium, zinc},
B = {sodium, nitrogen, potassium},
C = {oxygen}.
List the elements of the sets corresponding to the fol-
lowing events:
(a) A′;
(b) A ∪ C;
(c) (A ∩ B′) ∪ C′;
(d) B′ ∩ C′;
(e) A ∩ B ∩ C;
(f) (A′ ∪ B′) ∩ (A′ ∩ C).
2.16 If S = {x | 0 < x < 12}, M = {x | 1 < x < 9},
and N = {x | 0 < x < 5}, find
(a) M ∪ N;
(b) M ∩ N;
(c) M′ ∩ N′.
2.17 Let A, B, and C be events relative to the sam-
ple space S. Using Venn diagrams, shade the areas
representing the following events:
(a) (A ∩ B)′;
(b) (A ∪ B)′;
(c) (A ∩ C) ∪ B.
2.18 Which of the following pairs of events are mutu-
ally exclusive?
(a) A golfer scoring the lowest 18-hole round in a 72-
hole tournament and losing the tournament.
(b) A poker player getting a flush (all cards in the same
suit) and 3 of a kind on the same 5-card hand.
(c) A mother giving birth to a baby girl and a set of
twin daughters on the same day.
(d) A chess player losing the last game and winning the
match.
2.19 Suppose that a family is leaving on a summer
vacation in their camper and that M is the event that
they will experience mechanical problems, T is the
event that they will receive a ticket for committing a
traffic violation, and V is the event that they will ar-
rive at a campsite with no vacancies. Referring to the
Venn diagram of Figure 2.5, state in words the events
represented by the following regions:
(a) region 5;
(b) region 3;
(c) regions 1 and 2 together;
(d) regions 4 and 7 together;
(e) regions 3, 6, 7, and 8 together.
2.20 Referring to Exercise 2.19 and the Venn diagram
of Figure 2.5, list the numbers of the regions that rep-
resent the following events:
(a) The family will experience no mechanical problems
and will not receive a ticket for a traffic violation
but will arrive at a campsite with no vacancies.
(b) The family will experience both mechanical prob-
lems and trouble in locating a campsite with a va-
cancy but will not receive a ticket for a traffic vio-
lation.
(c) The family will either have mechanical trouble or
arrive at a campsite with no vacancies but will not
receive a ticket for a traffic violation.
(d) The family will not arrive at a campsite with no
vacancies.
[Venn diagram: three intersecting circles labeled M, T, and V, with the eight regions they form numbered 1 through 8.]
Figure 2.5: Venn diagram for Exercises 2.19 and 2.20.
2.3 Counting Sample Points
One of the problems that the statistician must consider and attempt to evaluate
is the element of chance associated with the occurrence of certain events when
an experiment is performed. These problems belong in the field of probability, a
subject to be introduced in Section 2.4. In many cases, we shall be able to solve a
probability problem by counting the number of points in the sample space without
actually listing each element. The fundamental principle of counting, often referred
to as the multiplication rule, is stated in Rule 2.1.
Rule 2.1: If an operation can be performed in n1 ways, and if for each of these ways a second
operation can be performed in n2 ways, then the two operations can be performed
together in n1n2 ways.
Example 2.13: How many sample points are there in the sample space when a pair of dice is
thrown once?
Solution: The first die can land face-up in any one of n1 = 6 ways. For each of these 6 ways,
the second die can also land face-up in n2 = 6 ways. Therefore, the pair of dice
can land in n1n2 = (6)(6) = 36 possible ways.
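As a quick computational check of Rule 2.1 (an illustrative Python sketch, not part of the text), the sample space for a pair of dice is the Cartesian product of the faces of each die:

```python
from itertools import product

# Each die can land face-up in 6 ways; the sample space for the pair
# is the Cartesian product {1,...,6} x {1,...,6}.
dice = list(product(range(1, 7), repeat=2))
assert len(dice) == 6 * 6 == 36
print(len(dice))   # 36
```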
Example 2.14: A developer of a new subdivision offers prospective home buyers a choice of Tudor,
rustic, colonial, and traditional exterior styling in ranch, two-story, and split-level
floor plans. In how many different ways can a buyer order one of these homes?
[Tree diagram: first-level branches for the four exterior styles (Tudor, Rustic, Colonial, Traditional), each splitting into three branches for the floor plans (Ranch, Two-Story, Split-Level).]
Figure 2.6: Tree diagram for Example 2.14.
Solution: Since n1 = 4 and n2 = 3, a buyer must choose from
n1n2 = (4)(3) = 12 possible homes.
The answers to the two preceding examples can be verified by constructing
tree diagrams and counting the various paths along the branches. For instance,
in Example 2.14 there will be n1 = 4 branches corresponding to the different
exterior styles, and then there will be n2 = 3 branches extending from each of
these 4 branches to represent the different floor plans. This tree diagram yields the
n1n2 = 12 choices of homes given by the paths along the branches, as illustrated
in Figure 2.6.
Example 2.15: If a 22-member club needs to elect a chair and a treasurer, in how many
different ways can these two officers be elected?
Solution: For the chair position, there are 22 total possibilities. For each of those 22 pos-
sibilities, there are 21 possibilities to elect the treasurer. Using the multiplication
rule, we obtain n1 × n2 = 22 × 21 = 462 different ways.
The multiplication rule, Rule 2.1, may be extended to cover any number of
operations. Suppose, for instance, that a customer wishes to buy a new cell phone
and can choose from n1 = 5 brands, n2 = 5 sets of capability, and n3 = 4 colors.
These three classifications result in n1n2n3 = (5)(5)(4) = 100 different ways for
a customer to order one of these phones. The generalized multiplication rule
covering k operations is stated in the following.
Rule 2.2: If an operation can be performed in n1 ways, and if for each of these a second
operation can be performed in n2 ways, and for each of the first two a third
operation can be performed in n3 ways, and so forth, then the sequence of k
operations can be performed in n1n2 · · · nk ways.
Example 2.16: Sam is going to assemble a computer by himself. He has the choice of chips from
two brands, a hard drive from four, memory from three, and an accessory bundle
from five local stores. How many different ways can Sam order the parts?
Solution: Since n1 = 2, n2 = 4, n3 = 3, and n4 = 5, there are
n1 × n2 × n3 × n4 = 2 × 4 × 3 × 5 = 120
different ways to order the parts.
Example 2.17: How many even four-digit numbers can be formed from the digits 0, 1, 2, 5, 6, and
9 if each digit can be used only once?
Solution: Since the number must be even, we have only n1 = 3 choices for the units position.
However, for a four-digit number the thousands position cannot be 0. Hence, we
consider the units position in two parts, 0 or not 0. If the units position is 0 (i.e.,
n1 = 1), we have n2 = 5 choices for the thousands position, n3 = 4 for the hundreds
position, and n4 = 3 for the tens position. Therefore, in this case we have a total
of
n1n2n3n4 = (1)(5)(4)(3) = 60
even four-digit numbers. On the other hand, if the units position is not 0 (i.e.,
n1 = 2), we have n2 = 4 choices for the thousands position, n3 = 4 for the hundreds
position, and n4 = 3 for the tens position. In this situation, there are a total of
n1n2n3n4 = (2)(4)(4)(3) = 96
even four-digit numbers.
Since the above two cases are mutually exclusive, the total number of even
four-digit numbers can be calculated as 60 + 96 = 156.
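The case analysis above lends itself to a direct brute-force check (an illustrative Python sketch, not part of the text):

```python
# Brute-force check of Example 2.17: count the even four-digit numbers
# formed from the digits 0, 1, 2, 5, 6, 9 with no digit repeated.
from itertools import permutations

digits = [0, 1, 2, 5, 6, 9]
count = sum(
    1
    for p in permutations(digits, 4)
    if p[0] != 0          # a four-digit number cannot start with 0
    and p[3] % 2 == 0     # even numbers must end in 0, 2, or 6
)
assert count == 156
print(count)   # 156
```

Enumerating all 360 four-digit arrangements and filtering reproduces the 60 + 96 = 156 obtained from the two mutually exclusive cases.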
Frequently, we are interested in a sample space that contains as elements all
possible orders or arrangements of a group of objects. For example, we may want
to know how many different arrangements are possible for sitting 6 people around
a table, or we may ask how many different orders are possible for drawing 2 lottery
tickets from a total of 20. The different arrangements are called permutations.
Definition 2.7: A permutation is an arrangement of all or part of a set of objects.
Consider the three letters a, b, and c. The possible permutations are abc, acb,
bac, bca, cab, and cba. Thus, we see that there are 6 distinct arrangements. Using
Rule 2.2, we could arrive at the answer 6 without actually listing the different
orders by the following arguments: There are n1 = 3 choices for the first position.
No matter which letter is chosen, there are always n2 = 2 choices for the second
position. No matter which two letters are chosen for the first two positions, there
is only n3 = 1 choice for the last position, giving a total of
n1n2n3 = (3)(2)(1) = 6 permutations
by Rule 2.2. In general, n distinct objects can be arranged in
n(n − 1)(n − 2) · · · (3)(2)(1) ways.
There is a notation for such a number.
Definition 2.8: For any non-negative integer n, n!, called “n factorial,” is defined as
n! = n(n − 1) · · · (2)(1),
with special case 0! = 1.
Using the argument above, we arrive at the following theorem.
Theorem 2.1: The number of permutations of n objects is n!.
The number of permutations of the four letters a, b, c, and d will be 4! = 24.
Now consider the number of permutations that are possible by taking two letters
at a time from four. These would be ab, ac, ad, ba, bc, bd, ca, cb, cd, da, db, and
dc. Using Rule 2.1 again, we have two positions to fill, with n1 = 4 choices for the
first and then n2 = 3 choices for the second, for a total of
n1n2 = (4)(3) = 12
permutations. In general, n distinct objects taken r at a time can be arranged in
n(n − 1)(n − 2) · · · (n − r + 1)
ways. We represent this product by the symbol
nPr = n!/(n − r)!.
As a result, we have the theorem that follows.
Theorem 2.2: The number of permutations of n distinct objects taken r at a time is
nPr = n!/(n − r)!.
Example 2.18: In one year, three awards (research, teaching, and service) will be given to a class
of 25 graduate students in a statistics department. If each student can receive at
most one award, how many possible selections are there?
Solution: Since the awards are distinguishable, it is a permutation problem. The total
number of sample points is
25P3 = 25!/(25 − 3)! = 25!/22! = (25)(24)(23) = 13,800.
Example 2.19: A president and a treasurer are to be chosen from a student club consisting of 50
people. How many different choices of officers are possible if
(a) there are no restrictions;
(b) A will serve only if he is president;
(c) B and C will serve together or not at all;
(d) D and E will not serve together?
Solution: (a) The total number of choices of officers, without any restrictions, is
50P2 = 50!/48! = (50)(49) = 2450.
(b) Since A will serve only if he is president, we have two situations here: (i) A is
selected as the president, which yields 49 possible outcomes for the treasurer’s
position, or (ii) officers are selected from the remaining 49 people without A,
which has the number of choices 49P2 = (49)(48) = 2352. Therefore, the total
number of choices is 49 + 2352 = 2401.
(c) The number of selections when B and C serve together is 2. The number of
selections when both B and C are not chosen is 48P2 = 2256. Therefore, the
total number of choices in this situation is 2 + 2256 = 2258.
(d) The number of selections when D serves as an officer but not E is (2)(48) =
96, where 2 is the number of positions D can take and 48 is the number of
selections of the other officer from the remaining people in the club except
E. The number of selections when E serves as an officer but not D is also
(2)(48) = 96. The number of selections when both D and E are not chosen
is 48P2 = 2256. Therefore, the total number of choices is (2)(96) + 2256 =
2448. This problem also has another short solution: Since D and E can only
serve together in 2 ways, the answer is 2450 − 2 = 2448.
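All four counts in Example 2.19 can also be confirmed by exhaustive enumeration. In this sketch (illustrative Python, not from the text), the club members are labeled 0 through 49, with the first five standing in for A through E:

```python
# Brute-force check of Example 2.19: (president, treasurer) chosen from 50 people.
from itertools import permutations

people = range(50)                       # label club members 0..49
A, B, C, D, E = 0, 1, 2, 3, 4            # the five named members

choices = list(permutations(people, 2))  # ordered (president, treasurer) pairs
assert len(choices) == 2450              # (a) no restrictions

# (b) A serves only if A is president: either A is uninvolved or A is president
b = [c for c in choices if A not in c or c[0] == A]
assert len(b) == 2401

# (c) B and C serve together or not at all
c_ = [c for c in choices if (B in c) == (C in c)]
assert len(c_) == 2258

# (d) D and E will not serve together
d = [c for c in choices if not (D in c and E in c)]
assert len(d) == 2448
print("all four counts confirmed")
```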
Permutations that occur by arranging objects in a circle are called circular
permutations. Two circular permutations are not considered different unless
corresponding objects in the two arrangements are preceded or followed by a dif-
ferent object as we proceed in a clockwise direction. For example, if 4 people are
playing bridge, we do not have a new permutation if they all move one position in
a clockwise direction. By considering one person in a fixed position and arranging
the other three in 3! ways, we find that there are 6 distinct arrangements for the
bridge game.
Theorem 2.3: The number of permutations of n objects arranged in a circle is (n − 1)!.
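Theorem 2.3 can be checked by enumeration for small n; this sketch (illustrative Python, not from the text) treats two seatings as identical when one is a rotation of the other, mirroring the fixed-position argument above:

```python
# Check Theorem 2.3 for n = 4: count circular arrangements by reducing
# every seating to a canonical rotation (person 0 seated first).
from itertools import permutations
from math import factorial

n = 4
seatings = set()
for p in permutations(range(n)):
    i = p.index(0)
    seatings.add(p[i:] + p[:i])   # rotate so that person 0 comes first

assert len(seatings) == factorial(n - 1) == 6
print(len(seatings))   # 6
```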
So far we have considered permutations of distinct objects. That is, all the
objects were completely different or distinguishable. Obviously, if the letters b and
c are both equal to x, then the 6 permutations of the letters a, b, and c become
axx, axx, xax, xax, xxa, and xxa, of which only 3 are distinct. Therefore, with 3
letters, 2 being the same, we have 3!/2! = 3 distinct permutations. With 4 different
letters a, b, c, and d, we have 24 distinct permutations. If we let a = b = x and
c = d = y, we can list only the following distinct permutations: xxyy, xyxy, yxxy,
yyxx, xyyx, and yxyx. Thus, we have 4!/(2! 2!) = 6 distinct permutations.
Theorem 2.4: The number of distinct permutations of n things of which n1 are of one kind, n2
of a second kind, . . . , nk of a kth kind is
n!
n1!n2! · · · nk!
.
Example 2.20: In a college football training session, the defensive coordinator needs to have 10
players standing in a row. Among these 10 players, there are 1 freshman, 2 sopho-
mores, 4 juniors, and 3 seniors. How many different ways can they be arranged in
a row if only their class level will be distinguished?
Solution: Directly using Theorem 2.4, we find that the total number of arrangements is
10!/(1! 2! 4! 3!) = 12,600.
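Both the enumeration argument for xxyy and the formula of Theorem 2.4 can be checked computationally (an illustrative Python sketch, not part of the text):

```python
from itertools import permutations
from math import factorial

# Direct enumeration for the letters x, x, y, y: 4!/(2! 2!) = 6 distinct orders.
assert len(set(permutations("xxyy"))) == 6

# Theorem 2.4 applied to Example 2.20: 10 players with class sizes 1, 2, 4, 3.
total = factorial(10) // (factorial(1) * factorial(2) * factorial(4) * factorial(3))
assert total == 12_600
print(total)   # 12600
```

Collecting permutations into a set discards duplicate arrangements, which is exactly the overcounting that the factorials in the denominator remove.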
Often we are concerned with the number of ways of partitioning a set of n
objects into r subsets called cells. A partition has been achieved if the intersection
of every possible pair of the r subsets is the empty set φ and if the union of all
subsets gives the original set. The order of the elements within a cell is of no
importance. Consider the set {a, e, i, o, u}. The possible partitions into two cells
in which the first cell contains 4 elements and the second cell 1 element are
{(a, e, i, o), (u)}, {(a, i, o, u), (e)}, {(e, i, o, u), (a)}, {(a, e, o, u), (i)}, {(a, e, i, u), (o)}.
We see that there are 5 ways to partition a set of 5 elements into two subsets, or
cells, containing 4 elements in the first cell and 1 element in the second.
The number of partitions for this illustration is denoted by the symbol
(5 choose 4, 1) = 5!/(4! 1!) = 5,
where 5 represents the total number of elements and 4 and 1 represent the numbers
of elements going into the two cells. We state this more generally in Theorem 2.5.
Theorem 2.5: The number of ways of partitioning a set of n objects into r cells with n1 elements
in the first cell, n2 elements in the second, and so forth, is
(n choose n1, n2, . . . , nr) = n!/(n1! n2! · · · nr!),
where n1 + n2 + · · · + nr = n.
Example 2.21: In how many ways can 7 graduate students be assigned to 1 triple and 2 double
hotel rooms during a conference?
Solution: The total number of possible partitions would be
(7 choose 3, 2, 2) = 7!/(3! 2! 2!) = 210.
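The multinomial count in Example 2.21 can also be built up as a product of successive binomial choices (pick the triple room's occupants, then one double's, then the other's), which the following sketch (illustrative Python, not from the text) confirms:

```python
# Check Example 2.21: partition 7 students into cells of sizes 3, 2, 2.
from math import comb, factorial

# The multinomial coefficient 7!/(3! 2! 2!) ...
direct = factorial(7) // (factorial(3) * factorial(2) * factorial(2))

# ... equals a product of successive binomial choices.
stepwise = comb(7, 3) * comb(4, 2) * comb(2, 2)

assert direct == stepwise == 210
print(direct)   # 210
```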
In many problems, we are interested in the number of ways of selecting r objects
from n without regard to order. These selections are called combinations. A
combination is actually a partition with two cells, the one cell containing the r
objects selected and the other cell containing the (n − r) objects that are left. The
number of such combinations, denoted by (n choose r, n − r), is usually shortened
to (n choose r),
since the number of elements in the second cell must be n − r.
Theorem 2.6: The number of combinations of n distinct objects taken r at a time is
(n choose r) = n!/(r! (n − r)!).
Example 2.22: A young boy asks his mother to get 5 Game-Boy™ cartridges from his collection
of 10 arcade and 5 sports games. How many ways are there that his mother can
get 3 arcade and 2 sports games?
Solution: The number of ways of selecting 3 cartridges from 10 is
(10 choose 3) = 10!/(3! (10 − 3)!) = 120.
The number of ways of selecting 2 cartridges from 5 is
(5 choose 2) = 5!/(2! 3!) = 10.
Using the multiplication rule (Rule 2.1) with n1 = 120 and n2 = 10, we have
(120)(10) = 1200 ways.
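Python's standard library exposes this count directly as math.comb; the following sketch (illustrative, not part of the text) reproduces the result of Example 2.22:

```python
# Check Example 2.22: choose 3 of 10 arcade games and 2 of 5 sports games.
from math import comb

arcade = comb(10, 3)        # 10!/(3! 7!) = 120
sports = comb(5, 2)         # 5!/(2! 3!)  = 10
ways = arcade * sports      # multiplication rule (Rule 2.1)

assert arcade == 120 and sports == 10
assert ways == 1200
print(ways)   # 1200
```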
Example 2.23: How many different letter arrangements can be made from the letters in the word
STATISTICS?
Solution: Using the same argument as in the discussion for Theorem 2.6, in this example we
can actually apply Theorem 2.5 to obtain
(10 choose 3, 3, 2, 1, 1) = 10!/(3! 3! 2! 1! 1!) = 50,400.
Here we have 10 total letters, with 2 letters (S, T) appearing 3 times each, letter
I appearing twice, and letters A and C appearing once each. On the other hand,
this result can be directly obtained by using Theorem 2.4.
Exercises
2.21 Registrants at a large convention are offered 6
sightseeing tours on each of 3 days. In how many
ways can a person arrange to go on a sightseeing tour
planned by this convention?
2.22 In a medical study, patients are classified in 8
ways according to whether they have blood type AB+,
AB−, A+, A−, B+, B−, O+, or O−, and also accord-
ing to whether their blood pressure is low, normal, or
high. Find the number of ways in which a patient can
be classified.
2.23 If an experiment consists of throwing a die and
then drawing a letter at random from the English
alphabet, how many points are there in the sample
space?
2.24 Students at a private liberal arts college are clas-
sified as being freshmen, sophomores, juniors, or se-
niors, and also according to whether they are male or
female. Find the total number of possible classifica-
tions for the students of that college.
2.25 A certain brand of shoes comes in 5 different
styles, with each style available in 4 distinct colors. If
the store wishes to display pairs of these shoes showing
all of its various styles and colors, how many different
pairs will the store have on display?
2.26 A California study concluded that following 7
simple health rules can extend a man’s life by 11 years
on the average and a woman’s life by 7 years. These
7 rules are as follows: no smoking, get regular exer-
cise, use alcohol only in moderation, get 7 to 8 hours
of sleep, maintain proper weight, eat breakfast, and do
not eat between meals. In how many ways can a person
adopt 5 of these rules to follow
(a) if the person presently violates all 7 rules?
(b) if the person never drinks and always eats break-
fast?
2.27 A developer of a new subdivision offers a
prospective home buyer a choice of 4 designs, 3 differ-
ent heating systems, a garage or carport, and a patio or
screened porch. How many different plans are available
to this buyer?
2.28 A drug for the relief of asthma can be purchased
from 5 different manufacturers in liquid, tablet, or
capsule form, all of which come in regular and extra
strength. How many different ways can a doctor pre-
scribe the drug for a patient suffering from asthma?
2.29 In a fuel economy study, each of 3 race cars is
tested using 5 different brands of gasoline at 7 test sites
located in different regions of the country. If 2 drivers
are used in the study, and test runs are made once un-
der each distinct set of conditions, how many test runs
are needed?
2.30 In how many different ways can a true-false test
consisting of 9 questions be answered?
2.31 A witness to a hit-and-run accident told the po-
lice that the license number contained the letters RLH
followed by 3 digits, the first of which was a 5. If
the witness cannot recall the last 2 digits, but is cer-
tain that all 3 digits are different, find the maximum
number of automobile registrations that the police may
have to check.
2.32 (a) In how many ways can 6 people be lined up
to get on a bus?
(b) If 3 specific persons, among 6, insist on following
each other, how many ways are possible?
(c) If 2 specific persons, among 6, refuse to follow each
other, how many ways are possible?
2.33 If a multiple-choice test consists of 5 questions,
each with 4 possible answers of which only 1 is correct,
(a) in how many different ways can a student check off
one answer to each question?
(b) in how many ways can a student check off one
answer to each question and get all the answers
wrong?
2.34 (a) How many distinct permutations can be
made from the letters of the word COLUMNS?
(b) How many of these permutations start with the let-
ter M?
2.35 A contractor wishes to build 9 houses, each dif-
ferent in design. In how many ways can he place these
houses on a street if 6 lots are on one side of the street
and 3 lots are on the opposite side?
2.36 (a) How many three-digit numbers can be
formed from the digits 0, 1, 2, 3, 4, 5, and 6 if
each digit can be used only once?
(b) How many of these are odd numbers?
(c) How many are greater than 330?
2.37 In how many ways can 4 boys and 5 girls sit in
a row if the boys and girls must alternate?
2.38 Four married couples have bought 8 seats in the
same row for a concert. In how many different ways
can they be seated
(a) with no restrictions?
(b) if each couple is to sit together?
(c) if all the men sit together to the right of all the
women?
2.39 In a regional spelling bee, the 8 finalists consist
of 3 boys and 5 girls. Find the number of sample points
in the sample space S for the number of possible orders
at the conclusion of the contest for
(a) all 8 finalists;
(b) the first 3 positions.
2.40 In how many ways can 5 starting positions on a
basketball team be filled with 8 men who can play any
of the positions?
2.41 Find the number of ways that 6 teachers can
be assigned to 4 sections of an introductory psychol-
ogy course if no teacher is assigned to more than one
section.
2.42 Three lottery tickets for first, second, and third
prizes are drawn from a group of 40 tickets. Find the
number of sample points in S for awarding the 3 prizes
if each contestant holds only 1 ticket.
2.43 In how many ways can 5 different trees be
planted in a circle?
2.44 In how many ways can a caravan of 8 covered
wagons from Arizona be arranged in a circle?
2.45 How many distinct permutations can be made
from the letters of the word INFINITY ?
2.46 In how many ways can 3 oaks, 4 pines, and 2
maples be arranged along a property line if one does
not distinguish among trees of the same kind?
2.47 How many ways are there to select 3 candidates
from 8 equally qualified recent graduates for openings
in an accounting firm?
2.48 How many ways are there that no two students
will have the same birth date in a class of size 60?
2.4 Probability of an Event
Perhaps it was humankind’s unquenchable thirst for gambling that led to the early
development of probability theory. In an effort to increase their winnings, gam-
blers called upon mathematicians to provide optimum strategies for various games
of chance. Some of the mathematicians providing these strategies were Pascal,
Leibniz, Fermat, and James Bernoulli. As a result of this development of prob-
ability theory, statistical inference, with all its predictions and generalizations,
has branched out far beyond games of chance to encompass many other fields as-
sociated with chance occurrences, such as politics, business, weather forecasting,
and scientific research. For these predictions and generalizations to be reasonably
accurate, an understanding of basic probability theory is essential.
What do we mean when we make the statement “John will probably win the
tennis match,” or “I have a fifty-fifty chance of getting an even number when a
die is tossed,” or “The university is not likely to win the football game tonight,”
or “Most of our graduating class will likely be married within 3 years”? In each
case, we are expressing an outcome of which we are not certain, but owing to past
information or from an understanding of the structure of the experiment, we have
some degree of confidence in the validity of the statement.
Throughout the remainder of this chapter, we consider only those experiments
for which the sample space contains a finite number of elements. The likelihood of
the occurrence of an event resulting from such a statistical experiment is evaluated
by means of a set of real numbers, called weights or probabilities, ranging from
0 to 1. To every point in the sample space we assign a probability such that the
sum of all probabilities is 1. If we have reason to believe that a certain sample
point is quite likely to occur when the experiment is conducted, the probability
assigned should be close to 1. On the other hand, a probability closer to 0 is
assigned to a sample point that is not likely to occur. In many experiments, such
as tossing a coin or a die, all the sample points have the same chance of occurring
and are assigned equal probabilities. For points outside the sample space, that is,
for simple events that cannot possibly occur, we assign a probability of 0.
To find the probability of an event A, we sum all the probabilities assigned to
the sample points in A. This sum is called the probability of A and is denoted
by P(A).
Definition 2.9: The probability of an event A is the sum of the weights of all sample points in
A. Therefore,
0 ≤ P(A) ≤ 1, P(φ) = 0, and P(S) = 1.
Furthermore, if A1, A2, A3, . . . is a sequence of mutually exclusive events, then
P(A1 ∪ A2 ∪ A3 ∪ · · · ) = P(A1) + P(A2) + P(A3) + · · · .
Example 2.24: A coin is tossed twice. What is the probability that at least 1 head occurs?
Solution: The sample space for this experiment is
S = {HH, HT, TH, TT}.
If the coin is balanced, each of these outcomes is equally likely to occur. Therefore,
we assign a probability of ω to each sample point. Then 4ω = 1, or ω = 1/4. If A
represents the event of at least 1 head occurring, then
A = {HH, HT, TH} and P(A) = 1/4 + 1/4 + 1/4 = 3/4.
Example 2.25: A die is loaded in such a way that an even number is twice as likely to occur as an
odd number. If E is the event that a number less than 4 occurs on a single toss of
the die, find P(E).
Solution: The sample space is S = {1, 2, 3, 4, 5, 6}. We assign a probability of w to each
odd number and a probability of 2w to each even number. Since the sum of the
probabilities must be 1, we have 9w = 1 or w = 1/9. Hence, probabilities of 1/9
and 2/9 are assigned to each odd and even number, respectively. Therefore,
E = {1, 2, 3} and P(E) = 1/9 + 2/9 + 1/9 = 4/9.
Example 2.26: In Example 2.25, let A be the event that an even number turns up and let B be
the event that a number divisible by 3 occurs. Find P(A ∪ B) and P(A ∩ B).
Solution: For the events A = {2, 4, 6} and B = {3, 6}, we have
A ∪ B = {2, 3, 4, 6} and A ∩ B = {6}.
By assigning a probability of 1/9 to each odd number and 2/9 to each even number,
we have
P(A ∪ B) = 2/9 + 1/9 + 2/9 + 2/9 = 7/9 and P(A ∩ B) = 2/9.
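The weighted assignment in Examples 2.25 and 2.26 can be verified with exact rational arithmetic (an illustrative Python sketch, not part of the text):

```python
# Check Examples 2.25 and 2.26: a die for which each even face is twice
# as likely as each odd face.
from fractions import Fraction

w = Fraction(1, 9)                                   # weight of each odd face (9w = 1)
p = {f: (2 * w if f % 2 == 0 else w) for f in range(1, 7)}
assert sum(p.values()) == 1                          # weights sum to 1

E = {1, 2, 3}        # a number less than 4
A = {2, 4, 6}        # an even number
B = {3, 6}           # a number divisible by 3

def prob(event):
    """P(event) is the sum of the weights of its sample points."""
    return sum(p[f] for f in event)

assert prob(E) == Fraction(4, 9)
assert prob(A | B) == Fraction(7, 9)
assert prob(A & B) == Fraction(2, 9)
print(prob(E), prob(A | B), prob(A & B))   # 4/9 7/9 2/9
```

Using Fraction avoids floating-point rounding, so the computed probabilities match the hand-derived ones exactly.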
If the sample space for an experiment contains N elements, all of which are
equally likely to occur, we assign a probability equal to 1/N to each of the N
points. The probability of any event A containing n of these N sample points is
then the ratio of the number of elements in A to the number of elements in S.
Rule 2.3: If an experiment can result in any one of N different equally likely outcomes, and
if exactly n of these outcomes correspond to event A, then the probability of event
A is
P(A) = n/N.
Example 2.27: A statistics class for engineers consists of 25 industrial, 10 mechanical, 10 electrical,
and 8 civil engineering students. If a person is randomly selected by the instruc-
tor to answer a question, find the probability that the student chosen is (a) an
industrial engineering major and (b) a civil engineering or an electrical engineering
major.
Solution: Denote by I, M, E, and C the students majoring in industrial, mechanical, electri-
cal, and civil engineering, respectively. The total number of students in the class
is 53, all of whom are equally likely to be selected.
(a) Since 25 of the 53 students are majoring in industrial engineering, the prob-
ability of event I, selecting an industrial engineering major at random, is
P(I) = 25/53.
(b) Since 18 of the 53 students are civil or electrical engineering majors, it follows
that
P(C ∪ E) = 18/53.
Example 2.28: In a poker hand consisting of 5 cards, find the probability of holding 2 aces and 3
jacks.
Solution: The number of ways of being dealt 2 aces from 4 cards is
(4 choose 2) = 4!/(2! 2!) = 6,
and the number of ways of being dealt 3 jacks from 4 cards is
(4 choose 3) = 4!/(3! 1!) = 4.
By the multiplication rule (Rule 2.1), there are n = (6)(4) = 24 hands with 2 aces
and 3 jacks. The total number of 5-card poker hands, all of which are equally
likely, is
N = (52 choose 5) = 52!/(5! 47!) = 2,598,960.
Therefore, the probability of getting 2 aces and 3 jacks in a 5-card poker hand is
P(C) = 24/2,598,960 = 0.9 × 10⁻⁵.
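The counting in Example 2.28 is a direct application of `math.comb`; this quick Python check (not part of the original text) reproduces each count:

```python
from math import comb

aces = comb(4, 2)     # ways to choose 2 aces from 4
jacks = comb(4, 3)    # ways to choose 3 jacks from 4
hands = comb(52, 5)   # all equally likely 5-card hands

p = aces * jacks / hands
print(aces, jacks, hands)  # 6 4 2598960
print(f"{p:.2e}")          # 9.23e-06, i.e., about 0.9 x 10^-5
```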
If the outcomes of an experiment are not equally likely to occur, the probabil-
ities must be assigned on the basis of prior knowledge or experimental evidence.
For example, if a coin is not balanced, we could estimate the probabilities of heads
and tails by tossing the coin a large number of times and recording the outcomes.
According to the relative frequency definition of probability, the true probabil-
ities would be the fractions of heads and tails that occur in the long run. Another
intuitive way of understanding probability is the indifference approach. For in-
stance, if you have a die that you believe is balanced, then using this indifference
approach, you determine that the probability that each of the six sides will show
up after a throw is 1/6.
To find a numerical value that represents adequately the probability of winning
at tennis, we must depend on our past performance at the game as well as that of
the opponent and, to some extent, our belief in our ability to win. Similarly, to
find the probability that a horse will win a race, we must arrive at a probability
based on the previous records of all the horses entered in the race as well as the
records of the jockeys riding the horses. Intuition would undoubtedly also play a
part in determining the size of the bet that we might be willing to wager. The
use of intuition, personal beliefs, and other indirect information in arriving at
probabilities is referred to as the subjective definition of probability.
In most of the applications of probability in this book, the relative frequency
interpretation of probability is the operative one. Its foundation is the statistical
experiment rather than subjectivity, and it is best viewed as the limiting relative
frequency. As a result, many applications of probability in science and engineer-
ing must be based on experiments that can be repeated. Less objective notions of
probability are encountered when we assign probabilities based on prior informa-
tion and opinions, as in “There is a good chance that the Giants will lose the Super
Bowl.” When opinions and prior information differ from individual to individual,
subjective probability becomes the relevant resource. In Bayesian statistics (see
Chapter 18), a more subjective interpretation of probability will be used, based on
an elicitation of prior probability information.
2.5 Additive Rules
Often it is easiest to calculate the probability of some event from known prob-
abilities of other events. This may well be true if the event in question can be
represented as the union of two other events or as the complement of some event.
Several important laws that frequently simplify the computation of probabilities
follow. The first, called the additive rule, applies to unions of events.
Theorem 2.7: If A and B are two events, then
P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
Figure 2.7: Additive rule of probability (Venn diagram of events A and B, with overlap A ∩ B, in S).
Proof: Consider the Venn diagram in Figure 2.7. The P(A ∪ B) is the sum of the prob-
abilities of the sample points in A ∪ B. Now P(A) + P(B) is the sum of all
the probabilities in A plus the sum of all the probabilities in B. Therefore, we
have added the probabilities in (A ∩ B) twice. Since these probabilities add up
to P(A ∩ B), we must subtract this probability once to obtain the sum of the
probabilities in A ∪ B.
Corollary 2.1: If A and B are mutually exclusive, then
P(A ∪ B) = P(A) + P(B).
Corollary 2.1 is an immediate result of Theorem 2.7, since if A and B are
mutually exclusive, A ∩ B = φ and then P(A ∩ B) = P(φ) = 0. In general, we can
write Corollary 2.2.
Corollary 2.2: If A1, A2, . . . , An are mutually exclusive, then
P(A1 ∪ A2 ∪ · · · ∪ An) = P(A1) + P(A2) + · · · + P(An).
A collection of events {A1, A2, . . . , An} of a sample space S is called a partition
of S if A1, A2, . . . , An are mutually exclusive and A1 ∪ A2 ∪ · · · ∪ An = S. Thus,
we have
Corollary 2.3: If A1, A2, . . . , An is a partition of sample space S, then
P(A1 ∪ A2 ∪ · · · ∪ An) = P(A1) + P(A2) + · · · + P(An) = P(S) = 1.
As one might expect, Theorem 2.7 extends in an analogous fashion.
Theorem 2.8: For three events A, B, and C,
P(A ∪ B ∪ C) = P(A) + P(B) + P(C)
− P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).
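Theorem 2.8 can be verified on any finite, equally likely sample space by brute-force counting. The following Python sketch (not part of the original text, with arbitrarily chosen events on a fair die) checks both sides of the identity exactly:

```python
from fractions import Fraction

# Equally likely sample space (a fair die) and three arbitrary events.
S = {1, 2, 3, 4, 5, 6}
A, B, C = {1, 2, 3}, {2, 4, 6}, {3, 6}

def P(event):
    # Rule 2.3: probability is (favorable outcomes) / (total outcomes).
    return Fraction(len(event & S), len(S))

lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(A & C) - P(B & C)
       + P(A & B & C))
assert lhs == rhs
print(lhs)  # 5/6
```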
Example 2.29: John is going to graduate from an industrial engineering department in a university
by the end of the semester. After being interviewed at two companies he likes,
he assesses that his probability of getting an offer from company A is 0.8, and
his probability of getting an offer from company B is 0.6. If he believes that
the probability that he will get offers from both companies is 0.5, what is the
probability that he will get at least one offer from these two companies?
Solution: Using the additive rule, we have
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.8 + 0.6 − 0.5 = 0.9.
Example 2.30: What is the probability of getting a total of 7 or 11 when a pair of fair dice is
tossed?
Solution: Let A be the event that 7 occurs and B the event that 11 comes up. Now, a total
of 7 occurs for 6 of the 36 sample points, and a total of 11 occurs for only 2 of the
sample points. Since all sample points are equally likely, we have P(A) = 1/6 and
P(B) = 1/18. The events A and B are mutually exclusive, since a total of 7 and
11 cannot both occur on the same toss. Therefore,
P(A ∪ B) = P(A) + P(B) = 1/6 + 1/18 = 2/9.
This result could also have been obtained by counting the total number of points
for the event A ∪ B, namely 8, and writing
P(A ∪ B) = n/N = 8/36 = 2/9.
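Both routes in Example 2.30 can be confirmed by enumerating the 36 equally likely outcomes; a short Python sketch (not part of the original text):

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # 36 equally likely outcomes

A = [r for r in rolls if sum(r) == 7]    # total of 7: 6 outcomes
B = [r for r in rolls if sum(r) == 11]   # total of 11: 2 outcomes

# A and B are mutually exclusive, so the additive rule reduces to a sum.
P_union = Fraction(len(A) + len(B), len(rolls))
print(P_union)  # 2/9
```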
Theorem 2.7 and its three corollaries should help the reader gain more insight
into probability and its interpretation. Corollaries 2.1 and 2.2 suggest the very
intuitive result dealing with the probability of occurrence of at least one of a number
of events, no two of which can occur simultaneously. The probability that at least
one occurs is the sum of the probabilities of occurrence of the individual events.
The third corollary simply states that the highest value of a probability (unity) is
assigned to the entire sample space S.
Example 2.31: If the probabilities are, respectively, 0.09, 0.15, 0.21, and 0.23 that a person pur-
chasing a new automobile will choose the color green, white, red, or blue, what is
the probability that a given buyer will purchase a new automobile that comes in
one of those colors?
Solution: Let G, W, R, and B be the events that a buyer selects, respectively, a green,
white, red, or blue automobile. Since these four events are mutually exclusive, the
probability is
P(G ∪ W ∪ R ∪ B) = P(G) + P(W) + P(R) + P(B)
= 0.09 + 0.15 + 0.21 + 0.23 = 0.68.
Often it is more difficult to calculate the probability that an event occurs than
it is to calculate the probability that the event does not occur. Should this be the
case for some event A, we simply find P(A′) first and then, using Theorem 2.9,
find P(A) by subtraction.
Theorem 2.9: If A and A′ are complementary events, then
P(A) + P(A′) = 1.
Proof: Since A ∪ A′ = S and the sets A and A′ are disjoint,
1 = P(S) = P(A ∪ A′) = P(A) + P(A′).
Example 2.32: If the probabilities that an automobile mechanic will service 3, 4, 5, 6, 7, or 8 or
more cars on any given workday are, respectively, 0.12, 0.19, 0.28, 0.24, 0.10, and
0.07, what is the probability that he will service at least 5 cars on his next day at
work?
Solution: Let E be the event that at least 5 cars are serviced. Now, P(E) = 1 − P(E′),
where E′ is the event that fewer than 5 cars are serviced. Since
P(E′) = 0.12 + 0.19 = 0.31,
it follows from Theorem 2.9 that
P(E) = 1 − 0.31 = 0.69.
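The complement calculation in Example 2.32 is easily mirrored in code; here is a minimal Python sketch (not part of the original text):

```python
# Probabilities of servicing 3, 4, 5, 6, 7, or 8-or-more cars (Example 2.32).
p = {3: 0.12, 4: 0.19, 5: 0.28, 6: 0.24, 7: 0.10, 8: 0.07}
assert abs(sum(p.values()) - 1.0) < 1e-9  # a valid assignment sums to 1

# Theorem 2.9: P(at least 5) = 1 - P(fewer than 5).
p_at_least_5 = 1 - (p[3] + p[4])
print(round(p_at_least_5, 2))  # 0.69
```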
Example 2.33: Suppose the manufacturer’s specifications for the length of a certain type of com-
puter cable are 2000 ± 10 millimeters. In this industry, it is known that small cable
is just as likely to be defective (not meeting specifications) as large cable. That is,
the probability of randomly producing a cable with length exceeding 2010 millime-
ters is equal to the probability of producing a cable with length smaller than 1990
millimeters. The probability that the production procedure meets specifications is
known to be 0.99.
(a) What is the probability that a cable selected randomly is too large?
(b) What is the probability that a randomly selected cable is larger than 1990
millimeters?
Solution: Let M be the event that a cable meets specifications. Let S and L be the events
that the cable is too small and too large, respectively. Then
(a) P(M) = 0.99 and P(S) = P(L) = (1 − 0.99)/2 = 0.005.
(b) Denoting by X the length of a randomly selected cable, we have
P(1990 ≤ X ≤ 2010) = P(M) = 0.99.
Since P(X ≥ 2010) = P(L) = 0.005,
P(X ≥ 1990) = P(M) + P(L) = 0.995.
This also can be solved by using Theorem 2.9:
P(X ≥ 1990) + P(X < 1990) = 1.
Thus, P(X ≥ 1990) = 1 − P(S) = 1 − 0.005 = 0.995.
Exercises
2.49 Find the errors in each of the following state-
ments:
(a) The probabilities that an automobile salesperson
will sell 0, 1, 2, or 3 cars on any given day in Febru-
ary are, respectively, 0.19, 0.38, 0.29, and 0.15.
(b) The probability that it will rain tomorrow is 0.40,
and the probability that it will not rain tomorrow
is 0.52.
(c) The probabilities that a printer will make 0, 1, 2,
3, or 4 or more mistakes in setting a document are,
respectively, 0.19, 0.34, −0.25, 0.43, and 0.29.
(d) On a single draw from a deck of playing cards, the
probability of selecting a heart is 1/4, the probabil-
ity of selecting a black card is 1/2, and the proba-
bility of selecting both a heart and a black card is
1/8.
2.50 Assuming that all elements of S in Exercise 2.8
on page 42 are equally likely to occur, find
(a) the probability of event A;
(b) the probability of event C;
(c) the probability of event A ∩ C.
2.51 A box contains 500 envelopes, of which 75 con-
tain $100 in cash, 150 contain $25, and 275 contain
$10. An envelope may be purchased for $25. What is
the sample space for the different amounts of money?
Assign probabilities to the sample points and then find
the probability that the first envelope purchased con-
tains less than $100.
2.52 Suppose that in a senior college class of 500 stu-
dents it is found that 210 smoke, 258 drink alcoholic
beverages, 216 eat between meals, 122 smoke and drink
alcoholic beverages, 83 eat between meals and drink
alcoholic beverages, 97 smoke and eat between meals,
and 52 engage in all three of these bad health practices.
If a member of this senior class is selected at random,
find the probability that the student
(a) smokes but does not drink alcoholic beverages;
(b) eats between meals and drinks alcoholic beverages
but does not smoke;
(c) neither smokes nor eats between meals.
2.53 The probability that an American industry will
locate in Shanghai, China, is 0.7, the probability that
it will locate in Beijing, China, is 0.4, and the proba-
bility that it will locate in either Shanghai or Beijing or
both is 0.8. What is the probability that the industry
will locate
(a) in both cities?
(b) in neither city?
2.54 From past experience, a stockbroker believes
that under present economic conditions a customer will
invest in tax-free bonds with a probability of 0.6, will
invest in mutual funds with a probability of 0.3, and
will invest in both tax-free bonds and mutual funds
with a probability of 0.15. At this time, find the prob-
ability that a customer will invest
(a) in either tax-free bonds or mutual funds;
(b) in neither tax-free bonds nor mutual funds.
2.55 If each coded item in a catalog begins with 3
distinct letters followed by 4 distinct nonzero digits,
find the probability of randomly selecting one of these
coded items with the first letter a vowel and the last
digit even.
2.56 An automobile manufacturer is concerned about
a possible recall of its best-selling four-door sedan. If
there were a recall, there is a probability of 0.25 of a
defect in the brake system, 0.18 of a defect in the trans-
mission, 0.17 of a defect in the fuel system, and 0.40 of
a defect in some other area.
(a) What is the probability that the defect is the brakes
or the fueling system if the probability of defects in
both systems simultaneously is 0.15?
(b) What is the probability that there are no defects
in either the brakes or the fueling system?
2.57 If a letter is chosen at random from the English
alphabet, find the probability that the letter
(a) is a vowel exclusive of y;
(b) is listed somewhere ahead of the letter j;
(c) is listed somewhere after the letter g.
2.58 A pair of fair dice is tossed. Find the probability
of getting
(a) a total of 8;
(b) at most a total of 5.
2.59 In a poker hand consisting of 5 cards, find the
probability of holding
(a) 3 aces;
(b) 4 hearts and 1 club.
2.60 If 3 books are picked at random from a shelf con-
taining 5 novels, 3 books of poems, and a dictionary,
what is the probability that
(a) the dictionary is selected?
(b) 2 novels and 1 book of poems are selected?
2.61 In a high school graduating class of 100 stu-
dents, 54 studied mathematics, 69 studied history, and
35 studied both mathematics and history. If one of
these students is selected at random, find the proba-
bility that
(a) the student took mathematics or history;
(b) the student did not take either of these subjects;
(c) the student took history but not mathematics.
2.62 Dom’s Pizza Company uses taste testing and
statistical analysis of the data prior to marketing any
new product. Consider a study involving three types
of crusts (thin, thin with garlic and oregano, and thin
with bits of cheese). Dom’s is also studying three
sauces (standard, a new sauce with more garlic, and
a new sauce with fresh basil).
(a) How many combinations of crust and sauce are in-
volved?
(b) What is the probability that a judge will get a plain
thin crust with a standard sauce for his first taste
test?
2.63 According to Consumer Digest (July/August
1996), the probable location of personal computers
(PC) in the home is as follows:
Adult bedroom: 0.03
Child bedroom: 0.15
Other bedroom: 0.14
Office or den: 0.40
Other rooms: 0.28
(a) What is the probability that a PC is in a bedroom?
(b) What is the probability that it is not in a bedroom?
(c) Suppose a household is selected at random from
households with a PC; in what room would you
expect to find a PC?
2.64 Interest centers around the life of an electronic
component. Suppose it is known that the probabil-
ity that the component survives for more than 6000
hours is 0.42. Suppose also that the probability that
the component survives no longer than 4000 hours is
0.04.
(a) What is the probability that the life of the compo-
nent is less than or equal to 6000 hours?
(b) What is the probability that the life is greater than
4000 hours?
2.65 Consider the situation of Exercise 2.64. Let A
be the event that the component fails a particular test
and B be the event that the component displays strain
but does not actually fail. Event A occurs with prob-
ability 0.20, and event B occurs with probability 0.35.
(a) What is the probability that the component does
not fail the test?
(b) What is the probability that the component works
perfectly well (i.e., neither displays strain nor fails
the test)?
(c) What is the probability that the component either
fails or shows strain in the test?
2.66 Factory workers are constantly encouraged to
practice zero tolerance when it comes to accidents in
factories. Accidents can occur because the working en-
vironment or conditions themselves are unsafe. On the
other hand, accidents can occur due to carelessness
or so-called human error. In addition, the worker’s
shift, 7:00 A.M.–3:00 P.M. (day shift), 3:00 P.M.–11:00
P.M. (evening shift), or 11:00 P.M.–7:00 A.M. (graveyard
shift), may be a factor. During the last year, 300 acci-
dents have occurred. The percentages of the accidents
for the condition combinations are as follows:
Unsafe Human
Shift Conditions Error
Day 5% 32%
Evening 6% 25%
Graveyard 2% 30%
If an accident report is selected randomly from the 300
reports,
(a) what is the probability that the accident occurred
on the graveyard shift?
(b) what is the probability that the accident occurred
due to human error?
(c) what is the probability that the accident occurred
due to unsafe conditions?
(d) what is the probability that the accident occurred
on either the evening or the graveyard shift?
2.67 Consider the situation of Example 2.32 on page
58.
(a) What is the probability that no more than 4 cars
will be serviced by the mechanic?
(b) What is the probability that he will service fewer
than 8 cars?
(c) What is the probability that he will service either
3 or 4 cars?
2.68 Interest centers around the nature of an oven
purchased at a particular department store. It can be
either a gas or an electric oven. Consider the decisions
made by six distinct customers.
(a) Suppose that the probability is 0.40 that at most
two of these individuals purchase an electric oven.
What is the probability that at least three purchase
the electric oven?
(b) Suppose it is known that the probability that all
six purchase the electric oven is 0.007 while 0.104 is
the probability that all six purchase the gas oven.
What is the probability that at least one of each
type is purchased?
2.69 It is common in many industrial areas to use
a filling machine to fill boxes full of product. This
occurs in the food industry as well as other areas in
which the product is used in the home, for example,
detergent. These machines are not perfect, and indeed
they may A, fill to specification, B, underfill, and C,
overfill. Generally, the practice of underfilling is that
which one hopes to avoid. Let P(B) = 0.001 while
P(A) = 0.990.
(a) Give P(C).
(b) What is the probability that the machine does not
underfill?
(c) What is the probability that the machine either
overfills or underfills?
2.70 Consider the situation of Exercise 2.69. Suppose
50,000 boxes of detergent are produced per week and
suppose also that those underfilled are “sent back,”
with customers requesting reimbursement of the pur-
chase price. Suppose also that the cost of production
is known to be $4.00 per box while the purchase price
is $4.50 per box.
(a) What is the weekly profit under the condition of no
defective boxes?
(b) What is the loss in profit expected due to under-
filling?
2.71 As the situation of Exercise 2.69 might suggest,
statistical procedures are often used for control of qual-
ity (i.e., industrial quality control). At times, the
weight of a product is an important variable to con-
trol. Specifications are given for the weight of a certain
packaged product, and a package is rejected if it is ei-
ther too light or too heavy. Historical data suggest that
0.95 is the probability that the product meets weight
specifications whereas 0.002 is the probability that the
product is too light. For each single packaged product,
the manufacturer invests $20.00 in production and the
purchase price for the consumer is $25.00.
(a) What is the probability that a package chosen ran-
domly from the production line is too heavy?
(b) For each 10,000 packages sold, what profit is re-
ceived by the manufacturer if all packages meet
weight specification?
(c) Assuming that all defective packages are rejected
and rendered worthless, how much is the profit re-
duced on 10,000 packages due to failure to meet
weight specification?
2.72 Prove that
P(A′ ∩ B′) = 1 + P(A ∩ B) − P(A) − P(B).
2.6 Conditional Probability, Independence, and the Product Rule
One very important concept in probability theory is conditional probability. In
some applications, the practitioner is interested in the probability structure under
certain restrictions. For instance, in epidemiology, rather than studying the chance
that a person from the general population has diabetes, it might be of more interest
to know this probability for a distinct group such as Asian women in the age range
of 35 to 50 or Hispanic men in the age range of 40 to 60. This type of probability
is called a conditional probability.
Conditional Probability
The probability of an event B occurring when it is known that some event A
has occurred is called a conditional probability and is denoted by P(B|A). The
symbol P(B|A) is usually read “the probability that B occurs given that A occurs”
or simply “the probability of B, given A.”
Consider the event B of getting a perfect square when a die is tossed. The die
is constructed so that the even numbers are twice as likely to occur as the odd
numbers. Based on the sample space S = {1, 2, 3, 4, 5, 6}, with probabilities of
1/9 and 2/9 assigned, respectively, to the odd and even numbers, the probability
of B occurring is 1/3. Now suppose that it is known that the toss of the die
resulted in a number greater than 3. We are now dealing with a reduced sample
space A = {4, 5, 6}, which is a subset of S. To find the probability that B occurs,
relative to the space A, we must first assign new probabilities to the elements of
A proportional to their original probabilities such that their sum is 1. Assigning a
probability of w to the odd number in A and a probability of 2w to the two even
numbers, we have 5w = 1, or w = 1/5. Relative to the space A, we find that B
contains the single element 4. Denoting this event by the symbol B|A, we write
B|A = {4}, and hence
P(B|A) = 2/5.
This example illustrates that events may have different probabilities when consid-
ered relative to different sample spaces.
We can also write
P(B|A) = 2/5 = (2/9)/(5/9) = P(A ∩ B)/P(A),
where P(A ∩ B) and P(A) are found from the original sample space S. In other
words, a conditional probability relative to a subspace A of S may be calculated
directly from the probabilities assigned to the elements of the original sample space
S.
Definition 2.10: The conditional probability of B, given A, denoted by P(B|A), is defined by
P(B|A) = P(A ∩ B)/P(A), provided P(A) > 0.
As an additional illustration, suppose that our sample space S is the population
of adults in a small town who have completed the requirements for a college degree.
We shall categorize them according to gender and employment status. The data
are given in Table 2.1.
Table 2.1: Categorization of the Adults in a Small Town

         Employed   Unemployed   Total
Male        460          40       500
Female      140         260       400
Total       600         300       900
One of these individuals is to be selected at random for a tour throughout the
country to publicize the advantages of establishing new industries in the town. We
shall be concerned with the following events:
M: a man is chosen,
E: the one chosen is employed.
Using the reduced sample space E, we find that
P(M|E) = 460/600 = 23/30.
Let n(A) denote the number of elements in any set A. Using this notation,
since each adult has an equal chance of being selected, we can write
P(M|E) = n(E ∩ M)/n(E) = [n(E ∩ M)/n(S)] / [n(E)/n(S)] = P(E ∩ M)/P(E),
where P(E ∩ M) and P(E) are found from the original sample space S. To verify
this result, note that
P(E) = 600/900 = 2/3 and P(E ∩ M) = 460/900 = 23/45.
Hence,
P(M|E) = (23/45)/(2/3) = 23/30,
as before.
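Both views of P(M|E), the reduced-sample-space count and Definition 2.10, can be checked against the counts of Table 2.1 with a short Python sketch (not part of the original text):

```python
from fractions import Fraction

# Counts from Table 2.1: (gender, employment status) -> count.
n = {("M", "E"): 460, ("M", "U"): 40, ("F", "E"): 140, ("F", "U"): 260}
total = sum(n.values())  # 900 adults in all

# Reduced-sample-space view: restrict attention to the 600 employed adults.
n_E = n[("M", "E")] + n[("F", "E")]
P_M_given_E = Fraction(n[("M", "E")], n_E)

# Definition 2.10 view: P(M|E) = P(E ∩ M) / P(E), from the full space S.
P_EM = Fraction(n[("M", "E")], total)
P_E = Fraction(n_E, total)
assert P_M_given_E == P_EM / P_E

print(P_M_given_E)  # 23/30
```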
Example 2.34: The probability that a regularly scheduled flight departs on time is P(D) = 0.83;
the probability that it arrives on time is P(A) = 0.82; and the probability that it
departs and arrives on time is P(D ∩ A) = 0.78. Find the probability that a plane
(a) arrives on time, given that it departed on time, and (b) departed on time, given
that it has arrived on time.
Solution: Using Definition 2.10, we have the following.
(a) The probability that a plane arrives on time, given that it departed on time,
is
P(A|D) = P(D ∩ A)/P(D) = 0.78/0.83 = 0.94.
(b) The probability that a plane departed on time, given that it has arrived on
time, is
P(D|A) = P(D ∩ A)/P(A) = 0.78/0.82 = 0.95.
The notion of conditional probability provides the capability of reevaluating the
idea of probability of an event in light of additional information, that is, when it
is known that another event has occurred. The probability P(A|B) is an updating
of P(A) based on the knowledge that event B has occurred. In Example 2.34, it
is important to know the probability that the flight arrives on time. One is given
the information that the flight did not depart on time. Armed with this additional
information, one can calculate the more pertinent probability P(A|D′), that is,
the probability that it arrives on time, given that it did not depart on time. In
many situations, the conclusions drawn from observing the more important condi-
tional probability change the picture entirely. In this example, the computation of
P(A|D′) is
P(A|D′) = P(A ∩ D′)/P(D′) = (0.82 − 0.78)/0.17 = 0.24.
As a result, the probability of an on-time arrival is diminished severely in the
presence of the additional information.
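All three conditional probabilities for the flight of Example 2.34 follow from the same three inputs; a minimal Python sketch (not part of the original text):

```python
P_D = 0.83   # departs on time
P_A = 0.82   # arrives on time
P_DA = 0.78  # departs and arrives on time

P_A_given_D = P_DA / P_D                    # Definition 2.10
P_D_given_A = P_DA / P_A
# P(A ∩ D') = P(A) - P(A ∩ D), and P(D') = 1 - P(D).
P_A_given_not_D = (P_A - P_DA) / (1 - P_D)

print(round(P_A_given_D, 2),      # 0.94
      round(P_D_given_A, 2),      # 0.95
      round(P_A_given_not_D, 2))  # 0.24
```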
Example 2.35: The concept of conditional probability has countless uses in both industrial and
biomedical applications. Consider an industrial process in the textile industry in
which strips of a particular type of cloth are being produced. These strips can be
defective in two ways, length and nature of texture. For the case of the latter, the
process of identification is very complicated. It is known from historical information
on the process that 10% of strips fail the length test, 5% fail the texture test, and
only 0.8% fail both tests. If a strip is selected randomly from the process and a
quick measurement identifies it as failing the length test, what is the probability
that it is texture defective?
Solution: Consider the events
L: length defective, T: texture defective.
Given that the strip is length defective, the probability that this strip is texture
defective is given by
P(T|L) = P(T ∩ L)/P(L) = 0.008/0.1 = 0.08.
Thus, knowing the conditional probability provides considerably more information
than merely knowing P(T).
Independent Events
In the die-tossing experiment discussed on page 62, we note that P(B|A) = 2/5
whereas P(B) = 1/3. That is, P(B|A) ≠ P(B), indicating that B depends on
A. Now consider an experiment in which 2 cards are drawn in succession from an
ordinary deck, with replacement. The events are defined as
A: the first card is an ace,
B: the second card is a spade.
Since the first card is replaced, our sample space for both the first and the second
draw consists of 52 cards, containing 4 aces and 13 spades. Hence,
P(B|A) = 13/52 = 1/4 and P(B) = 13/52 = 1/4.
That is, P(B|A) = P(B). When this is true, the events A and B are said to be
independent.
Although conditional probability allows for an alteration of the probability of an
event in the light of additional material, it also enables us to understand better the
very important concept of independence or, in the present context, independent
events. In the airport illustration in Example 2.34, P(A|D) differs from P(A).
This suggests that the occurrence of D influenced A, and this is certainly expected
in this illustration. However, consider the situation where we have events A and
B and
P(A|B) = P(A).
In other words, the occurrence of B had no impact on the odds of occurrence of A.
Here the occurrence of A is independent of the occurrence of B. The importance
of the concept of independence cannot be overemphasized. It plays a vital role in
material in virtually all chapters in this book and in all areas of applied statistics.
Definition 2.11: Two events A and B are independent if and only if
P(B|A) = P(B) or P(A|B) = P(A),
assuming the existence of the conditional probabilities. Otherwise, A and B are
dependent.
The condition P(B|A) = P(B) implies that P(A|B) = P(A), and conversely.
For the card-drawing experiments, where we showed that P(B|A) = P(B) = 1/4,
we also can see that P(A|B) = P(A) = 1/13.
The Product Rule, or the Multiplicative Rule
Multiplying the formula in Definition 2.10 by P(A), we obtain the following im-
portant multiplicative rule (or product rule), which enables us to calculate
the probability that two events will both occur.
Theorem 2.10: If in an experiment the events A and B can both occur, then
P(A ∩ B) = P(A)P(B|A), provided P(A) > 0.
Thus, the probability that both A and B occur is equal to the probability that
A occurs multiplied by the conditional probability that B occurs, given that A
occurs. Since the events A ∩ B and B ∩ A are equivalent, it follows from Theorem
2.10 that we can also write
P(A ∩ B) = P(B ∩ A) = P(B)P(A|B).
In other words, it does not matter which event is referred to as A and which event
is referred to as B.
Example 2.36: Suppose that we have a fuse box containing 20 fuses, of which 5 are defective. If
2 fuses are selected at random and removed from the box in succession without
replacing the first, what is the probability that both fuses are defective?
Solution: We shall let A be the event that the first fuse is defective and B the event that the
second fuse is defective; then we interpret A ∩ B as the event that A occurs and
then B occurs after A has occurred. The probability of first removing a defective
fuse is 1/4; then the probability of removing a second defective fuse from the
remaining 4 is 4/19. Hence,
P(A ∩ B) = (1/4)(4/19) = 1/19.
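The without-replacement logic of Example 2.36 is a one-line application of the product rule; here is a quick Python check (not part of the original text):

```python
from fractions import Fraction

defective, total = 5, 20  # fuse box of Example 2.36

# Product rule (Theorem 2.10): P(A ∩ B) = P(A) P(B|A).
P_A = Fraction(defective, total)                   # first fuse defective: 5/20 = 1/4
P_B_given_A = Fraction(defective - 1, total - 1)   # second defective, given the first: 4/19

print(P_A * P_B_given_A)  # 1/19
```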
Example 2.37: One bag contains 4 white balls and 3 black balls, and a second bag contains 3 white
balls and 5 black balls. One ball is drawn from the first bag and placed unseen in
the second bag. What is the probability that a ball now drawn from the second
bag is black?
Solution: Let B1, B2, and W1 represent, respectively, the drawing of a black ball from bag 1,
a black ball from bag 2, and a white ball from bag 1. We are interested in the union
of the mutually exclusive events B1 ∩ B2 and W1 ∩ B2. The various possibilities
and their probabilities are illustrated in Figure 2.8. Now
P[(B1 ∩ B2) or (W1 ∩ B2)] = P(B1 ∩ B2) + P(W1 ∩ B2)
= P(B1)P(B2|B1) + P(W1)P(B2|W1)
= (3/7)(6/9) + (4/7)(5/9) = 38/63.
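The two branches of Example 2.37 can be summed in code with exact fractions; a minimal Python sketch (not part of the original text):

```python
from fractions import Fraction

# Bag 1: 4 white, 3 black. One ball moves unseen to bag 2 (originally 3W, 5B).
P_B1 = Fraction(3, 7)           # a black ball is transferred
P_W1 = Fraction(4, 7)           # a white ball is transferred
P_B2_given_B1 = Fraction(6, 9)  # bag 2 becomes 3W, 6B
P_B2_given_W1 = Fraction(5, 9)  # bag 2 becomes 4W, 5B

# Sum over the mutually exclusive branches of the tree diagram.
P_B2 = P_B1 * P_B2_given_B1 + P_W1 * P_B2_given_W1
print(P_B2)  # 38/63
```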
If, in Example 2.36, the first fuse is replaced and the fuses thoroughly rear-
ranged before the second is removed, then the probability of a defective fuse on the
second selection is still 1/4; that is, P(B|A) = P(B) and the events A and B are
independent. When this is true, we can substitute P(B) for P(B|A) in Theorem
2.10 to obtain the following special multiplicative rule.
Figure 2.8: Tree diagram for Example 2.37. From Bag 1 (4W, 3B), a black ball is transferred with probability 3/7, leaving Bag 2 with 3W, 6B; a white ball is transferred with probability 4/7, leaving Bag 2 with 4W, 5B. The path probabilities are P(B1 ∩ B2) = (3/7)(6/9), P(B1 ∩ W2) = (3/7)(3/9), P(W1 ∩ B2) = (4/7)(5/9), and P(W1 ∩ W2) = (4/7)(4/9).
Theorem 2.11: Two events A and B are independent if and only if
P(A ∩ B) = P(A)P(B).
Therefore, to obtain the probability that two independent events will both occur,
we simply find the product of their individual probabilities.
Example 2.38: A small town has one fire engine and one ambulance available for emergencies. The
probability that the fire engine is available when needed is 0.98, and the probability
that the ambulance is available when called is 0.92. In the event of an injury
resulting from a burning building, find the probability that both the ambulance
and the fire engine will be available, assuming they operate independently.
Solution: Let A and B represent the respective events that the fire engine and the ambulance
are available. Then
P(A ∩ B) = P(A)P(B) = (0.98)(0.92) = 0.9016.
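Theorem 2.11 reduces this to a one-line computation; a small sketch (names are illustrative):

```python
# Independent events: fire engine (A) and ambulance (B) availability
p_fire, p_ambulance = 0.98, 0.92

# Special multiplicative rule: P(A ∩ B) = P(A)P(B)
p_both = p_fire * p_ambulance
print(round(p_both, 4))  # 0.9016
```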
Example 2.39: An electrical system consists of four components as illustrated in Figure 2.9. The
system works if components A and B work and either of the components C or D
works. The reliability (probability of working) of each component is also shown
in Figure 2.9. Find the probability that (a) the entire system works and (b) the
component C does not work, given that the entire system works. Assume that the
four components work independently.
Solution: In this configuration of the system, A, B, and the subsystem C and D constitute
a serial circuit system, whereas the subsystem C and D itself is a parallel circuit
system.
(a) Clearly the probability that the entire system works can be calculated as
follows:
P[A ∩ B ∩ (C ∪ D)] = P(A)P(B)P(C ∪ D) = P(A)P(B)[1 − P(C' ∩ D')]
= P(A)P(B)[1 − P(C')P(D')]
= (0.9)(0.9)[1 − (1 − 0.8)(1 − 0.8)] = 0.7776.
The equalities above hold because of the independence among the four com-
ponents.
(b) To calculate the conditional probability in this case, notice that
P = P(the system works but C does not work) / P(the system works)
= P(A ∩ B ∩ C' ∩ D) / P(the system works)
= (0.9)(0.9)(1 − 0.8)(0.8) / 0.7776 = 0.1667.
[Figure 2.9: An electrical system for Example 2.39. Components A (0.9) and B (0.9) are in series with a parallel subsystem consisting of C (0.8) and D (0.8).]
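The series/parallel reasoning of Example 2.39 can be sketched as a quick check, with the reliabilities taken from Figure 2.9:

```python
# Component reliabilities from Figure 2.9
pA = pB = 0.9
pC = pD = 0.8

# Parallel subsystem works unless both C and D fail
p_parallel = 1 - (1 - pC) * (1 - pD)

# (a) Series: A, B, and the parallel subsystem must all work
p_system = pA * pB * p_parallel

# (b) C fails but the system still works: A ∩ B ∩ C' ∩ D
p_c_out = pA * pB * (1 - pC) * pD / p_system

print(round(p_system, 4), round(p_c_out, 4))  # 0.7776 0.1667
```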
The multiplicative rule can be extended to situations involving more than two events.
Theorem 2.12: If, in an experiment, the events A1, A2, . . . , Ak can occur, then
P(A1 ∩ A2 ∩ · · · ∩ Ak)
= P(A1)P(A2|A1)P(A3|A1 ∩ A2) · · · P(Ak|A1 ∩ A2 ∩ · · · ∩ Ak−1).
If the events A1, A2, . . . , Ak are independent, then
P(A1 ∩ A2 ∩ · · · ∩ Ak) = P(A1)P(A2) · · · P(Ak).
Example 2.40: Three cards are drawn in succession, without replacement, from an ordinary deck
of playing cards. Find the probability that the event A1 ∩ A2 ∩ A3 occurs, where
A1 is the event that the first card is a red ace, A2 is the event that the second card
is a 10 or a jack, and A3 is the event that the third card is greater than 3 but less
than 7.
Solution: First we define the events
A1: the first card is a red ace,
A2: the second card is a 10 or a jack,
A3: the third card is greater than 3 but less than 7.
Now
P(A1) = 2/52, P(A2|A1) = 8/51, P(A3|A1 ∩ A2) = 12/50,
and hence, by Theorem 2.12,
P(A1 ∩ A2 ∩ A3) = P(A1)P(A2|A1)P(A3|A1 ∩ A2)
= (2/52)(8/51)(12/50) = 8/5525.
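The chain of conditional probabilities in Theorem 2.12 multiplies out directly; an exact-arithmetic sketch:

```python
from fractions import Fraction as F

# Sequential draws without replacement (Example 2.40)
p_A1 = F(2, 52)          # first card a red ace
p_A2_given_A1 = F(8, 51)     # second card a 10 or a jack
p_A3_given_A1A2 = F(12, 50)  # third card a 4, 5, or 6

p_all = p_A1 * p_A2_given_A1 * p_A3_given_A1A2
print(p_all)  # 8/5525
```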
The property of independence stated in Theorem 2.11 can be extended to deal
with more than two events. Consider, for example, the case of three events A, B,
and C. It is not sufficient to only have that P(A ∩ B ∩ C) = P(A)P(B)P(C) as a
definition of independence among the three. Suppose A = B and C = φ, the null
set. Although A∩B∩C = φ, which results in P(A∩B∩C) = 0 = P(A)P(B)P(C),
events A and B are not independent. Hence, we have the following definition.
Definition 2.12: A collection of events A = {A1, . . . , An} are mutually independent if for any
subset {Ai1, . . . , Aik} of A, with k ≤ n, we have
P(Ai1 ∩ · · · ∩ Aik) = P(Ai1) · · · P(Aik).
Exercises
2.73 If R is the event that a convict committed armed
robbery and D is the event that the convict pushed
dope, state in words what probabilities are expressed
by
(a) P(R|D);
(b) P(D'|R);
(c) P(R'|D').
2.74 A class in advanced physics is composed of 10
juniors, 30 seniors, and 10 graduate students. The final
grades show that 3 of the juniors, 10 of the seniors, and
5 of the graduate students received an A for the course.
If a student is chosen at random from this class and is
found to have earned an A, what is the probability that
he or she is a senior?
2.75 A random sample of 200 adults are classified be-
low by sex and their level of education attained.
Education Male Female
Elementary 38 45
Secondary 28 50
College 22 17
If a person is picked at random from this group, find
the probability that
(a) the person is a male, given that the person has a
secondary education;
(b) the person does not have a college degree, given
that the person is a female.
2.76 In an experiment to study the relationship of hy-
pertension and smoking habits, the following data are
collected for 180 individuals:
Moderate Heavy
Nonsmokers Smokers Smokers
H 21 36 30
NH 48 26 19
where H and NH in the table stand for Hypertension
and Nonhypertension, respectively. If one of these indi-
viduals is selected at random, find the probability that
the person is
(a) experiencing hypertension, given that the person is
a heavy smoker;
(b) a nonsmoker, given that the person is experiencing
no hypertension.
2.77 In the senior year of a high school graduating
class of 100 students, 42 studied mathematics, 68 stud-
ied psychology, 54 studied history, 22 studied both
mathematics and history, 25 studied both mathematics
and psychology, 7 studied history but neither mathe-
matics nor psychology, 10 studied all three subjects,
and 8 did not take any of the three. Randomly select
a student from the class and find the probabilities of
the following events.
(a) A person enrolled in psychology takes all three sub-
jects.
(b) A person not taking psychology is taking both his-
tory and mathematics.
2.78 A manufacturer of a flu vaccine is concerned
about the quality of its flu serum. Batches of serum are
processed by three different departments having rejec-
tion rates of 0.10, 0.08, and 0.12, respectively. The in-
spections by the three departments are sequential and
independent.
(a) What is the probability that a batch of serum sur-
vives the first departmental inspection but is re-
jected by the second department?
(b) What is the probability that a batch of serum is
rejected by the third department?
2.79 In USA Today (Sept. 5, 1996), the results of a
survey involving the use of sleepwear while traveling
were listed as follows:
Male Female Total
Underwear 0.220 0.024 0.244
Nightgown 0.002 0.180 0.182
Nothing 0.160 0.018 0.178
Pajamas 0.102 0.073 0.175
T-shirt 0.046 0.088 0.134
Other 0.084 0.003 0.087
(a) What is the probability that a traveler is a female
who sleeps in the nude?
(b) What is the probability that a traveler is male?
(c) Assuming the traveler is male, what is the proba-
bility that he sleeps in pajamas?
(d) What is the probability that a traveler is male if
the traveler sleeps in pajamas or a T-shirt?
2.80 The probability that an automobile being filled
with gasoline also needs an oil change is 0.25; the prob-
ability that it needs a new oil filter is 0.40; and the
probability that both the oil and the filter need chang-
ing is 0.14.
(a) If the oil has to be changed, what is the probability
that a new oil filter is needed?
(b) If a new oil filter is needed, what is the probability
that the oil has to be changed?
2.81 The probability that a married man watches a
certain television show is 0.4, and the probability that
a married woman watches the show is 0.5. The proba-
bility that a man watches the show, given that his wife
does, is 0.7. Find the probability that
(a) a married couple watches the show;
(b) a wife watches the show, given that her husband
does;
(c) at least one member of a married couple will watch
the show.
2.82 For married couples living in a certain suburb,
the probability that the husband will vote on a bond
referendum is 0.21, the probability that the wife will
vote on the referendum is 0.28, and the probability that
both the husband and the wife will vote is 0.15. What
is the probability that
(a) at least one member of a married couple will vote?
(b) a wife will vote, given that her husband will vote?
(c) a husband will vote, given that his wife will not
vote?
2.83 The probability that a vehicle entering the Lu-
ray Caverns has Canadian license plates is 0.12; the
probability that it is a camper is 0.28; and the proba-
bility that it is a camper with Canadian license plates
is 0.09. What is the probability that
(a) a camper entering the Luray Caverns has Canadian
license plates?
(b) a vehicle with Canadian license plates entering the
Luray Caverns is a camper?
(c) a vehicle entering the Luray Caverns does not have
Canadian plates or is not a camper?
2.84 The probability that the head of a household is
home when a telemarketing representative calls is 0.4.
Given that the head of the house is home, the proba-
bility that goods will be bought from the company is
0.3. Find the probability that the head of the house is
home and goods are bought from the company.
2.85 The probability that a doctor correctly diag-
noses a particular illness is 0.7. Given that the doctor
makes an incorrect diagnosis, the probability that the
patient files a lawsuit is 0.9. What is the probability
that the doctor makes an incorrect diagnosis and the
patient sues?
2.86 In 1970, 11% of Americans completed four years
of college; 43% of them were women. In 1990, 22% of
Americans completed four years of college; 53% of them
were women (Time, Jan. 19, 1996).
(a) Given that a person completed four years of college
in 1970, what is the probability that the person was
a woman?
(b) What is the probability that a woman finished four
years of college in 1990?
(c) What is the probability that a man had not finished
college in 1990?
2.87 A real estate agent has 8 master keys to open
several new homes. Only 1 master key will open any
given house. If 40% of these homes are usually left
unlocked, what is the probability that the real estate
agent can get into a specific home if the agent selects
3 master keys at random before leaving the office?
2.88 Before the distribution of certain statistical soft-
ware, every fourth compact disk (CD) is tested for ac-
curacy. The testing process consists of running four
independent programs and checking the results. The
failure rates for the four testing programs are, respec-
tively, 0.01, 0.03, 0.02, and 0.01.
(a) What is the probability that a CD was tested and
failed any test?
(b) Given that a CD was tested, what is the probability
that it failed program 2 or 3?
(c) In a sample of 100, how many CDs would you ex-
pect to be rejected?
(d) Given that a CD was defective, what is the proba-
bility that it was tested?
2.89 A town has two fire engines operating indepen-
dently. The probability that a specific engine is avail-
able when needed is 0.96.
(a) What is the probability that neither is available
when needed?
(b) What is the probability that a fire engine is avail-
able when needed?
2.90 Pollution of the rivers in the United States has
been a problem for many years. Consider the following
events:
A: the river is polluted,
B : a sample of water tested detects pollution,
C : fishing is permitted.
Assume P(A) = 0.3, P(B|A) = 0.75, P(B|A') = 0.20,
P(C|A ∩ B) = 0.20, P(C|A' ∩ B) = 0.15, P(C|A ∩ B') = 0.80,
and P(C|A' ∩ B') = 0.90.
(a) Find P(A ∩ B ∩ C).
(b) Find P(B' ∩ C).
(c) Find P(C).
(d) Find the probability that the river is polluted, given
that fishing is permitted and the sample tested did
not detect pollution.
2.91 Find the probability of randomly selecting 4
good quarts of milk in succession from a cooler con-
taining 20 quarts of which 5 have spoiled, by using
(a) the first formula of Theorem 2.12 on page 68;
(b) the formulas of Theorem 2.6 and Rule 2.3 on pages
50 and 54, respectively.
2.92 Suppose the diagram of an electrical system is
as given in Figure 2.10. What is the probability that
the system works? Assume the components fail inde-
pendently.
2.93 A circuit system is given in Figure 2.11. Assume
the components fail independently.
(a) What is the probability that the entire system
works?
(b) Given that the system works, what is the probabil-
ity that the component A is not working?
2.94 In the situation of Exercise 2.93, it is known that
the system does not work. What is the probability that
the component A also does not work?
[Figure 2.10: Diagram for Exercise 2.92, showing components A, B, C, and D with reliabilities 0.9, 0.95, 0.7, and 0.8 as indicated.]
[Figure 2.11: Diagram for Exercise 2.93, showing components A and B (reliability 0.7 each) and components C, D, and E (reliability 0.8 each).]
2.7 Bayes’ Rule
Bayesian statistics is a collection of tools used in a special form of statistical
inference that applies to the analysis of experimental data in many practical
situations in science and engineering. Bayes' rule is one of the most important
rules in probability theory. It is the foundation of Bayesian inference, which will
be discussed in Chapter 18.
Total Probability
Let us now return to the illustration of Section 2.6, where an individual is being
selected at random from the adults of a small town to tour the country and publicize
the advantages of establishing new industries in the town. Suppose that we are
now given the additional information that 36 of those employed and 12 of those
unemployed are members of the Rotary Club. We wish to find the probability of
the event A that the individual selected is a member of the Rotary Club. Referring
to Figure 2.12, we can write A as the union of the two mutually exclusive events
E ∩ A and E' ∩ A. Hence, A = (E ∩ A) ∪ (E' ∩ A), and by Corollary 2.1 of Theorem
2.7, and then Theorem 2.10, we can write
P(A) = P[(E ∩ A) ∪ (E' ∩ A)] = P(E ∩ A) + P(E' ∩ A)
= P(E)P(A|E) + P(E')P(A|E').
[Figure 2.12: Venn diagram for the events A, E, and E', with A split into E ∩ A and E' ∩ A.]
The data of Section 2.6, together with the additional data given above for the set
A, enable us to compute
P(E) = 600/900 = 2/3, P(A|E) = 36/600 = 3/50,
and
P(E') = 1/3, P(A|E') = 12/300 = 1/25.
If we display these probabilities by means of the tree diagram of Figure 2.13, where
the first branch yields the probability P(E)P(A|E) and the second branch yields
the probability P(E')P(A|E'), it follows that
P(A) = (2/3)(3/50) + (1/3)(1/25) = 4/75.
[Figure 2.13: Tree diagram for the data on page 63, using additional information
on page 72. Branches: P(E) = 2/3 with P(A|E) = 3/50, and P(E') = 1/3 with P(A|E') = 1/25.]
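The two-branch total-probability computation above, as an exact-arithmetic check:

```python
from fractions import Fraction as F

# P(A) = P(E)P(A|E) + P(E')P(A|E')
p_A = F(2, 3) * F(3, 50) + F(1, 3) * F(1, 25)
print(p_A)  # 4/75
```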
A generalization of the foregoing illustration to the case where the sample space
is partitioned into k subsets is covered by the following theorem, sometimes called
the theorem of total probability or the rule of elimination.
Theorem 2.13: If the events B1, B2, . . . , Bk constitute a partition of the sample space S such that
P(Bi) ≠ 0 for i = 1, 2, . . . , k, then for any event A of S,
P(A) = Σ_{i=1}^k P(Bi ∩ A) = Σ_{i=1}^k P(Bi)P(A|Bi).
[Figure 2.14: Partitioning the sample space S into B1, B2, B3, B4, B5, . . . , with the event A overlapping the partition.]
74 Chapter 2 Probability
Proof: Consider the Venn diagram of Figure 2.14. The event A is seen to be the union of
the mutually exclusive events
B1 ∩ A, B2 ∩ A, . . . , Bk ∩ A;
that is,
A = (B1 ∩ A) ∪ (B2 ∩ A) ∪ · · · ∪ (Bk ∩ A).
Using Corollary 2.2 of Theorem 2.7 and Theorem 2.10, we have
P(A) = P[(B1 ∩ A) ∪ (B2 ∩ A) ∪ · · · ∪ (Bk ∩ A)]
= P(B1 ∩ A) + P(B2 ∩ A) + · · · + P(Bk ∩ A)
= Σ_{i=1}^k P(Bi ∩ A)
= Σ_{i=1}^k P(Bi)P(A|Bi).
Example 2.41: In a certain assembly plant, three machines, B1, B2, and B3, make 30%, 45%, and
25%, respectively, of the products. It is known from past experience that 2%, 3%,
and 2% of the products made by each machine, respectively, are defective. Now,
suppose that a finished product is randomly selected. What is the probability that
it is defective?
Solution: Consider the following events:
A: the product is defective,
B1: the product is made by machine B1,
B2: the product is made by machine B2,
B3: the product is made by machine B3.
Applying the rule of elimination, we can write
P(A) = P(B1)P(A|B1) + P(B2)P(A|B2) + P(B3)P(A|B3).
Referring to the tree diagram of Figure 2.15, we find that the three branches give
the probabilities
P(B1)P(A|B1) = (0.3)(0.02) = 0.006,
P(B2)P(A|B2) = (0.45)(0.03) = 0.0135,
P(B3)P(A|B3) = (0.25)(0.02) = 0.005,
and hence
P(A) = 0.006 + 0.0135 + 0.005 = 0.0245.
[Figure 2.15: Tree diagram for Example 2.41. Branches: P(B1) = 0.3 with P(A|B1) = 0.02; P(B2) = 0.45 with P(A|B2) = 0.03; P(B3) = 0.25 with P(A|B3) = 0.02.]
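The rule of elimination in Example 2.41 is a weighted sum; a short sketch (the dictionary keys are illustrative):

```python
# Machine shares P(Bi) and defect rates P(A|Bi) from Example 2.41
share = {"B1": 0.30, "B2": 0.45, "B3": 0.25}
defect_rate = {"B1": 0.02, "B2": 0.03, "B3": 0.02}

# Theorem 2.13: P(A) = sum over i of P(Bi) * P(A|Bi)
p_defective = sum(share[b] * defect_rate[b] for b in share)
print(round(p_defective, 4))  # 0.0245
```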
Bayes’ Rule
Instead of asking for P(A) in Example 2.41, by the rule of elimination, suppose
that we now consider the problem of finding the conditional probability P(Bi|A).
In other words, suppose that a product was randomly selected and it is defective.
What is the probability that this product was made by machine Bi? Questions of
this type can be answered by using the following theorem, called Bayes’ rule:
Theorem 2.14: (Bayes' Rule) If the events B1, B2, . . . , Bk constitute a partition of the sample
space S such that P(Bi) ≠ 0 for i = 1, 2, . . . , k, then for any event A in S such
that P(A) ≠ 0,
P(Br|A) = P(Br ∩ A) / Σ_{i=1}^k P(Bi ∩ A) = P(Br)P(A|Br) / Σ_{i=1}^k P(Bi)P(A|Bi)
for r = 1, 2, . . . , k.
Proof: By the definition of conditional probability,
P(Br|A) = P(Br ∩ A) / P(A),
and then using Theorem 2.13 in the denominator, we have
P(Br|A) = P(Br ∩ A) / Σ_{i=1}^k P(Bi ∩ A) = P(Br)P(A|Br) / Σ_{i=1}^k P(Bi)P(A|Bi),
which completes the proof.
Example 2.42: With reference to Example 2.41, if a product was chosen randomly and found to
be defective, what is the probability that it was made by machine B3?
Solution: Using Bayes' rule, we write
P(B3|A) = P(B3)P(A|B3) / [P(B1)P(A|B1) + P(B2)P(A|B2) + P(B3)P(A|B3)],
and then substituting the probabilities calculated in Example 2.41, we have
P(B3|A) = 0.005 / (0.006 + 0.0135 + 0.005) = 0.005/0.0245 = 10/49.
In view of the fact that a defective product was selected, this result suggests that
it probably was not made by machine B3.
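Bayes' rule here simply renormalizes the branch probabilities of Example 2.41; an exact-fraction sketch:

```python
from fractions import Fraction as F

# Priors P(Bi) and defect rates P(A|Bi), as exact fractions
share = {"B1": F(30, 100), "B2": F(45, 100), "B3": F(25, 100)}
rate = {"B1": F(2, 100), "B2": F(3, 100), "B3": F(2, 100)}

p_A = sum(share[b] * rate[b] for b in share)     # total probability
posterior_B3 = share["B3"] * rate["B3"] / p_A    # Bayes' rule
print(posterior_B3)  # 10/49
```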
Example 2.43: A manufacturing firm employs three analytical plans for the design and devel-
opment of a particular product. For cost reasons, all three are used at varying
times. In fact, plans 1, 2, and 3 are used for 30%, 20%, and 50% of the products,
respectively. The defect rate is different for the three procedures as follows:
P(D|P1) = 0.01, P(D|P2) = 0.03, P(D|P3) = 0.02,
where P(D|Pj) is the probability of a defective product, given plan j. If a random
product was observed and found to be defective, which plan was most likely used
and thus responsible?
Solution: From the statement of the problem
P(P1) = 0.30, P(P2) = 0.20, and P(P3) = 0.50,
we must find P(Pj|D) for j = 1, 2, 3. Bayes’ rule (Theorem 2.14) shows
P(P1|D) = P(P1)P(D|P1) / [P(P1)P(D|P1) + P(P2)P(D|P2) + P(P3)P(D|P3)]
= (0.30)(0.01) / [(0.30)(0.01) + (0.20)(0.03) + (0.50)(0.02)]
= 0.003/0.019 = 0.158.
Similarly,
P(P2|D) = (0.20)(0.03)/0.019 = 0.316 and P(P3|D) = (0.50)(0.02)/0.019 = 0.526.
The posterior probability of plan 3, given a defective product, is the largest of the three; thus
a defective for a random product is most likely the result of the use of plan 3.
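The three posteriors in Example 2.43 share the denominator P(D) = 0.019, so they can be computed in one pass (a sketch; the list order is plan 1, 2, 3):

```python
# Plan usage P(Pj) and defect rates P(D|Pj) from Example 2.43
usage = [0.30, 0.20, 0.50]
defect = [0.01, 0.03, 0.02]

joint = [u * d for u, d in zip(usage, defect)]  # P(Pj)P(D|Pj)
p_D = sum(joint)                                # total probability, 0.019
posterior = [j / p_D for j in joint]            # Bayes' rule
print([round(x, 3) for x in posterior])  # [0.158, 0.316, 0.526]
```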
A statistical methodology called the Bayesian approach, which is built on Bayes'
rule, has attracted a lot of attention in applications. An introduction to the
Bayesian method is given in Chapter 18.
Exercises
2.95 In a certain region of the country it is known
from past experience that the probability of selecting
an adult over 40 years of age with cancer is 0.05. If
the probability of a doctor correctly diagnosing a per-
son with cancer as having the disease is 0.78 and the
probability of incorrectly diagnosing a person without
cancer as having the disease is 0.06, what is the prob-
ability that an adult over 40 years of age is diagnosed
as having cancer?
2.96 Police plan to enforce speed limits by using radar
traps at four different locations within the city limits.
The radar traps at each of the locations L1, L2, L3,
and L4 will be operated 40%, 30%, 20%, and 30% of
the time. If a person who is speeding on her way to
work has probabilities of 0.2, 0.1, 0.5, and 0.2, respec-
tively, of passing through these locations, what is the
probability that she will receive a speeding ticket?
2.97 Referring to Exercise 2.95, what is the probabil-
ity that a person diagnosed as having cancer actually
has the disease?
2.98 If the person in Exercise 2.96 received a speed-
ing ticket on her way to work, what is the probability
that she passed through the radar trap located at L2?
2.99 Suppose that the four inspectors at a film fac-
tory are supposed to stamp the expiration date on each
package of film at the end of the assembly line. John,
who stamps 20% of the packages, fails to stamp the
expiration date once in every 200 packages; Tom, who
stamps 60% of the packages, fails to stamp the expira-
tion date once in every 100 packages; Jeff, who stamps
15% of the packages, fails to stamp the expiration date
once in every 90 packages; and Pat, who stamps 5% of
the packages, fails to stamp the expiration date once
in every 200 packages. If a customer complains that
her package of film does not show the expiration date,
what is the probability that it was inspected by John?
2.100 A regional telephone company operates three
identical relay stations at different locations. During a
one-year period, the number of malfunctions reported
by each station and the causes are shown below.
Station A B C
Problems with electricity supplied 2 1 1
Computer malfunction 4 3 2
Malfunctioning electrical equipment 5 4 2
Caused by other human errors 7 7 5
Suppose that a malfunction was reported and it was
found to be caused by other human errors. What is
the probability that it came from station C?
2.101 A paint-store chain produces and sells latex
and semigloss paint. Based on long-range sales, the
probability that a customer will purchase latex paint is
0.75. Of those that purchase latex paint, 60% also pur-
chase rollers. But only 30% of semigloss paint buyers
purchase rollers. A randomly selected buyer purchases
a roller and a can of paint. What is the probability
that the paint is latex?
2.102 Denote by A, B, and C the events that a grand
prize is behind doors A, B, and C, respectively. Sup-
pose you randomly picked a door, say A. The game
host opened a door, say B, and showed there was no
prize behind it. Now the host offers you the option
of either staying at the door that you picked (A) or
switching to the remaining unopened door (C). Use
probability to explain whether you should switch or
not.
Review Exercises
2.103 A truth serum has the property that 90% of
the guilty suspects are properly judged while, of course,
10% of the guilty suspects are improperly found inno-
cent. On the other hand, innocent suspects are mis-
judged 1% of the time. If the suspect was selected
from a group of suspects of which only 5% have ever
committed a crime, and the serum indicates that he is
guilty, what is the probability that he is innocent?
2.104 An allergist claims that 50% of the patients
she tests are allergic to some type of weed. What is
the probability that
(a) exactly 3 of her next 4 patients are allergic to
weeds?
(b) none of her next 4 patients is allergic to weeds?
2.105 By comparing appropriate regions of Venn di-
agrams, verify that
(a) (A ∩ B) ∪ (A ∩ B') = A;
(b) A' ∩ (B' ∪ C) = (A' ∩ B') ∪ (A' ∩ C).
2.106 The probabilities that a service station will
pump gas into 0, 1, 2, 3, 4, or 5 or more cars during
a certain 30-minute period are 0.03, 0.18, 0.24, 0.28,
0.10, and 0.17, respectively. Find the probability that
in this 30-minute period
(a) more than 2 cars receive gas;
(b) at most 4 cars receive gas;
(c) 4 or more cars receive gas.
2.107 How many bridge hands are possible contain-
ing 4 spades, 6 diamonds, 1 club, and 2 hearts?
2.108 If the probability is 0.1 that a person will make
a mistake on his or her state income tax return, find
the probability that
(a) four totally unrelated persons each make a mistake;
(b) Mr. Jones and Ms. Clark both make mistakes,
and Mr. Roberts and Ms. Williams do not make a
mistake.
2.109 A large industrial firm uses three local motels
to provide overnight accommodations for its clients.
From past experience it is known that 20% of the
clients are assigned rooms at the Ramada Inn, 50% at
the Sheraton, and 30% at the Lakeview Motor Lodge.
If the plumbing is faulty in 5% of the rooms at the Ra-
mada Inn, in 4% of the rooms at the Sheraton, and in
8% of the rooms at the Lakeview Motor Lodge, what
is the probability that
(a) a client will be assigned a room with faulty
plumbing?
(b) a person with a room having faulty plumbing was
assigned accommodations at the Lakeview Motor
Lodge?
2.110 The probability that a patient recovers from a
delicate heart operation is 0.8. What is the probability
that
(a) exactly 2 of the next 3 patients who have this op-
eration survive?
(b) all of the next 3 patients who have this operation
survive?
2.111 In a certain federal prison, it is known that
2/3 of the inmates are under 25 years of age. It is
also known that 3/5 of the inmates are male and that
5/8 of the inmates are female or 25 years of age or
older. What is the probability that a prisoner selected
at random from this prison is female and at least 25
years old?
2.112 From 4 red, 5 green, and 6 yellow apples, how
many selections of 9 apples are possible if 3 of each
color are to be selected?
2.113 From a box containing 6 black balls and 4 green
balls, 3 balls are drawn in succession, each ball being re-
placed in the box before the next draw is made. What
is the probability that
(a) all 3 are the same color?
(b) each color is represented?
2.114 A shipment of 12 television sets contains 3 de-
fective sets. In how many ways can a hotel purchase
5 of these sets and receive at least 2 of the defective
sets?
2.115 A certain federal agency employs three con-
sulting firms (A, B, and C) with probabilities 0.40,
0.35, and 0.25, respectively. From past experience it
is known that the probability of cost overruns for the
firms are 0.05, 0.03, and 0.15, respectively. Suppose a
cost overrun is experienced by the agency.
(a) What is the probability that the consulting firm
involved is company C?
(b) What is the probability that it is company A?
2.116 A manufacturer is studying the effects of cook-
ing temperature, cooking time, and type of cooking oil
for making potato chips. Three different temperatures,
4 different cooking times, and 3 different oils are to be
used.
(a) What is the total number of combinations to be
studied?
(b) How many combinations will be used for each type
of oil?
(c) Discuss why permutations are not an issue in this
exercise.
2.117 Consider the situation in Exercise 2.116, and
suppose that the manufacturer can try only two com-
binations in a day.
(a) What is the probability that any given set of two
runs is chosen?
(b) What is the probability that the highest tempera-
ture is used in either of these two combinations?
2.118 A certain form of cancer is known to be found
in women over 60 with probability 0.07. A blood test
exists for the detection of the disease, but the test is
not infallible. In fact, it is known that 10% of the time
the test gives a false negative (i.e., the test incorrectly
gives a negative result) and 5% of the time the test
gives a false positive (i.e., incorrectly gives a positive
result). If a woman over 60 is known to have taken
the test and received a favorable (i.e., negative) result,
what is the probability that she has the disease?
2.119 A producer of a certain type of electronic com-
ponent ships to suppliers in lots of twenty. Suppose
that 60% of all such lots contain no defective compo-
nents, 30% contain one defective component, and 10%
contain two defective components. A lot is picked, two
components from the lot are randomly selected and
tested, and neither is defective.
(a) What is the probability that zero defective compo-
nents exist in the lot?
(b) What is the probability that one defective exists in
the lot?
(c) What is the probability that two defectives exist in
the lot?
2.120 A rare disease exists with which only 1 in 500
is affected. A test for the disease exists, but of course
it is not infallible. A correct positive result (patient
actually has the disease) occurs 95% of the time, while
a false positive result (patient does not have the disease) occurs 1% of the time. If a randomly selected
individual is tested and the result is positive, what is
the probability that the individual has the disease?
2.121 A construction company employs two sales en-
gineers. Engineer 1 does the work of estimating cost
for 70% of jobs bid by the company. Engineer 2 does
the work for 30% of jobs bid by the company. It is
known that the error rate for engineer 1 is such that
0.02 is the probability of an error when he does the
work, whereas the probability of an error in the work
of engineer 2 is 0.04. Suppose a bid arrives and a se-
rious error occurs in estimating cost. Which engineer
would you guess did the work? Explain and show all
work.
2.122 In the field of quality control, the science of
statistics is often used to determine if a process is “out
of control.” Suppose the process is, indeed, out of con-
trol and 20% of items produced are defective.
(a) If three items arrive off the process line in succes-
sion, what is the probability that all three are de-
fective?
(b) If four items arrive in succession, what is the prob-
ability that three are defective?
2.123 An industrial plant is conducting a study to
determine how quickly injured workers are back on the
job following injury. Records show that 10% of all in-
jured workers are admitted to the hospital for treat-
ment and 15% are back on the job the next day. In
addition, studies show that 2% are both admitted for
hospital treatment and back on the job the next day.
If a worker is injured, what is the probability that the
worker will either be admitted to a hospital or be back
on the job the next day or both?
2.124 A firm is accustomed to training operators who
do certain tasks on a production line. Those operators
who attend the training course are known to be able to
meet their production quotas 90% of the time. New op-
erators who do not take the training course only meet
their quotas 65% of the time. Fifty percent of new op-
erators attend the course. Given that a new operator
meets her production quota, what is the probability
that she attended the program?
2.125 A survey of those using a particular statistical
software system indicated that 10% were dissatisfied.
Half of those dissatisfied purchased the system from
vendor A. It is also known that 20% of those surveyed
purchased from vendor A. Given that the software was
purchased from vendor A, what is the probability that
that particular user is dissatisfied?
2.126 During bad economic times, industrial workers
are dismissed and are often replaced by machines. The
history of 100 workers whose loss of employment is at-
tributable to technological advances is reviewed. For
each of these individuals, it is determined if he or she
was given an alternative job within the same company,
found a job with another company in the same field,
found a job in a new field, or has been unemployed for
1 year. In addition, the union status of each worker is
recorded. The following table summarizes the results.
Union Nonunion
Same Company 40 15
New Company (same field) 13 10
New Field 4 11
Unemployed 2 5
(a) If the selected worker found a job with a new com-
pany in the same field, what is the probability that
the worker is a union member?
(b) If the worker is a union member, what is the prob-
ability that the worker has been unemployed for a
year?
2.127 There is a 50-50 chance that the queen carries
the gene of hemophilia. If she is a carrier, then each
prince has a 50-50 chance of having hemophilia inde-
pendently. If the queen is not a carrier, the prince will
not have the disease. Suppose the queen has had three
princes without the disease. What is the probability
the queen is a carrier?
2.128 Group Project: Give each student a bag of
chocolate M&M's. Divide the students into groups of 5
or 6. Calculate the relative frequency distribution for
color of M&M's for each group.
(a) What is your estimated probability of randomly
picking a yellow? a red?
(b) Redo the calculations for the whole classroom. Did
the estimates change?
(c) Do you believe there is an equal number of each
color in a process batch? Discuss.
2.8 Potential Misconceptions and Hazards;
Relationship to Material in Other Chapters
This chapter contains the fundamental definitions, rules, and theorems that
provide a foundation that renders probability an important tool for evaluating
scientific and engineering systems. The evaluations are often in the form of prob-
ability computations, as is illustrated in examples and exercises. Concepts such as
independence, conditional probability, Bayes’ rule, and others tend to mesh nicely
to solve practical problems in which the bottom line is to produce a probability
value. Illustrations in exercises are abundant. See, for example, Exercises 2.100
and 2.101. In these and many other exercises, an evaluation of a scientific system
is being made judiciously from a probability calculation, using rules and definitions
discussed in the chapter.
Now, how does the material in this chapter relate to that in other chapters?
It is best to answer this question by looking ahead to Chapter 3. Chapter 3 also
deals with the type of problems in which it is important to calculate probabili-
ties. We illustrate how system performance depends on the value of one or more
probabilities. Once again, conditional probability and independence play a role.
However, new concepts arise which allow more structure based on the notion of a
random variable and its probability distribution. Recall that the idea of frequency
distributions was discussed briefly in Chapter 1. The probability distribution dis-
plays, in equation form or graphically, the total information necessary to describe a
probability structure. For example, in Review Exercise 2.122 the random variable
of interest is the number of defective items, a discrete measurement. Thus, the
probability distribution would reveal the probability structure for the number of
defective items out of the number selected from the process. As the reader moves
into Chapter 3 and beyond, it will become apparent that assumptions will be re-
quired in order to determine and thus make use of probability distributions for
solving scientific problems.
Chapter 3
Random Variables and Probability
Distributions
3.1 Concept of a Random Variable
Statistics is concerned with making inferences about populations and population
characteristics. Experiments are conducted with results that are subject to chance.
The testing of a number of electronic components is an example of a statistical
experiment, a term that is used to describe any process by which several chance
observations are generated. It is often important to allocate a numerical description
to the outcome. For example, the sample space giving a detailed description of each
possible outcome when three electronic components are tested may be written
S = {NNN, NND, NDN, DNN, NDD, DND, DDN, DDD},
where N denotes nondefective and D denotes defective. One is naturally concerned
with the number of defectives that occur. Thus, each point in the sample space will
be assigned a numerical value of 0, 1, 2, or 3. These values are, of course, random
quantities determined by the outcome of the experiment. They may be viewed as
values assumed by the random variable X , the number of defective items when
three electronic components are tested.
Definition 3.1: A random variable is a function that associates a real number with each element
in the sample space.
We shall use a capital letter, say X, to denote a random variable and its correspond-
ing small letter, x in this case, for one of its values. In the electronic component
testing illustration above, we notice that the random variable X assumes the value
2 for all elements in the subset
E = {DDN, DND, NDD}
of the sample space S. That is, each possible value of X represents an event that
is a subset of the sample space for the given experiment.
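As a quick illustration (our sketch, not part of the text), the component-testing sample space and the mapping to X can be enumerated directly in Python:

```python
from itertools import product

# A random variable is a function on the sample space. For the
# three-component test, enumerate S and map each outcome to X,
# the number of defectives (D).
S = ["".join(p) for p in product("ND", repeat=3)]
X = {outcome: outcome.count("D") for outcome in S}

# The event {X = 2} is the subset E of S:
E = sorted(o for o, x in X.items() if x == 2)
print(E)  # ['DDN', 'DND', 'NDD']
```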
Example 3.1: Two balls are drawn in succession without replacement from an urn containing 4
red balls and 3 black balls. The possible outcomes and the values y of the random
variable Y , where Y is the number of red balls, are
Sample Space y
RR 2
RB 1
BR 1
BB 0
Example 3.2: A stockroom clerk returns three safety helmets at random to three steel mill em-
ployees who had previously checked them. If Smith, Jones, and Brown, in that
order, receive one of the three hats, list the sample points for the possible orders
of returning the helmets, and find the value m of the random variable M that
represents the number of correct matches.
Solution: If S, J, and B stand for Smith’s, Jones’s, and Brown’s helmets, respectively, then
the possible arrangements in which the helmets may be returned and the number
of correct matches are
Sample Space m
SJB 3
SBJ 1
BJS 1
JSB 1
JBS 0
BSJ 0
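The helmet-matching example lends itself to a short enumeration. The following Python sketch (ours, with the owner order Smith, Jones, Brown hard-coded) reproduces the table above:

```python
from itertools import permutations

# Enumerate the 3! equally likely ways the helmets can be returned and
# record M, the number of correct matches (owner order: S, J, B).
owners = ("S", "J", "B")
m_values = {"".join(arr): sum(h == o for h, o in zip(arr, owners))
            for arr in permutations(owners)}
print(m_values)
# {'SJB': 3, 'SBJ': 1, 'JSB': 1, 'JBS': 0, 'BSJ': 0, 'BJS': 1}
```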
In each of the two preceding examples, the sample space contains a finite number
of elements. On the other hand, when a die is thrown until a 5 occurs, we obtain
a sample space with an unending sequence of elements,
S = {F, NF, NNF, NNNF, . . . },
where F and N represent, respectively, the occurrence and nonoccurrence of a 5.
But even in this experiment, the number of elements can be equated to the number
of whole numbers so that there is a first element, a second element, a third element,
and so on, and in this sense can be counted.
There are cases where the random variable is categorical in nature. Numerical
codes, often called dummy variables, are then used. A good illustration is the case in which
the random variable is binary in nature, as shown in the following example.
Example 3.3: Consider the simple condition in which components are arriving from the produc-
tion line and they are stipulated to be defective or not defective. Define the random
variable X by
X =
    1, if the component is defective,
    0, if the component is not defective.
Clearly the assignment of 1 or 0 is arbitrary though quite convenient. This will
become clear in later chapters. The random variable for which 0 and 1 are chosen
to describe the two possible values is called a Bernoulli random variable.
Further illustrations of random variables are revealed in the following examples.
Example 3.4: Statisticians use sampling plans to either accept or reject batches or lots of
material. Suppose one of these sampling plans involves sampling independently 10
items from a lot of 100 items in which 12 are defective.
Let X be the random variable defined as the number of items found defec-
tive in the sample of 10. In this case, the random variable takes on the values
0, 1, 2, . . . , 9, 10.
Example 3.5: Suppose a sampling plan involves sampling items from a process until a defective
is observed. The evaluation of the process will depend on how many consecutive
items are observed. In that regard, let X be a random variable defined by the
number of items observed before a defective is found. With N a nondefective and
D a defective, sample spaces are S = {D} given X = 1, S = {ND} given X = 2,
S = {NND} given X = 3, and so on.
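A one-line sketch (ours) of the correspondence in Example 3.5 between the value x and its sample point:

```python
# Example 3.5: the outcome that yields X = x is x - 1 nondefectives (N)
# followed by a single defective (D).
def outcome(x):
    return "N" * (x - 1) + "D"

print([outcome(x) for x in (1, 2, 3)])  # ['D', 'ND', 'NND']
```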
Example 3.6: Interest centers around the proportion of people who respond to a certain mail
order solicitation. Let X be that proportion. X is a random variable that takes
on all values x for which 0 ≤ x ≤ 1.
Example 3.7: Let X be the random variable defined by the waiting time, in hours, between
successive speeders spotted by a radar unit. The random variable X takes on all
values x for which x ≥ 0.
Definition 3.2: If a sample space contains a finite number of possibilities or an unending sequence
with as many elements as there are whole numbers, it is called a discrete sample
space.
The outcomes of some statistical experiments may be neither finite nor countable.
Such is the case, for example, when one conducts an investigation measuring the
distances that a certain make of automobile will travel over a prescribed test course
on 5 liters of gasoline. Assuming distance to be a variable measured to any degree
of accuracy, then clearly we have an infinite number of possible distances in the
sample space that cannot be equated to the number of whole numbers. Or, if one
were to record the length of time for a chemical reaction to take place, once again
the possible time intervals making up our sample space would be infinite in number
and uncountable. We see now that all sample spaces need not be discrete.
Definition 3.3: If a sample space contains an infinite number of possibilities equal to the number
of points on a line segment, it is called a continuous sample space.
A random variable is called a discrete random variable if its set of possible
outcomes is countable. The random variables in Examples 3.1 to 3.5 are discrete
random variables. But a random variable whose set of possible values is an entire
interval of numbers is not discrete. When a random variable can take on values
on a continuous scale, it is called a continuous random variable. Often the
possible values of a continuous random variable are precisely the same values that
are contained in the continuous sample space. Obviously, the random variables
described in Examples 3.6 and 3.7 are continuous random variables.
In most practical problems, continuous random variables represent measured
data, such as all possible heights, weights, temperatures, distance, or life periods,
whereas discrete random variables represent count data, such as the number of
defectives in a sample of k items or the number of highway fatalities per year in
a given state. Note that the random variables Y and M of Examples 3.1 and 3.2
both represent count data, Y the number of red balls and M the number of correct
hat matches.
3.2 Discrete Probability Distributions
A discrete random variable assumes each of its values with a certain probability.
In the case of tossing a coin three times, the variable X, representing the number
of heads, assumes the value 2 with probability 3/8, since 3 of the 8 equally likely
sample points result in two heads and one tail. If one assumes equal weights for the
simple events in Example 3.2, the probability that no employee gets back the right
helmet, that is, the probability that M assumes the value 0, is 1/3. The possible
values m of M and their probabilities are
       m      0    1    3
  P(M = m)   1/3  1/2  1/6
Note that the values of m exhaust all possible cases and hence the probabilities
add to 1.
Frequently, it is convenient to represent all the probabilities of a random variable
X by a formula. Such a formula would necessarily be a function of the numerical
values x that we shall denote by f(x), g(x), r(x), and so forth. Therefore, we write
f(x) = P(X = x); that is, f(3) = P(X = 3). The set of ordered pairs (x, f(x)) is
called the probability function, probability mass function, or probability
distribution of the discrete random variable X.
Definition 3.4: The set of ordered pairs (x, f(x)) is a probability function, probability mass
function, or probability distribution of the discrete random variable X if, for
each possible outcome x,
1. f(x) ≥ 0,
2. Σ_x f(x) = 1,
3. P(X = x) = f(x).
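The conditions of Definition 3.4 are easy to check mechanically. Here is a small Python sketch (ours) using the helmet-matching distribution tabulated just above:

```python
from fractions import Fraction

# Conditions of Definition 3.4, checked for the helmet-matching
# distribution: f(0) = 1/3, f(1) = 1/2, f(3) = 1/6.
f = {0: Fraction(1, 3), 1: Fraction(1, 2), 3: Fraction(1, 6)}

assert all(p >= 0 for p in f.values())  # 1. f(x) >= 0
assert sum(f.values()) == 1             # 2. sum over x of f(x) = 1
# Condition 3 is the definition of f itself: P(X = x) = f(x).
```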
Example 3.8: A shipment of 20 similar laptop computers to a retail outlet contains 3 that are
defective. If a school makes a random purchase of 2 of these computers, find the
probability distribution for the number of defectives.
Solution: Let X be a random variable whose values x are the possible numbers of defective
computers purchased by the school. Then x can only take the numbers 0, 1, and
2. Now

f(0) = P(X = 0) = C(3, 0) C(17, 2) / C(20, 2) = 68/95,
f(1) = P(X = 1) = C(3, 1) C(17, 1) / C(20, 2) = 51/190,
f(2) = P(X = 2) = C(3, 2) C(17, 0) / C(20, 2) = 3/190,

where C(n, r) denotes the number of combinations of n items taken r at a time.
Thus, the probability distribution of X is

     x      0       1        2
   f(x)   68/95   51/190   3/190
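Example 3.8 is a hypergeometric calculation, and the arithmetic can be verified with Python's standard library (this check is ours, not the text's):

```python
from fractions import Fraction
from math import comb

# Example 3.8 as a hypergeometric computation: 3 defectives among
# N = 20 laptops, a sample of n = 2 drawn without replacement.
def f(x, N=20, K=3, n=2):
    return Fraction(comb(K, x) * comb(N - K, n - x), comb(N, n))

print([f(x) for x in range(3)])
# [Fraction(68, 95), Fraction(51, 190), Fraction(3, 190)]
```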
Example 3.9: If a car agency sells 50% of its inventory of a certain foreign car equipped with side
airbags, find a formula for the probability distribution of the number of cars with
side airbags among the next 4 cars sold by the agency.
Solution: Since the probability of selling an automobile with side airbags is 0.5, the
2^4 = 16 points in the sample space are equally likely to occur. Therefore, the
denominator for all probabilities, and also for our function, is 16. To obtain the
number of ways of selling 3 cars with side airbags, we need to consider the number
of ways of partitioning 4 outcomes into two cells, with 3 cars with side airbags
assigned to one cell and the model without side airbags assigned to the other. This
can be done in C(4, 3) = 4 ways. In general, the event of selling x models with side
airbags and 4 − x models without side airbags can occur in C(4, x) ways, where x
can be 0, 1, 2, 3, or 4. Thus, the probability distribution f(x) = P(X = x) is

f(x) = (1/16) C(4, x), for x = 0, 1, 2, 3, 4.
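The formula just derived can be tabulated in a few lines of Python (our sketch):

```python
from math import comb

# Example 3.9: each of the 2**4 = 16 sale sequences is equally likely,
# so f(x) = C(4, x) / 16.
f = {x: comb(4, x) / 16 for x in range(5)}
print(f)  # {0: 0.0625, 1: 0.25, 2: 0.375, 3: 0.25, 4: 0.0625}
assert abs(sum(f.values()) - 1) < 1e-12
```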
There are many problems where we may wish to compute the probability that
the observed value of a random variable X will be less than or equal to some real
number x. Writing F(x) = P(X ≤ x) for every real number x, we define F(x) to
be the cumulative distribution function of the random variable X.
Definition 3.5: The cumulative distribution function F(x) of a discrete random variable X
with probability distribution f(x) is

F(x) = P(X ≤ x) = Σ_{t ≤ x} f(t), for −∞ < x < ∞.
For the random variable M, the number of correct matches in Example 3.2, we
have
F(2) = P(M ≤ 2) = f(0) + f(1) = 1/3 + 1/2 = 5/6.
The cumulative distribution function of M is
F(m) =
    0,    for m < 0,
    1/3,  for 0 ≤ m < 1,
    5/6,  for 1 ≤ m < 3,
    1,    for m ≥ 3.
One should pay particular notice to the fact that the cumulative distribution func-
tion is a monotone nondecreasing function defined not only for the values assumed
by the given random variable but for all real numbers.
Example 3.10: Find the cumulative distribution function of the random variable X in Example
3.9. Using F(x), verify that f(2) = 3/8.
Solution: Direct calculations of the probability distribution of Example 3.9 give f(0) = 1/16,
f(1) = 1/4, f(2) = 3/8, f(3) = 1/4, and f(4) = 1/16. Therefore,

F(0) = f(0) = 1/16,
F(1) = f(0) + f(1) = 5/16,
F(2) = f(0) + f(1) + f(2) = 11/16,
F(3) = f(0) + f(1) + f(2) + f(3) = 15/16,
F(4) = f(0) + f(1) + f(2) + f(3) + f(4) = 1.
Hence,
F(x) =
    0,      for x < 0,
    1/16,   for 0 ≤ x < 1,
    5/16,   for 1 ≤ x < 2,
    11/16,  for 2 ≤ x < 3,
    15/16,  for 3 ≤ x < 4,
    1,      for x ≥ 4.
Now
f(2) = F(2) − F(1) = 11/16 − 5/16 = 3/8.
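The bookkeeping in Example 3.10 amounts to a running sum. A short Python sketch (ours) builds F from f and recovers f(2) as a difference of adjacent CDF values:

```python
from itertools import accumulate

# Build F(x) of Example 3.10 by cumulative summation of f, then recover
# f(2) as the jump F(2) - F(1).
f = [1/16, 4/16, 6/16, 4/16, 1/16]   # f(0), ..., f(4)
F = list(accumulate(f))              # [1/16, 5/16, 11/16, 15/16, 1]
print(F[2] - F[1])  # 0.375  (= 3/8)
```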
It is often helpful to look at a probability distribution in graphic form. One
might plot the points (x, f(x)) of Example 3.9 to obtain Figure 3.1. By joining
the points to the x axis either with a dashed or with a solid line, we obtain a
probability mass function plot. Figure 3.1 makes it easy to see what values of X
are most likely to occur, and it also indicates a perfectly symmetric situation in
this case.
Instead of plotting the points (x, f(x)), we more frequently construct rectangles,
as in Figure 3.2. Here the rectangles are constructed so that their bases of equal
width are centered at each value x and their heights are equal to the corresponding
probabilities given by f(x). The bases are constructed so as to leave no space
between the rectangles. Figure 3.2 is called a probability histogram.
Since each base in Figure 3.2 has unit width, P(X = x) is equal to the area
of the rectangle centered at x. Even if the bases were not of unit width, we could
adjust the heights of the rectangles to give areas that would still equal the proba-
bilities of X assuming any of its values x. This concept of using areas to represent
[Figure 3.1: Probability mass function plot.]
[Figure 3.2: Probability histogram.]
probabilities is necessary for our consideration of the probability distribution of a
continuous random variable.
The graph of the cumulative distribution function of Example 3.9, which ap-
pears as a step function in Figure 3.3, is obtained by plotting the points (x, F(x)).
Certain probability distributions are applicable to more than one physical situ-
ation. The probability distribution of Example 3.9, for example, also applies to the
random variable Y , where Y is the number of heads when a coin is tossed 4 times,
or to the random variable W, where W is the number of red cards that occur when
4 cards are drawn at random from a deck in succession with each card replaced and
the deck shuffled before the next drawing. Special discrete distributions that can
be applied to many different experimental situations will be considered in Chapter
5.
[Figure 3.3: Discrete cumulative distribution function.]
3.3 Continuous Probability Distributions
A continuous random variable has a probability of 0 of assuming exactly any of its
values. Consequently, its probability distribution cannot be given in tabular form.
At first this may seem startling, but it becomes more plausible when we consider a
particular example. Let us discuss a random variable whose values are the heights
of all people over 21 years of age. Between any two values, say 163.5 and 164.5
centimeters, or even 163.99 and 164.01 centimeters, there are an infinite number
of heights, one of which is 164 centimeters. The probability of selecting a person
at random who is exactly 164 centimeters tall and not one of the infinitely large
set of heights so close to 164 centimeters that you cannot humanly measure the
difference is remote, and thus we assign a probability of 0 to the event. This is not
the case, however, if we talk about the probability of selecting a person who is at
least 163 centimeters but not more than 165 centimeters tall. Now we are dealing
with an interval rather than a point value of our random variable.
We shall concern ourselves with computing probabilities for various intervals of
continuous random variables such as P(a < X < b), P(W ≥ c), and so forth. Note
that when X is continuous,

P(a < X ≤ b) = P(a < X < b) + P(X = b) = P(a < X < b).
That is, it does not matter whether we include an endpoint of the interval or not.
This is not true, though, when X is discrete.
Although the probability distribution of a continuous random variable cannot
be presented in tabular form, it can be stated as a formula. Such a formula would
necessarily be a function of the numerical values of the continuous random variable
X and as such will be represented by the functional notation f(x). In dealing with
continuous variables, f(x) is usually called the probability density function, or
simply the density function, of X. Since X is defined over a continuous sample
space, it is possible for f(x) to have a finite number of discontinuities. However,
most density functions that have practical applications in the analysis of statistical
data are continuous and their graphs may take any of several forms, some of which
are shown in Figure 3.4. Because areas will be used to represent probabilities and
probabilities are positive numerical values, the density function must lie entirely
above the x axis.
(a) (b) (c) (d)
Figure 3.4: Typical density functions.
A probability density function is constructed so that the area under its curve
bounded by the x axis is equal to 1 when computed over the range of X for which
f(x) is defined. Should this range of X be a finite interval, it is always possible
to extend the interval to include the entire set of real numbers by defining f(x) to
be zero at all points in the extended portions of the interval. In Figure 3.5, the
probability that X assumes a value between a and b is equal to the shaded area
under the density function between the ordinates at x = a and x = b, and from
integral calculus is given by

P(a < X < b) = ∫_a^b f(x) dx.

[Figure 3.5: P(a < X < b), shown as the shaded area under f(x) between x = a and x = b.]
Definition 3.6: The function f(x) is a probability density function (pdf) for the continuous
random variable X, defined over the set of real numbers, if
1. f(x) ≥ 0, for all x ∈ R.
2. ∫_{−∞}^{∞} f(x) dx = 1.
3. P(a < X < b) = ∫_a^b f(x) dx.
Example 3.11: Suppose that the error in the reaction temperature, in °C, for a controlled
laboratory experiment is a continuous random variable X having the probability
density function

f(x) =
    x²/3,  −1 < x < 2,
    0,     elsewhere.

(a) Verify that f(x) is a density function.
(b) Find P(0 < X ≤ 1).
Solution: We use Definition 3.6.
(a) Obviously, f(x) ≥ 0. To verify condition 2 in Definition 3.6, we have

∫_{−∞}^{∞} f(x) dx = ∫_{−1}^{2} (x²/3) dx = x³/9 |_{−1}^{2} = 8/9 + 1/9 = 1.
(b) Using formula 3 in Definition 3.6, we obtain

P(0 < X ≤ 1) = ∫_0^1 (x²/3) dx = x³/9 |_0^1 = 1/9.
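Both integrals in Example 3.11 can be sanity-checked numerically. The sketch below (ours) uses a simple midpoint Riemann sum rather than any particular quadrature library:

```python
# Numerical check of Example 3.11: f(x) = x**2 / 3 on (-1, 2).
# A midpoint Riemann sum approximates both integrals.
def integrate(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x**2 / 3
assert abs(integrate(f, -1, 2) - 1) < 1e-6      # total area is 1
assert abs(integrate(f, 0, 1) - 1/9) < 1e-6     # P(0 < X <= 1) = 1/9
```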
Definition 3.7: The cumulative distribution function F(x) of a continuous random variable
X with density function f(x) is

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt, for −∞ < x < ∞.
As an immediate consequence of Definition 3.7, one can write the two results

P(a < X < b) = F(b) − F(a)    and    f(x) = dF(x)/dx,

if the derivative exists.
Example 3.12: For the density function of Example 3.11, find F(x), and use it to evaluate
P(0 < X ≤ 1).
Solution: For −1 < x < 2,

F(x) = ∫_{−∞}^{x} f(t) dt = ∫_{−1}^{x} (t²/3) dt = t³/9 |_{−1}^{x} = (x³ + 1)/9.
Therefore,
F(x) =
    0,           x < −1,
    (x³ + 1)/9,  −1 ≤ x < 2,
    1,           x ≥ 2.
The cumulative distribution function F(x) is expressed in Figure 3.6. Now
P(0 < X ≤ 1) = F(1) − F(0) = 2/9 − 1/9 = 1/9,
which agrees with the result obtained by using the density function in Example
3.11.
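The closed-form CDF makes the interval probability a one-line subtraction; a small Python check (ours):

```python
# Closed-form CDF from Example 3.12: F(x) = (x**3 + 1) / 9 on [-1, 2).
F = lambda x: 0 if x < -1 else (x**3 + 1) / 9 if x < 2 else 1

assert F(-1) == 0 and F(2) == 1
assert abs((F(1) - F(0)) - 1/9) < 1e-12   # P(0 < X <= 1) = 1/9
```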
Example 3.13: The Department of Energy (DOE) puts projects out on bid and generally estimates
what a reasonable bid should be. Call the estimate b. The DOE has determined
that the density function of the winning (low) bid is
f(y) =
    5/(8b),  2b/5 ≤ y ≤ 2b,
    0,       elsewhere.
Find F(y) and use it to determine the probability that the winning bid is less than
the DOE’s preliminary estimate b.
Solution: For 2b/5 ≤ y ≤ 2b,
F(y) = ∫_{2b/5}^{y} 5/(8b) dt = 5t/(8b) |_{2b/5}^{y} = 5y/(8b) − 1/4.
[Figure 3.6: Continuous cumulative distribution function.]
Thus,
F(y) =
    0,              y < 2b/5,
    5y/(8b) − 1/4,  2b/5 ≤ y < 2b,
    1,              y ≥ 2b.
To determine the probability that the winning bid is less than the preliminary bid
estimate b, we have
P(Y ≤ b) = F(b) = 5/8 − 1/4 = 3/8.
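Note that P(Y ≤ b) = 3/8 does not depend on b. A brief Python check (ours, with arbitrary illustrative values of b):

```python
# Example 3.13: the winning-bid CDF is F(y) = 5*y/(8*b) - 1/4 on
# [2b/5, 2b], so P(Y <= b) = 3/8 for any positive estimate b.
def F(y, b):
    if y < 2 * b / 5:
        return 0.0
    if y >= 2 * b:
        return 1.0
    return 5 * y / (8 * b) - 0.25

for b in (1.0, 250_000.0):           # hypothetical estimates
    assert abs(F(b, b) - 3 / 8) < 1e-12
```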
Exercises
3.1 Classify the following random variables as dis-
crete or continuous:
X: the number of automobile accidents per year
in Virginia.
Y : the length of time to play 18 holes of golf.
M: the amount of milk produced yearly by a par-
ticular cow.
N: the number of eggs laid each month by a hen.
P: the number of building permits issued each
month in a certain city.
Q: the weight of grain produced per acre.
3.2 An overseas shipment of 5 foreign automobiles
contains 2 that have slight paint blemishes. If an
agency receives 3 of these automobiles at random, list
the elements of the sample space S, using the letters B
and N for blemished and nonblemished, respectively;
then to each sample point assign a value x of the ran-
dom variable X representing the number of automo-
biles with paint blemishes purchased by the agency.
3.3 Let W be a random variable giving the number
of heads minus the number of tails in three tosses of a
coin. List the elements of the sample space S for the
three tosses of the coin and to each sample point assign
a value w of W.
3.4 A coin is flipped until 3 heads in succession oc-
cur. List only those elements of the sample space that
require 6 or fewer tosses. Is this a discrete sample space?
Explain.
3.5 Determine the value c so that each of the follow-
ing functions can serve as a probability distribution of
the discrete random variable X:
(a) f(x) = c(x² + 4), for x = 0, 1, 2, 3;
(b) f(x) = c C(2, x) C(3, 3 − x), for x = 0, 1, 2.
3.6 The shelf life, in days, for bottles of a certain
prescribed medicine is a random variable having the
density function
f(x) =
    20,000/(x + 100)³,  x > 0,
    0,                  elsewhere.

Find the probability that a bottle of this medicine
will have a shelf life of
(a) at least 200 days;
(b) anywhere from 80 to 120 days.
3.7 The total number of hours, measured in units of
100 hours, that a family runs a vacuum cleaner over a
period of one year is a continuous random variable X
that has the density function
f(x) =
    x,      0 < x < 1,
    2 − x,  1 ≤ x < 2,
    0,      elsewhere.
Find the probability that over a period of one year, a
family runs their vacuum cleaner
(a) less than 120 hours;
(b) between 50 and 100 hours.
3.8 Find the probability distribution of the random
variable W in Exercise 3.3, assuming that the coin is
biased so that a head is twice as likely to occur as a
tail.
3.9 The proportion of people who respond to a certain
mail-order solicitation is a continuous random variable
X that has the density function
f(x) =
    2(x + 2)/5,  0 < x < 1,
    0,           elsewhere.

(a) Show that P(0 < X < 1) = 1.
(b) Find the probability that more than 1/4 but fewer
than 1/2 of the people contacted will respond to
this type of solicitation.
3.10 Find a formula for the probability distribution of
the random variable X representing the outcome when
a single die is rolled once.
3.11 A shipment of 7 television sets contains 2 de-
fective sets. A hotel makes a random purchase of 3
of the sets. If x is the number of defective sets pur-
chased by the hotel, find the probability distribution
of X. Express the results graphically as a probability
histogram.
3.12 An investment firm offers its customers munici-
pal bonds that mature after varying numbers of years.
Given that the cumulative distribution function of T,
the number of years to maturity for a randomly se-
lected bond, is
F(t) =
    0,    t < 1,
    1/4,  1 ≤ t < 3,
    1/2,  3 ≤ t < 5,
    3/4,  5 ≤ t < 7,
    1,    t ≥ 7,

find
(a) P(T = 5);
(b) P(T > 3);
(c) P(1.4 < T < 6);
(d) P(T ≤ 5 | T ≥ 2).
3.13 The probability distribution of X, the number
of imperfections per 10 meters of a synthetic fabric in
continuous rolls of uniform width, is given by
x 0 1 2 3 4
f(x) 0.41 0.37 0.16 0.05 0.01
Construct the cumulative distribution function of X.
3.14 The waiting time, in hours, between successive
speeders spotted by a radar unit is a continuous ran-
dom variable with cumulative distribution function
F(x) =
    0,            x < 0,
    1 − e^(−8x),  x ≥ 0.
Find the probability of waiting less than 12 minutes
between successive speeders
(a) using the cumulative distribution function of X;
(b) using the probability density function of X.
3.15 Find the cumulative distribution function of the
random variable X representing the number of defec-
tives in Exercise 3.11. Then using F(x), find
(a) P(X = 1);
(b) P(0  X ≤ 2).
3.16 Construct a graph of the cumulative distribution
function of Exercise 3.15.
3.17 A continuous random variable X that can as-
sume values between x = 1 and x = 3 has a density
function given by f(x) = 1/2.
(a) Show that the area under the curve is equal to 1.
(b) Find P(2 < X < 2.5).
(c) Find P(X ≤ 1.6).
3.18 A continuous random variable X that can as-
sume values between x = 2 and x = 5 has a density
function given by f(x) = 2(1 + x)/27. Find
(a) P(X < 4);
(b) P(3 ≤ X < 4).
3.19 For the density function of Exercise 3.17, find
F(x). Use it to evaluate P(2 < X < 2.5).
3.20 For the density function of Exercise 3.18, find
F(x), and use it to evaluate P(3 ≤ X < 4).
3.21 Consider the density function
f(x) =
    k√x,  0 < x < 1,
    0,    elsewhere.

(a) Evaluate k.
(b) Find F(x) and use it to evaluate
P(0.3 < X < 0.6).
3.22 Three cards are drawn in succession from a deck
without replacement. Find the probability distribution
for the number of spades.
3.23 Find the cumulative distribution function of the
random variable W in Exercise 3.8. Using F(w), find
(a) P(W > 0);
(b) P(−1 ≤ W < 3).
3.24 Find the probability distribution for the number
of jazz CDs when 4 CDs are selected at random from
a collection consisting of 5 jazz CDs, 2 classical CDs,
and 3 rock CDs. Express your results by means of a
formula.
3.25 From a box containing 4 dimes and 2 nickels,
3 coins are selected at random without replacement.
Find the probability distribution for the total T of the
3 coins. Express the probability distribution graphi-
cally as a probability histogram.
3.26 From a box containing 4 black balls and 2 green
balls, 3 balls are drawn in succession, each ball being
replaced in the box before the next draw is made. Find
the probability distribution for the number of green
balls.
3.27 The time to failure in hours of an important
piece of electronic equipment used in a manufactured
DVD player has the density function
f(x) =
    (1/2000) exp(−x/2000),  x ≥ 0,
    0,                      x < 0.
(a) Find F(x).
(b) Determine the probability that the component (and
thus the DVD player) lasts more than 1000 hours
before the component needs to be replaced.
(c) Determine the probability that the component fails
before 2000 hours.
3.28 A cereal manufacturer is aware that the weight
of the product in the box varies slightly from box
to box. In fact, considerable historical data have al-
lowed the determination of the density function that
describes the probability structure for the weight (in
ounces). Letting X be the random variable weight, in
ounces, the density function can be described as
f(x) =
    2/5,  23.75 ≤ x ≤ 26.25,
    0,    elsewhere.
(a) Verify that this is a valid density function.
(b) Determine the probability that the weight is
smaller than 24 ounces.
(c) The company desires that the weight exceeding 26
ounces be an extremely rare occurrence. What is
the probability that this rare occurrence does ac-
tually occur?
3.29 An important factor in solid missile fuel is the
particle size distribution. Significant problems occur if
the particle sizes are too large. From production data
in the past, it has been determined that the particle
size (in micrometers) distribution is characterized by
f(x) =
    3x^(−4),  x > 1,
    0,        elsewhere.
(a) Verify that this is a valid density function.
(b) Evaluate F(x).
(c) What is the probability that a random particle
from the manufactured fuel exceeds 4 micrometers?
3.30 Measurements of scientific systems are always
subject to variation, some more than others. There
are many structures for measurement error, and statis-
ticians spend a great deal of time modeling these errors.
Suppose the measurement error X of a certain physical
quantity is decided by the density function
f(x) =
    k(3 − x²),  −1 ≤ x ≤ 1,
    0,          elsewhere.
(a) Determine k that renders f(x) a valid density func-
tion.
(b) Find the probability that a random error in mea-
surement is less than 1/2.
(c) For this particular measurement, it is undesirable
if the magnitude of the error (i.e., |x|) exceeds 0.8.
What is the probability that this occurs?
3.31 Based on extensive testing, it is determined by
the manufacturer of a washing machine that the time
Y (in years) before a major repair is required is char-
acterized by the probability density function
f(y) =
    (1/4) e^(−y/4),  y ≥ 0,
    0,               elsewhere.
(a) Critics would certainly consider the product a bar-
gain if it is unlikely to require a major repair before
the sixth year. Comment on this by determining
P(Y > 6).
(b) What is the probability that a major repair occurs
in the first year?
3.32 The proportion of the budget for a certain type
of industrial company that is allotted to environmental
and pollution control is coming under scrutiny. A data
collection project determines that the distribution of
these proportions is given by
f(y) =
    5(1 − y)⁴,  0 ≤ y ≤ 1,
    0,          elsewhere.
(a) Verify that the above is a valid density function.
(b) What is the probability that a company chosen at
random expends less than 10% of its budget on en-
vironmental and pollution controls?
(c) What is the probability that a company selected
at random spends more than 50% of its budget on
environmental and pollution controls?
3.33 Suppose a certain type of small data processing
firm is so specialized that some have difficulty making
a profit in their first year of operation. The probabil-
ity density function that characterizes the proportion
Y that make a profit is given by
f(y) =
    ky^4 (1 − y)^3,   0 ≤ y ≤ 1,
    0,                elsewhere.
(a) What is the value of k that renders the above a
valid density function?
(b) Find the probability that at most 50% of the firms
make a profit in the first year.
(c) Find the probability that at least 80% of the firms
make a profit in the first year.
3.34 Magnetron tubes are produced on an automated
assembly line. A sampling plan is used periodically to
assess quality of the lengths of the tubes. This mea-
surement is subject to uncertainty. It is thought that
the probability that a random tube meets length spec-
ification is 0.99. A sampling plan is used in which the
lengths of 5 random tubes are measured.
(a) Show that the probability function of Y , the num-
ber out of 5 that meet length specification, is given
by the following discrete probability function:
f(y) = [5! / (y!(5 − y)!)] (0.99)^y (0.01)^(5−y),   for y = 0, 1, 2, 3, 4, 5.
(b) Suppose random selections are made off the line
and 3 are outside specifications. Use f(y) above ei-
ther to support or to refute the conjecture that the
probability is 0.99 that a single tube meets specifi-
cations.
3.35 Suppose it is known from large amounts of his-
torical data that X, the number of cars that arrive at
a specific intersection during a 20-second time period,
is characterized by the following discrete probability
function:
f(x) = e^(−6) 6^x / x!,   for x = 0, 1, 2, . . . .
(a) Find the probability that in a specific 20-second
time period, more than 8 cars arrive at the
intersection.
(b) Find the probability that only 2 cars arrive.
3.36 On a laboratory assignment, if the equipment is
working, the density function of the observed outcome,
X, is
f(x) =
    2(1 − x),   0 < x < 1,
    0,          otherwise.
(a) Calculate P(X ≤ 1/3).
(b) What is the probability that X will exceed 0.5?
(c) Given that X ≥ 0.5, what is the probability that
X will be less than 0.75?
3.4 Joint Probability Distributions
Our study of random variables and their probability distributions in the preced-
ing sections is restricted to one-dimensional sample spaces, in that we recorded
outcomes of an experiment as values assumed by a single random variable. There
will be situations, however, where we may find it desirable to record the simultaneous
outcomes of several random variables. For example, we might measure the
amount of precipitate P and volume V of gas released from a controlled chemical
experiment, giving rise to a two-dimensional sample space consisting of the out-
comes (p, v), or we might be interested in the hardness H and tensile strength T
of cold-drawn copper, resulting in the outcomes (h, t). In a study to determine the
likelihood of success in college based on high school data, we might use a three-
dimensional sample space and record for each individual his or her aptitude test
score, high school class rank, and grade-point average at the end of freshman year
in college.
If X and Y are two discrete random variables, the probability distribution for
their simultaneous occurrence can be represented by a function with values f(x, y)
for any pair of values (x, y) within the range of the random variables X and Y . It
is customary to refer to this function as the joint probability distribution of
X and Y .
Hence, in the discrete case,
f(x, y) = P(X = x, Y = y);
that is, the values f(x, y) give the probability that outcomes x and y occur at
the same time. For example, if an 18-wheeler is to have its tires serviced and X
represents the number of miles these tires have been driven and Y represents the
number of tires that need to be replaced, then f(30000, 5) is the probability that
the tires are used over 30,000 miles and the truck needs 5 new tires.
Definition 3.8: The function f(x, y) is a joint probability distribution or probability mass
function of the discrete random variables X and Y if
1. f(x, y) ≥ 0 for all (x, y),
2. Σ_x Σ_y f(x, y) = 1,
3. P(X = x, Y = y) = f(x, y).
For any region A in the xy plane, P[(X, Y ) ∈ A] = Σ_{(x, y) ∈ A} f(x, y).
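The three conditions above are easy to check mechanically once a joint distribution is stored as a table. A minimal sketch (the dict layout and the use of exact fractions are my own choices, not from the text), using the joint distribution that Example 3.14 below derives:

```python
# Checking the conditions of Definition 3.8 for a discrete joint pmf stored
# as a dict mapping (x, y) pairs to probabilities.
from fractions import Fraction as F

# Joint distribution of Example 3.14 (pens drawn from a box).
f = {
    (0, 0): F(3, 28), (1, 0): F(9, 28), (2, 0): F(3, 28),
    (0, 1): F(3, 14), (1, 1): F(3, 14), (2, 1): F(0),
    (0, 2): F(1, 28), (1, 2): F(0),    (2, 2): F(0),
}

# Condition 1: f(x, y) >= 0 for all (x, y).
assert all(p >= 0 for p in f.values())
# Condition 2: the probabilities sum to 1.
assert sum(f.values()) == 1

# Condition 3 lets us compute P[(X, Y) in A] for any region A,
# e.g. A = {(x, y) : x + y <= 1}:
p_A = sum(p for (x, y), p in f.items() if x + y <= 1)
assert p_A == F(9, 14)   # matches Example 3.14(b)
```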
Example 3.14: Two ballpoint pens are selected at random from a box that contains 3 blue pens,
2 red pens, and 3 green pens. If X is the number of blue pens selected and Y is
the number of red pens selected, find
(a) the joint probability function f(x, y),
(b) P[(X, Y ) ∈ A], where A is the region {(x, y)|x + y ≤ 1}.
Solution: The possible pairs of values (x, y) are (0, 0), (0, 1), (1, 0), (1, 1), (0, 2), and (2, 0).
(a) Now, f(0, 1), for example, represents the probability that a red and a green
pen are selected. The total number of equally likely ways of selecting any 2
pens from the 8 is C(8, 2) = 28. The number of ways of selecting 1 red from 2
red pens and 1 green from 3 green pens is C(2, 1)C(3, 1) = 6. Hence, f(0, 1) =
6/28 = 3/14. Similar calculations yield the probabilities for the other cases, which
are presented in Table 3.1. Note that the probabilities sum to 1. In Chapter 5,
it will become clear that the joint probability distribution of Table 3.1 can
be represented by the formula
f(x, y) = C(3, x) C(2, y) C(3, 2 − x − y) / C(8, 2),
for x = 0, 1, 2; y = 0, 1, 2; and 0 ≤ x + y ≤ 2.
(b) The probability that (X, Y ) fall in the region A is
P[(X, Y ) ∈ A] = P(X + Y ≤ 1) = f(0, 0) + f(0, 1) + f(1, 0)
               = 3/28 + 3/14 + 9/28 = 9/14.
Table 3.1: Joint Probability Distribution for Example 3.14

                          x               Row
  f(x, y)       0       1       2       Totals
      0        3/28    9/28    3/28      15/28
  y   1        3/14    3/14      0        3/7
      2        1/28      0       0        1/28
  Column
  Totals       5/14   15/28    3/28        1
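The same table can be reproduced directly from the counting argument of Example 3.14. A short sketch (the helper name `f` is illustrative), assuming Python's `math.comb` for the binomial coefficients:

```python
# Rebuild the joint distribution of Example 3.14 from the formula
# f(x, y) = C(3, x) C(2, y) C(3, 2 - x - y) / C(8, 2).
from math import comb
from fractions import Fraction as F

def f(x, y):
    if 2 - x - y >= 0:   # need a nonnegative number of green pens
        return F(comb(3, x) * comb(2, y) * comb(3, 2 - x - y), comb(8, 2))
    return F(0)

table = {(x, y): f(x, y) for x in range(3) for y in range(3)}
assert table[(0, 1)] == F(3, 14)   # matches the worked value
assert table[(0, 2)] == F(1, 28)
assert sum(table.values()) == 1    # the probabilities sum to 1
```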
When X and Y are continuous random variables, the joint density function
f(x, y) is a surface lying above the xy plane, and P[(X, Y ) ∈ A], where A is any
region in the xy plane, is equal to the volume of the right cylinder bounded by the
base A and the surface.
Definition 3.9: The function f(x, y) is a joint density function of the continuous random
variables X and Y if
1. f(x, y) ≥ 0, for all (x, y),
2. ∫_(−∞)^(∞) ∫_(−∞)^(∞) f(x, y) dx dy = 1,
3. P[(X, Y ) ∈ A] = ∫∫_A f(x, y) dx dy, for any region A in the xy plane.
Example 3.15: A privately owned business operates both a drive-in facility and a walk-in facility.
On a randomly selected day, let X and Y , respectively, be the proportions of the
time that the drive-in and the walk-in facilities are in use, and suppose that the
joint density function of these random variables is
f(x, y) =
    (2/5)(2x + 3y),   0 ≤ x ≤ 1, 0 ≤ y ≤ 1,
    0,                elsewhere.
(a) Verify condition 2 of Definition 3.9.
(b) Find P[(X, Y ) ∈ A], where A = {(x, y) | 0 < x < 1/2, 1/4 < y < 1/2}.
Solution: (a) The integration of f(x, y) over the whole region is

∫_(−∞)^(∞) ∫_(−∞)^(∞) f(x, y) dx dy = ∫_0^1 ∫_0^1 (2/5)(2x + 3y) dx dy
    = ∫_0^1 [2x^2/5 + 6xy/5]_(x=0)^(x=1) dy
    = ∫_0^1 (2/5 + 6y/5) dy = [2y/5 + 3y^2/5]_0^1 = 2/5 + 3/5 = 1.

(b) To calculate the probability, we use

P[(X, Y ) ∈ A] = P(0 < X < 1/2, 1/4 < Y < 1/2)
    = ∫_(1/4)^(1/2) ∫_0^(1/2) (2/5)(2x + 3y) dx dy
    = ∫_(1/4)^(1/2) [2x^2/5 + 6xy/5]_(x=0)^(x=1/2) dy = ∫_(1/4)^(1/2) (1/10 + 3y/5) dy
    = [y/10 + 3y^2/10]_(1/4)^(1/2) = (1/10)[(1/2 + 3/4) − (1/4 + 3/16)] = 13/160.
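Both integrals above can be cross-checked numerically. A sketch using a plain midpoint Riemann sum (the grid size `n` is an arbitrary choice, not from the text):

```python
# Numerical check of Example 3.15 by a midpoint rule on an n-by-n grid.
def f(x, y):
    return 0.4 * (2 * x + 3 * y) if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

def integrate2d(f, ax, bx, ay, by, n=400):
    hx, hy = (bx - ax) / n, (by - ay) / n
    return sum(
        f(ax + (i + 0.5) * hx, ay + (j + 0.5) * hy) * hx * hy
        for i in range(n) for j in range(n)
    )

total = integrate2d(f, 0, 1, 0, 1)         # condition 2: should be ~1
p_A = integrate2d(f, 0, 0.5, 0.25, 0.5)    # part (b): should be ~13/160 = 0.08125
```

Because the integrand is linear in each variable, the midpoint rule is exact here up to floating-point error.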
Given the joint probability distribution f(x, y) of the discrete random variables
X and Y , the probability distribution g(x) of X alone is obtained by summing
f(x, y) over the values of Y . Similarly, the probability distribution h(y) of Y alone
is obtained by summing f(x, y) over the values of X. We define g(x) and h(y) to
be the marginal distributions of X and Y , respectively. When X and Y are
continuous random variables, summations are replaced by integrals. We can now
make the following general definition.
Definition 3.10: The marginal distributions of X alone and of Y alone are
g(x) = Σ_y f(x, y)   and   h(y) = Σ_x f(x, y)
for the discrete case, and
g(x) = ∫_(−∞)^(∞) f(x, y) dy   and   h(y) = ∫_(−∞)^(∞) f(x, y) dx
for the continuous case.
The term marginal is used here because, in the discrete case, the values of g(x)
and h(y) are just the marginal totals of the respective columns and rows when the
values of f(x, y) are displayed in a rectangular table.
Example 3.16: Show that the column and row totals of Table 3.1 give the marginal distribution
of X alone and of Y alone.
Solution: For the random variable X, we see that
g(0) = f(0, 0) + f(0, 1) + f(0, 2) = 3/28 + 3/14 + 1/28 = 5/14,
g(1) = f(1, 0) + f(1, 1) + f(1, 2) = 9/28 + 3/14 + 0 = 15/28,
and
g(2) = f(2, 0) + f(2, 1) + f(2, 2) = 3/28 + 0 + 0 = 3/28,
which are just the column totals of Table 3.1. In a similar manner we could show
that the values of h(y) are given by the row totals. In tabular form, these marginal
distributions may be written as follows:

    x      0       1      2
  g(x)   5/14   15/28   3/28

    y      0       1      2
  h(y)  15/28    3/7    1/28
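In code, these marginals are just row and column sums over the joint table. A small sketch (the dict layout is my own choice) mirroring Definition 3.10 in the discrete case:

```python
# Marginal distributions of Table 3.1 as row and column sums.
from fractions import Fraction as F

f = {(0, 0): F(3, 28), (1, 0): F(9, 28), (2, 0): F(3, 28),
     (0, 1): F(3, 14), (1, 1): F(3, 14), (2, 1): F(0),
     (0, 2): F(1, 28), (1, 2): F(0),    (2, 2): F(0)}

g = {x: sum(f[(x, y)] for y in range(3)) for x in range(3)}  # marginal of X
h = {y: sum(f[(x, y)] for x in range(3)) for y in range(3)}  # marginal of Y

assert g == {0: F(5, 14), 1: F(15, 28), 2: F(3, 28)}
assert h == {0: F(15, 28), 1: F(3, 7), 2: F(1, 28)}
```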
Example 3.17: Find g(x) and h(y) for the joint density function of Example 3.15.
Solution: By definition,
g(x) = ∫_(−∞)^(∞) f(x, y) dy = ∫_0^1 (2/5)(2x + 3y) dy = [4xy/5 + 6y^2/10]_(y=0)^(y=1) = (4x + 3)/5,
for 0 ≤ x ≤ 1, and g(x) = 0 elsewhere. Similarly,
h(y) = ∫_(−∞)^(∞) f(x, y) dx = ∫_0^1 (2/5)(2x + 3y) dx = 2(1 + 3y)/5,
for 0 ≤ y ≤ 1, and h(y) = 0 elsewhere.
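A quick numerical spot-check of the marginal g(x) = (4x + 3)/5, integrating out y at a few fixed values of x (a sketch; the sample points and grid size are arbitrary choices):

```python
# Verify g(x) = (4x + 3)/5 from Example 3.17 by integrating out y numerically.
def f(x, y):
    return 0.4 * (2 * x + 3 * y)

def marginal_g(x, n=1000):
    h = 1.0 / n
    # midpoint rule over y in (0, 1)
    return sum(f(x, (j + 0.5) * h) * h for j in range(n))

for x in (0.0, 0.3, 1.0):
    assert abs(marginal_g(x) - (4 * x + 3) / 5) < 1e-9
```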
The fact that the marginal distributions g(x) and h(y) are indeed the proba-
bility distributions of the individual variables X and Y alone can be verified by
showing that the conditions of Definition 3.4 or Definition 3.6 are satisfied. For
example, in the continuous case
∫_(−∞)^(∞) g(x) dx = ∫_(−∞)^(∞) ∫_(−∞)^(∞) f(x, y) dy dx = 1,
and
P(a < X < b) = P(a < X < b, −∞ < Y < ∞)
             = ∫_a^b ∫_(−∞)^(∞) f(x, y) dy dx = ∫_a^b g(x) dx.
In Section 3.1, we stated that the value x of the random variable X represents
an event that is a subset of the sample space. If we use the definition of conditional
probability as stated in Chapter 2,
P(B|A) = P(A ∩ B) / P(A),   provided P(A) > 0,

where A and B are now the events defined by X = x and Y = y, respectively, then

P(Y = y | X = x) = P(X = x, Y = y) / P(X = x) = f(x, y) / g(x),   provided g(x) > 0,
where X and Y are discrete random variables.
It is not difficult to show that the function f(x, y)/g(x), which is strictly a func-
tion of y with x fixed, satisfies all the conditions of a probability distribution. This
is also true when f(x, y) and g(x) are the joint density and marginal distribution,
respectively, of continuous random variables. As a result, it is extremely important
that we make use of the special type of distribution of the form f(x, y)/g(x) in
order to be able to effectively compute conditional probabilities. This type of dis-
tribution is called a conditional probability distribution; the formal definition
follows.
Definition 3.11: Let X and Y be two random variables, discrete or continuous. The conditional
distribution of the random variable Y given that X = x is
f(y|x) = f(x, y) / g(x),   provided g(x) > 0.
Similarly, the conditional distribution of X given that Y = y is
f(x|y) = f(x, y) / h(y),   provided h(y) > 0.
If we wish to find the probability that the discrete random variable X falls between
a and b when it is known that the discrete variable Y = y, we evaluate
P(a < X < b | Y = y) = Σ_(a<x<b) f(x|y),
where the summation extends over all values of X between a and b. When X and
Y are continuous, we evaluate
P(a < X < b | Y = y) = ∫_a^b f(x|y) dx.
Example 3.18: Referring to Example 3.14, find the conditional distribution of X, given that Y = 1,
and use it to determine P(X = 0 | Y = 1).
Solution: We need to find f(x|y), where y = 1. First, we find that
h(1) = Σ_(x=0)^(2) f(x, 1) = 3/14 + 3/14 + 0 = 3/7.
Now
f(x|1) = f(x, 1) / h(1) = (7/3) f(x, 1),   x = 0, 1, 2.
Therefore,
f(0|1) = (7/3) f(0, 1) = (7/3)(3/14) = 1/2,   f(1|1) = (7/3) f(1, 1) = (7/3)(3/14) = 1/2,
f(2|1) = (7/3) f(2, 1) = (7/3)(0) = 0,
and the conditional distribution of X, given that Y = 1, is

    x       0     1     2
  f(x|1)   1/2   1/2    0

Finally,
P(X = 0 | Y = 1) = f(0|1) = 1/2.
Therefore, if it is known that 1 of the 2 pens selected is red, we have a
probability equal to 1/2 that the other pen is not blue.
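The conditioning step is mechanical: divide the y = 1 row of Table 3.1 by its total h(1). A minimal sketch (variable names are illustrative):

```python
# Conditional distribution f(x | 1) from the y = 1 row of Table 3.1,
# following Definition 3.11 for the discrete case.
from fractions import Fraction as F

row = {(0, 1): F(3, 14), (1, 1): F(3, 14), (2, 1): F(0)}  # the y = 1 row
h1 = sum(row.values())                                    # h(1) = 3/7
cond = {x: row[(x, 1)] / h1 for x in range(3)}            # f(x | 1)

assert h1 == F(3, 7)
assert cond == {0: F(1, 2), 1: F(1, 2), 2: F(0)}
assert sum(cond.values()) == 1    # a valid probability distribution
```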
Example 3.19: The joint density for the random variables (X, Y ), where X is the unit temperature
change and Y is the proportion of spectrum shift that a certain atomic particle
produces, is
f(x, y) =
    10xy^2,   0 < x < y < 1,
    0,        elsewhere.
(a) Find the marginal densities g(x), h(y), and the conditional density f(y|x).
(b) Find the probability that the spectrum shifts more than half of the total
observations, given that the temperature is increased by 0.25 unit.
Solution: (a) By definition,
g(x) = ∫_(−∞)^(∞) f(x, y) dy = ∫_x^1 10xy^2 dy = (10/3)xy^3 |_(y=x)^(y=1) = (10/3)x(1 − x^3),   0 < x < 1,
h(y) = ∫_(−∞)^(∞) f(x, y) dx = ∫_0^y 10xy^2 dx = 5x^2 y^2 |_(x=0)^(x=y) = 5y^4,   0 < y < 1.
Now
f(y|x) = f(x, y) / g(x) = 10xy^2 / [(10/3)x(1 − x^3)] = 3y^2 / (1 − x^3),   0 < x < y < 1.
(b) Therefore,
P(Y > 1/2 | X = 0.25) = ∫_(1/2)^1 f(y | x = 0.25) dy = ∫_(1/2)^1 [3y^2 / (1 − 0.25^3)] dy = 8/9.
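Part (b) can be confirmed numerically by integrating the conditional density over (1/2, 1). A sketch with a plain midpoint rule (the grid size is an arbitrary choice):

```python
# Numerical check of P(Y > 1/2 | X = 0.25) = 8/9 from Example 3.19(b).
def cond_density(y, x=0.25):
    return 3 * y**2 / (1 - x**3)   # f(y | x) = 3y^2 / (1 - x^3)

def integrate(f, a, b, n=2000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))  # midpoint rule

p = integrate(cond_density, 0.5, 1.0)
assert abs(p - 8 / 9) < 1e-6
```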
Example 3.20: Given the joint density function
f(x, y) =
    x(1 + 3y^2)/4,   0 < x < 2, 0 < y < 1,
    0,               elsewhere,
find g(x), h(y), f(x|y), and evaluate P(1/4 < X < 1/2 | Y = 1/3).
Solution: By definition of the marginal density, for 0 < x < 2,
g(x) = ∫_(−∞)^(∞) f(x, y) dy = ∫_0^1 [x(1 + 3y^2)/4] dy = [xy/4 + xy^3/4]_(y=0)^(y=1) = x/2,
and for 0 < y < 1,
h(y) = ∫_(−∞)^(∞) f(x, y) dx = ∫_0^2 [x(1 + 3y^2)/4] dx = [x^2/8 + 3x^2 y^2/8]_(x=0)^(x=2) = (1 + 3y^2)/2.
Therefore, using the conditional density definition, for 0 < x < 2,
f(x|y) = f(x, y) / h(y) = [x(1 + 3y^2)/4] / [(1 + 3y^2)/2] = x/2,
and
P(1/4 < X < 1/2 | Y = 1/3) = ∫_(1/4)^(1/2) (x/2) dx = 3/64.
Statistical Independence
If f(x|y) does not depend on y, as is the case for Example 3.20, then f(x|y) = g(x)
and f(x, y) = g(x)h(y). The proof follows by substituting
f(x, y) = f(x|y)h(y)
into the marginal distribution of X. That is,
g(x) = ∫_(−∞)^(∞) f(x, y) dy = ∫_(−∞)^(∞) f(x|y)h(y) dy.
If f(x|y) does not depend on y, we may write
g(x) = f(x|y) ∫_(−∞)^(∞) h(y) dy.
Now
∫_(−∞)^(∞) h(y) dy = 1,
since h(y) is the probability density function of Y . Therefore,
g(x) = f(x|y)   and then   f(x, y) = g(x)h(y).
It should make sense to the reader that if f(x|y) does not depend on y, then of
course the outcome of the random variable Y has no impact on the outcome of the
random variable X. In other words, we say that X and Y are independent random
variables. We now offer the following formal definition of statistical independence.
Definition 3.12: Let X and Y be two random variables, discrete or continuous, with joint proba-
bility distribution f(x, y) and marginal distributions g(x) and h(y), respectively.
The random variables X and Y are said to be statistically independent if and
only if
f(x, y) = g(x)h(y)
for all (x, y) within their range.
The continuous random variables of Example 3.20 are statistically indepen-
dent, since the product of the two marginal distributions gives the joint density
function. This is obviously not the case, however, for the continuous variables of
Example 3.19. Checking for statistical independence of discrete random variables
requires a more thorough investigation, since it is possible to have the product of
the marginal distributions equal to the joint probability distribution for some but
not all combinations of (x, y). If you can find any point (x, y) for which f(x, y)
is defined such that f(x, y) ≠ g(x)h(y), the discrete variables X and Y are not
statistically independent.
Example 3.21: Show that the random variables of Example 3.14 are not statistically independent.
Proof: Let us consider the point (0, 1). From Table 3.1 we find the three probabilities
f(0, 1), g(0), and h(1) to be
f(0, 1) = 3/14,
g(0) = Σ_(y=0)^(2) f(0, y) = 3/28 + 3/14 + 1/28 = 5/14,
h(1) = Σ_(x=0)^(2) f(x, 1) = 3/14 + 3/14 + 0 = 3/7.
Clearly,
f(0, 1) ≠ g(0)h(1),
and therefore X and Y are not statistically independent.
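For a tabulated distribution, the pointwise check described above is mechanical. A sketch (not from the text) that tests f(x, y) = g(x)h(y) at every point of Table 3.1:

```python
# Independence check for the discrete joint distribution of Table 3.1:
# X and Y are independent only if f(x, y) = g(x) h(y) at EVERY point.
from fractions import Fraction as F

f = {(0, 0): F(3, 28), (1, 0): F(9, 28), (2, 0): F(3, 28),
     (0, 1): F(3, 14), (1, 1): F(3, 14), (2, 1): F(0),
     (0, 2): F(1, 28), (1, 2): F(0),    (2, 2): F(0)}
g = {x: sum(f[(x, y)] for y in range(3)) for x in range(3)}
h = {y: sum(f[(x, y)] for x in range(3)) for y in range(3)}

independent = all(f[(x, y)] == g[x] * h[y] for (x, y) in f)
assert not independent              # the check fails, e.g., at (0, 1):
assert f[(0, 1)] != g[0] * h[1]     # 3/14 versus (5/14)(3/7)
```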
All the preceding definitions concerning two random variables can be general-
ized to the case of n random variables. Let f(x1, x2, . . . , xn) be the joint probability
function of the random variables X1, X2, . . . , Xn. The marginal distribution of X1,
for example, is
g(x1) = Σ_(x2) · · · Σ_(xn) f(x1, x2, . . . , xn)
for the discrete case, and
g(x1) = ∫_(−∞)^(∞) · · · ∫_(−∞)^(∞) f(x1, x2, . . . , xn) dx2 dx3 · · · dxn
for the continuous case. We can now obtain joint marginal distributions such
as g(x1, x2), where
g(x1, x2) = Σ_(x3) · · · Σ_(xn) f(x1, x2, . . . , xn)   (discrete case),
g(x1, x2) = ∫_(−∞)^(∞) · · · ∫_(−∞)^(∞) f(x1, x2, . . . , xn) dx3 dx4 · · · dxn   (continuous case).
We could consider numerous conditional distributions. For example, the joint conditional
distribution of X1, X2, and X3, given that X4 = x4, X5 = x5, . . . , Xn = xn, is written
f(x1, x2, x3 | x4, x5, . . . , xn) = f(x1, x2, . . . , xn) / g(x4, x5, . . . , xn),
,
where g(x4, x5, . . . , xn) is the joint marginal distribution of the random variables
X4, X5, . . . , Xn.
A generalization of Definition 3.12 leads to the following definition for the mu-
tual statistical independence of the variables X1, X2, . . . , Xn.
Definition 3.13: Let X1, X2, . . . , Xn be n random variables, discrete or continuous, with
joint probability distribution f(x1, x2, . . . , xn) and marginal distributions
f1(x1), f2(x2), . . . , fn(xn), respectively. The random variables X1, X2, . . . , Xn are
said to be mutually statistically independent if and only if
f(x1, x2, . . . , xn) = f1(x1)f2(x2) · · · fn(xn)
for all (x1, x2, . . . , xn) within their range.
Example 3.22: Suppose that the shelf life, in years, of a certain perishable food product packaged
in cardboard containers is a random variable whose probability density function is
given by
f(x) =
    e^(−x),   x > 0,
    0,        elsewhere.
Let X1, X2, and X3 represent the shelf lives for three of these containers selected
independently and find P(X1 < 2, 1 < X2 < 3, X3 > 2).
Solution: Since the containers were selected independently, we can assume that the random
variables X1, X2, and X3 are statistically independent, having the joint probability
density
f(x1, x2, x3) = f(x1)f(x2)f(x3) = e^(−x1) e^(−x2) e^(−x3) = e^(−x1−x2−x3),
for x1 > 0, x2 > 0, x3 > 0, and f(x1, x2, x3) = 0 elsewhere. Hence
P(X1 < 2, 1 < X2 < 3, X3 > 2) = ∫_2^∞ ∫_1^3 ∫_0^2 e^(−x1−x2−x3) dx1 dx2 dx3
    = (1 − e^(−2))(e^(−1) − e^(−3)) e^(−2) = 0.0372.
What Are Important Characteristics of Probability Distributions
and Where Do They Come From?
This is an important point in the text to provide the reader with a transition into
the next three chapters. We have given illustrations in both examples and exercises
of practical scientific and engineering situations in which probability distributions
and their properties are used to solve important problems. These probability dis-
tributions, either discrete or continuous, were introduced through phrases like “it
is known that” or “suppose that” or even in some cases “historical evidence sug-
gests that.” These are situations in which the nature of the distribution and even
a good estimate of the probability structure can be determined through historical
data, data from long-term studies, or even large amounts of planned data. The
reader should remember the discussion of the use of histograms in Chapter 1 and
from that recall how frequency distributions are estimated from the histograms.
However, not all probability functions and probability density functions are derived
from large amounts of historical data. There are a substantial number of situa-
tions in which the nature of the scientific scenario suggests a distribution type.
Indeed, many of these are reflected in exercises in both Chapter 2 and this chap-
ter. When independent repeated observations are binary in nature (e.g., defective
or not, survive or not, allergic or not) with value 0 or 1, the distribution covering
this situation is called the binomial distribution and the probability function
is known and will be demonstrated in its generality in Chapter 5. Exercise 3.34
in Section 3.3 and Review Exercise 3.80 are examples, and there are others that
the reader should recognize. The scenario of a continuous distribution in time to
failure, as in Review Exercise 3.69 or Exercise 3.27 on page 93, often suggests a dis-
tribution type called the exponential distribution. These types of illustrations
are merely two of many so-called standard distributions that are used extensively
in real-world problems because the scientific scenario that gives rise to each of them
is recognizable and occurs often in practice. Chapters 5 and 6 cover many of these
types along with some underlying theory concerning their use.
A second part of this transition to material in future chapters deals with the
notion of population parameters or distributional parameters. Recall in
Chapter 1 we discussed the need to use data to provide information about these
parameters. We went to some length in discussing the notions of a mean and
variance and provided a vision for the concepts in the context of a population.
Indeed, the population mean and variance are easily found from the probability
function for the discrete case or probability density function for the continuous
case. These parameters and their importance in the solution of many types of
real-world problems will provide much of the material in Chapters 8 through 17.
Exercises
3.37 Determine the values of c so that the follow-
ing functions represent joint probability distributions
of the random variables X and Y :
(a) f(x, y) = cxy, for x = 1, 2, 3; y = 1, 2, 3;
(b) f(x, y) = c|x − y|, for x = −2, 0, 2; y = −2, 3.
3.38 If the joint probability distribution of X and Y
is given by
f(x, y) = (x + y)/30,   for x = 0, 1, 2, 3; y = 0, 1, 2,
find
(a) P(X ≤ 2, Y = 1);
(b) P(X > 2, Y ≤ 1);
(c) P(X > Y );
(d) P(X + Y = 4).
3.39 From a sack of fruit containing 3 oranges, 2 ap-
ples, and 3 bananas, a random sample of 4 pieces of
fruit is selected. If X is the number of oranges and Y
is the number of apples in the sample, find
(a) the joint probability distribution of X and Y ;
(b) P[(X, Y ) ∈ A], where A is the region that is given
by {(x, y) | x + y ≤ 2}.
3.40 A fast-food restaurant operates both a drive-
through facility and a walk-in facility. On a randomly
selected day, let X and Y , respectively, be the propor-
tions of the time that the drive-through and walk-in
facilities are in use, and suppose that the joint density
function of these random variables is
f(x, y) =
    (2/3)(x + 2y),   0 ≤ x ≤ 1, 0 ≤ y ≤ 1,
    0,               elsewhere.
(a) Find the marginal density of X.
(b) Find the marginal density of Y .
(c) Find the probability that the drive-through facility
is busy less than one-half of the time.
3.41 A candy company distributes boxes of choco-
lates with a mixture of creams, toffees, and cordials.
Suppose that the weight of each box is 1 kilogram, but
the individual weights of the creams, toffees, and cor-
dials vary from box to box. For a randomly selected
box, let X and Y represent the weights of the creams
and the toffees, respectively, and suppose that the joint
density function of these variables is
f(x, y) =
    24xy,   0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 1,
    0,      elsewhere.
(a) Find the probability that in a given box the cordials
account for more than 1/2 of the weight.
(b) Find the marginal density for the weight of the
creams.
(c) Find the probability that the weight of the toffees
in a box is less than 1/8 of a kilogram if it is known
that creams constitute 3/4 of the weight.
3.42 Let X and Y denote the lengths of life, in years,
of two components in an electronic system. If the joint
density function of these variables is
f(x, y) =
    e^(−(x+y)),   x > 0, y > 0,
    0,            elsewhere,
find P(0 < X < 1 | Y = 2).
3.43 Let X denote the reaction time, in seconds, to
a certain stimulus and Y denote the temperature (°F)
at which a certain reaction starts to take place. Sup-
pose that two random variables X and Y have the joint
density
f(x, y) =
    4xy,   0 < x < 1, 0 < y < 1,
    0,     elsewhere.
Find
(a) P(0 ≤ X ≤ 1/2 and 1/4 ≤ Y ≤ 1/2);
(b) P(X < Y ).
3.44 Each rear tire on an experimental airplane is
supposed to be filled to a pressure of 40 pounds per
square inch (psi). Let X denote the actual air pressure
for the right tire and Y denote the actual air pressure
for the left tire. Suppose that X and Y are random
variables with the joint density function
f(x, y) =
    k(x^2 + y^2),   30 ≤ x < 50, 30 ≤ y < 50,
    0,              elsewhere.
(a) Find k.
(b) Find P(30 ≤ X ≤ 40 and 40 ≤ Y < 50).
(c) Find the probability that both tires are underfilled.
3.45 Let X denote the diameter of an armored elec-
tric cable and Y denote the diameter of the ceramic
mold that makes the cable. Both X and Y are scaled
so that they range between 0 and 1. Suppose that X
and Y have the joint density
f(x, y) =
    1/y,   0 < x < y < 1,
    0,     elsewhere.
Find P(X + Y > 1/2).
3.46 Referring to Exercise 3.38, find
(a) the marginal distribution of X;
(b) the marginal distribution of Y .
3.47 The amount of kerosene, in thousands of liters,
in a tank at the beginning of any day is a random
amount Y from which a random amount X is sold dur-
ing that day. Suppose that the tank is not resupplied
during the day so that x ≤ y, and assume that the
joint density function of these variables is
f(x, y) =
    2,   0 < x ≤ y < 1,
    0,   elsewhere.
(a) Determine if X and Y are independent.
(b) Find P(1/4 < X < 1/2 | Y = 3/4).
3.48 Referring to Exercise 3.39, find
(a) f(y|2) for all values of y;
(b) P(Y = 0 | X = 2).
3.49 Let X denote the number of times a certain nu-
merical control machine will malfunction: 1, 2, or 3
times on any given day. Let Y denote the number of
times a technician is called on an emergency call. Their
joint probability distribution is given as
                         x
  f(x, y)      1      2      3
      1      0.05   0.05   0.10
  y   3      0.05   0.10   0.35
      5      0.00   0.20   0.10
(a) Evaluate the marginal distribution of X.
(b) Evaluate the marginal distribution of Y .
(c) Find P(Y = 3 | X = 2).
3.50 Suppose that X and Y have the following joint
probability distribution:
                  x
  f(x, y)      2      4
      1      0.10   0.15
  y   3      0.20   0.30
      5      0.10   0.15
(a) Find the marginal distribution of X.
(b) Find the marginal distribution of Y .
3.51 Three cards are drawn without replacement
from the 12 face cards (jacks, queens, and kings) of
an ordinary deck of 52 playing cards. Let X be the
number of kings selected and Y the number of jacks.
Find
(a) the joint probability distribution of X and Y ;
(b) P[(X, Y ) ∈ A], where A is the region given by
{(x, y) | x + y ≥ 2}.
3.52 A coin is tossed twice. Let Z denote the number
of heads on the first toss and W the total number of
heads on the 2 tosses. If the coin is unbalanced and a
head has a 40% chance of occurring, find
(a) the joint probability distribution of W and Z;
(b) the marginal distribution of W;
(c) the marginal distribution of Z;
(d) the probability that at least 1 head occurs.
3.53 Given the joint density function
f(x, y) =
    (6 − x − y)/8,   0 < x < 2, 2 < y < 4,
    0,               elsewhere,
find P(1 < Y < 3 | X = 1).
3.54 Determine whether the two random variables of
Exercise 3.49 are dependent or independent.
3.55 Determine whether the two random variables of
Exercise 3.50 are dependent or independent.
3.56 The joint density function of the random vari-
ables X and Y is
f(x, y) =
    6x,   0 < x < 1, 0 < y < 1 − x,
    0,    elsewhere.
(a) Show that X and Y are not independent.
(b) Find P(X > 0.3 | Y = 0.5).
3.57 Let X, Y , and Z have the joint probability den-
sity function
f(x, y, z) =
    kxy^2 z,   0 < x, y < 1, 0 < z < 2,
    0,         elsewhere.
(a) Find k.
(b) Find P(X < 1/4, Y > 1/2, 1 < Z < 2).
3.58 Determine whether the two random variables of
Exercise 3.43 are dependent or independent.
3.59 Determine whether the two random variables of
Exercise 3.44 are dependent or independent.
3.60 The joint probability density function of the ran-
dom variables X, Y , and Z is
f(x, y, z) =
    4xyz^2/9,   0 < x, y < 1, 0 < z < 3,
    0,          elsewhere.
Find
(a) the joint marginal density function of Y and Z;
(b) the marginal density of Y ;
(c) P(1/4 < X < 1/2, Y > 1/3, 1 < Z < 2);
(d) P(0 < X < 1/2 | Y = 1/4, Z = 2).
Review Exercises
3.61 A tobacco company produces blends of tobacco,
with each blend containing various proportions of
Turkish, domestic, and other tobaccos. The propor-
tions of Turkish and domestic in a blend are random
variables with joint density function (X = Turkish and
Y = domestic)
f(x, y) =
    24xy,   0 ≤ x, y ≤ 1, x + y ≤ 1,
    0,      elsewhere.
(a) Find the probability that in a given box the Turkish
tobacco accounts for over half the blend.
(b) Find the marginal density function for the propor-
tion of the domestic tobacco.
(c) Find the probability that the proportion of Turk-
ish tobacco is less than 1/8 if it is known that the
blend contains 3/4 domestic tobacco.
3.62 An insurance company offers its policyholders a
number of different premium payment options. For a
randomly selected policyholder, let X be the number of
months between successive payments. The cumulative
distribution function of X is
F(x) =
    0,     if x < 1,
    0.4,   if 1 ≤ x < 3,
    0.6,   if 3 ≤ x < 5,
    0.8,   if 5 ≤ x < 7,
    1.0,   if x ≥ 7.
(a) What is the probability mass function of X?
(b) Compute P(4 < X ≤ 7).
3.63 Two electronic components of a missile system
work in harmony for the success of the total system.
Let X and Y denote the life in hours of the two com-
ponents. The joint density of X and Y is
f(x, y) =
    ye^(−y(1+x)),   x, y ≥ 0,
    0,              elsewhere.
(a) Give the marginal density functions for both ran-
dom variables.
(b) What is the probability that the lives of both com-
ponents will exceed 2 hours?
3.64 A service facility operates with two service lines.
On a randomly selected day, let X be the proportion of
time that the first line is in use whereas Y is the pro-
portion of time that the second line is in use. Suppose
that the joint probability density function for (X, Y ) is
f(x, y) =
    (3/2)(x^2 + y^2),   0 ≤ x, y ≤ 1,
    0,                  elsewhere.
(a) Compute the probability that neither line is busy
more than half the time.
(b) Find the probability that the first line is busy more
than 75% of the time.
3.65 Let the number of phone calls received by a
switchboard during a 5-minute interval be a random
variable X with probability function
f(x) = e^(−2) 2^x / x!,   for x = 0, 1, 2, . . . .
(a) Determine the probability that X equals 0, 1, 2, 3,
4, 5, and 6.
(b) Graph the probability mass function for these val-
ues of x.
(c) Determine the cumulative distribution function for
these values of X.
3.66 Consider the random variables X and Y with
joint density function
f(x, y) =
    x + y,   0 ≤ x, y ≤ 1,
    0,       elsewhere.
(a) Find the marginal distributions of X and Y .
(b) Find P(X > 0.5, Y > 0.5).
3.67 An industrial process manufactures items that
can be classified as either defective or not defective.
The probability that an item is defective is 0.1. An
experiment is conducted in which 5 items are drawn
randomly from the process. Let the random variable X
be the number of defectives in this sample of 5. What
is the probability mass function of X?
3.68 Consider the following joint probability density
function of the random variables X and Y :
f(x, y) =
    (3x − y)/9,   1 < x < 3, 1 < y < 2,
    0,            elsewhere.
(a) Find the marginal density functions of X and Y .
(b) Are X and Y independent?
(c) Find P(X > 2).
3.69 The life span in hours of an electrical compo-
nent is a random variable with cumulative distribution
function
F(x) =
    1 − e^(−x/50),   x > 0,
    0,               elsewhere.
(a) Determine its probability density function.
(b) Determine the probability that the life span of such
a component will exceed 70 hours.
3.70 Pairs of pants are being produced by a particu-
lar outlet facility. The pants are checked by a group of
10 workers. The workers inspect pairs of pants taken
randomly from the production line. Each inspector is
assigned a number from 1 through 10. A buyer selects
a pair of pants for purchase. Let the random variable
X be the inspector number.
(a) Give a reasonable probability mass function for X.
(b) Plot the cumulative distribution function for X.
3.71 The shelf life of a product is a random variable
that is related to consumer acceptance. It turns out
that the shelf life Y in days of a certain type of bakery
product has a density function
f(y) = (1/2) e^{−y/2}, 0 ≤ y < ∞, and f(y) = 0 elsewhere.
What fraction of the loaves of this product stocked to-
day would you expect to be sellable 3 days from now?
3.72 Passenger congestion is a service problem in air-
ports. Trains are installed within the airport to reduce
the congestion. With the use of the train, the time X in
minutes that it takes to travel from the main terminal
to a particular concourse has density function
f(x) = 1/10, 0 ≤ x ≤ 10, and f(x) = 0 elsewhere.
(a) Show that the above is a valid probability density
function.
(b) Find the probability that the time it takes a pas-
senger to travel from the main terminal to the con-
course will not exceed 7 minutes.
3.73 Impurities in a batch of final product of a chem-
ical process often reflect a serious problem. From con-
siderable plant data gathered, it is known that the pro-
portion Y of impurities in a batch has a density func-
tion given by
f(y) = 10(1 − y)^9, 0 ≤ y ≤ 1, and f(y) = 0 elsewhere.
(a) Verify that the above is a valid density function.
(b) A batch is considered not sellable, and therefore not acceptable, if the percentage of impurities exceeds 60%. With the current quality of the process, what percentage of batches is not acceptable?
3.74 The time Z in minutes between calls to an elec-
trical supply system has the probability density func-
tion
f(z) = (1/10) e^{−z/10}, 0 < z < ∞, and f(z) = 0 elsewhere.
(a) What is the probability that there are no calls
within a 20-minute time interval?
(b) What is the probability that the first call comes
within 10 minutes of opening?
3.75 A chemical system that results from a chemical
reaction has two important components among others
in a blend. The joint distribution describing the pro-
portions X1 and X2 of these two components is given
by
f(x1, x2) = 2, 0 < x1 < x2 < 1, and f(x1, x2) = 0 elsewhere.
(a) Give the marginal distribution of X1.
(b) Give the marginal distribution of X2.
(c) What is the probability that component proportions produce the results X1 < 0.2 and X2 > 0.5?
(d) Give the conditional distribution f_{X1|X2}(x1|x2).
3.76 Consider the situation of Review Exercise 3.75.
But suppose the joint distribution of the two propor-
tions is given by
f(x1, x2) = 6x2, 0 < x2 < x1 < 1, and f(x1, x2) = 0 elsewhere.
(a) Give the marginal distribution f_{X1}(x1) of the proportion X1 and verify that it is a valid density function.
(b) What is the probability that proportion X2 is less
than 0.5, given that X1 is 0.7?
3.77 Consider the random variables X and Y that
represent the number of vehicles that arrive at two sep-
arate street corners during a certain 2-minute period.
These street corners are fairly close together so it is im-
portant that traffic engineers deal with them jointly if
necessary. The joint distribution of X and Y is known
to be
f(x, y) = (9/16)(1/4)^{x+y}, for x = 0, 1, 2, . . . and y = 0, 1, 2, . . . .
(a) Are the two random variables X and Y indepen-
dent? Explain why or why not.
(b) What is the probability that during the time pe-
riod in question less than 4 vehicles arrive at the
two street corners?
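Reading the joint pmf as f(x, y) = (9/16)(1/4)^{x+y} for x, y = 0, 1, 2, . . ., a brute-force numeric sketch (names are mine) can check that the pmf sums to 1, that it factors into two identical marginals (the criterion for independence), and can tally the probability of fewer than 4 total arrivals:

```python
# Sketch: numeric checks for the joint pmf f(x, y) = (9/16)(1/4)^(x+y).
def f(x: int, y: int) -> float:
    return (9 / 16) * 0.25 ** (x + y)

def marginal(x: int) -> float:       # g(x) = (3/4)(1/4)^x, same for h(y)
    return 0.75 * 0.25 ** x

total = sum(f(x, y) for x in range(50) for y in range(50))
factors = all(abs(f(x, y) - marginal(x) * marginal(y)) < 1e-15
              for x in range(12) for y in range(12))
# P(X + Y < 4): only pairs with x + y <= 3 contribute
p_lt_4 = sum(f(x, y) for x in range(4) for y in range(4) if x + y < 4)
```

Because f(x, y) factors exactly into marginal(x) · marginal(y), the check confirms independence.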
3.78 The behavior of series of components plays a
huge role in scientific and engineering reliability prob-
lems. The reliability of the entire system is certainly
no better than that of the weakest component in the
series. In a series system, the components operate in-
dependently of each other. In a particular system con-
taining three components, the probabilities of meeting
specifications for components 1, 2, and 3, respectively,
are 0.95, 0.99, and 0.92. What is the probability that
the entire system works?
3.79 Another type of system that is employed in en-
gineering work is a group of parallel components or a
parallel system. In this more conservative approach,
the probability that the system operates is larger than
the probability that any component operates. The sys-
tem fails only when all components fail. Consider a sit-
uation in which there are 4 independent components in
a parallel system with probability of operation given by
Component 1: 0.95; Component 2: 0.94;
Component 3: 0.90; Component 4: 0.97.
What is the probability that the system does not fail?
3.80 Consider a system of components in which there
are 5 independent components, each of which possesses
an operational probability of 0.92. The system does
have a redundancy built in such that it does not fail
if 3 out of the 5 components are operational. What is
the probability that the total system is operational?
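This "works if at least 3 of 5 components work" structure is a binomial tail sum. A hedged sketch of that computation (function name is mine, not from the text):

```python
from math import comb

# Sketch: "at least k of n" reliability with independent components,
# each operational with probability p, as a binomial tail sum.
def at_least_k(k: int, n: int, p: float) -> float:
    return sum(comb(n, x) * p ** x * (1 - p) ** (n - x)
               for x in range(k, n + 1))

reliability = at_least_k(3, 5, 0.92)   # the system of Exercise 3.80
```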
3.81 Project: Take 5 class periods to observe the
shoe color of individuals in class. Assume the shoe
color categories are red, white, black, brown, and other.
Complete a frequency table for each color category.
(a) Estimate and interpret the meaning of the proba-
bility distribution.
(b) What is the estimated probability that in the next
class period a randomly selected student will be
wearing a red or a white pair of shoes?
3.5 Potential Misconceptions and Hazards;
Relationship to Material in Other Chapters
In future chapters it will become apparent that probability distributions provide
the structure through which computed probabilities aid in the evaluation
and understanding of a process. For example, in Review Exercise 3.65, the
probability distribution that quantifies the probability of a heavy load during cer-
tain time periods can be very useful in planning for any changes in the system.
Review Exercise 3.69 describes a scenario in which the life span of an electronic
component is studied. Knowledge of the probability structure for the component
will contribute significantly to an understanding of the reliability of a large system
of which the component is a part. In addition, an understanding of the general
nature of probability distributions will enhance understanding of the concept of
a P-value, which was introduced briefly in Chapter 1 and will play a major role
beginning in Chapter 10 and extending throughout the balance of the text.
Chapters 4, 5, and 6 depend heavily on the material in this chapter. In Chapter
4, we discuss the meaning of important parameters in probability distributions.
These important parameters quantify notions of central tendency and variabil-
ity in a system. In fact, knowledge of these quantities themselves, quite apart
from the complete distribution, can provide insight into the nature of the system.
Chapters 5 and 6 will deal with engineering, biological, or general scientific scenar-
ios that identify special types of distributions. For example, the structure of the
probability function in Review Exercise 3.65 will easily be identified under certain
assumptions discussed in Chapter 5. The same holds for the scenario of Review
Exercise 3.69. This is a special type of time to failure problem for which the
probability density function will be discussed in Chapter 6.
As for potential hazards in using the material of this chapter, the warning
to the reader is not to read more into the material than is evident. The general
nature of the probability distribution for a specific scientific phenomenon is not
obvious from what is learned in this chapter. The purpose of this chapter is for
readers to learn how to manipulate a probability distribution, not to learn how
to identify a specific type. Chapters 5 and 6 go a long way toward identification
according to the general nature of the scientific system.
Chapter 4
Mathematical Expectation
4.1 Mean of a Random Variable
In Chapter 1, we discussed the sample mean, which is the arithmetic mean of the
data. Now consider the following. If two coins are tossed 16 times and X is the
number of heads that occur per toss, then the values of X are 0, 1, and 2. Suppose
that the experiment yields no heads, one head, and two heads a total of 4, 7, and 5
times, respectively. The average number of heads per toss of the two coins is then
[(0)(4) + (1)(7) + (2)(5)]/16 = 1.06.
This is an average value of the data and yet it is not a possible outcome of {0, 1, 2}.
Hence, an average is not necessarily a possible outcome for the experiment. For
instance, a salesman’s average monthly income is not likely to be equal to any of
his monthly paychecks.
Let us now restructure our computation for the average number of heads so as
to have the following equivalent form:
(0)(4/16) + (1)(7/16) + (2)(5/16) = 1.06.
The numbers 4/16, 7/16, and 5/16 are the fractions of the total tosses resulting in 0,
1, and 2 heads, respectively. These fractions are also the relative frequencies for the
different values of X in our experiment. In fact, then, we can calculate the mean,
or average, of a set of data by knowing the distinct values that occur and their
relative frequencies, without any knowledge of the total number of observations in
our set of data. Therefore, if 4/16, or 1/4, of the tosses result in no heads, 7/16 of
the tosses result in one head, and 5/16 of the tosses result in two heads, the mean
number of heads per toss would be 1.06 no matter whether the total number of
tosses were 16, 1000, or even 10,000.
This method of relative frequencies is used to calculate the average number of
heads per toss of two coins that we might expect in the long run. We shall refer
to this average value as the mean of the random variable X or the mean of
the probability distribution of X and write it as μ_X or simply as μ when it is
clear to which random variable we refer. It is also common among statisticians to
refer to this mean as the mathematical expectation, or the expected value of the
random variable X, and denote it as E(X).
Assuming that 1 fair coin was tossed twice, we find that the sample space for
our experiment is
S = {HH, HT, TH, TT}.
Since the 4 sample points are all equally likely, it follows that
P(X = 0) = P(TT) = 1/4, P(X = 1) = P(TH) + P(HT) = 1/2,
and
P(X = 2) = P(HH) = 1/4,
where a typical element, say TH, indicates that the first toss resulted in a tail
followed by a head on the second toss. Now, these probabilities are just the relative
frequencies for the given events in the long run. Therefore,
μ = E(X) = (0)(1/4) + (1)(1/2) + (2)(1/4) = 1.
This result means that a person who tosses 2 coins over and over again will, on the
average, get 1 head per toss.
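The relative-frequency calculation above is easy to mimic numerically. A minimal sketch using exact rational arithmetic (variable names are mine):

```python
from fractions import Fraction

# mu = sum of x * P(X = x) for the two-coin pmf just derived:
# P(X=0) = 1/4, P(X=1) = 1/2, P(X=2) = 1/4.
pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
mu = sum(x * p for x, p in pmf.items())
```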
The method described above for calculating the expected number of heads
per toss of 2 coins suggests that the mean, or expected value, of any discrete
random variable may be obtained by multiplying each of the values x1, x2, . . . , xn
of the random variable X by its corresponding probability f(x1), f(x2), . . . , f(xn)
and summing the products. This is true, however, only if the random variable is
discrete. In the case of continuous random variables, the definition of an expected
value is essentially the same with summations replaced by integrations.
Definition 4.1: Let X be a random variable with probability distribution f(x). The mean, or
expected value, of X is
μ = E(X) = Σ_x x f(x)
if X is discrete, and
μ = E(X) = ∫_{−∞}^{∞} x f(x) dx
if X is continuous.
The reader should note that the way to calculate the expected value, or mean,
shown here is different from the way to calculate the sample mean described in
Chapter 1, where the sample mean is obtained by using data. In mathematical
expectation, the expected value is calculated by using the probability distribution.
However, the mean is usually understood as a “center” value of the underlying
distribution if we use the expected value, as in Definition 4.1.
Example 4.1: A lot containing 7 components is sampled by a quality inspector; the lot contains
4 good components and 3 defective components. A sample of 3 is taken by the
inspector. Find the expected value of the number of good components in this
sample.
Solution: Let X represent the number of good components in the sample. The probability
distribution of X is
f(x) = C(4, x) C(3, 3 − x) / C(7, 3), x = 0, 1, 2, 3,
where C(n, k) denotes the number of combinations of n items taken k at a time.
Simple calculations yield f(0) = 1/35, f(1) = 12/35, f(2) = 18/35, and f(3) =
4/35. Therefore,
μ = E(X) = (0)(1/35) + (1)(12/35) + (2)(18/35) + (3)(4/35) = 12/7 = 1.7.
Thus, if a sample of size 3 is selected at random over and over again from a lot
of 4 good components and 3 defective components, it will contain, on average, 1.7
good components.
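Example 4.1 can be recomputed exactly with a short sketch (names are mine), treating X as hypergeometric over 3 draws from 4 good and 3 defective components:

```python
from fractions import Fraction
from math import comb

# Exact pmf of Example 4.1: f(x) = C(4, x) C(3, 3 - x) / C(7, 3).
def f(x: int) -> Fraction:
    return Fraction(comb(4, x) * comb(3, 3 - x), comb(7, 3))

mu = sum(x * f(x) for x in range(4))   # E(X) = 12/7, about 1.7
```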
Example 4.2: A salesperson for a medical device company has two appointments on a given day.
At the first appointment, he believes that he has a 70% chance to make the deal,
from which he can earn $1000 commission if successful. On the other hand, he
thinks he only has a 40% chance to make the deal at the second appointment,
from which, if successful, he can make $1500. What is his expected commission
based on his own probability belief? Assume that the appointment results are
independent of each other.
Solution: First, we know that the salesperson, for the two appointments, can have 4 possible
commission totals: $0, $1000, $1500, and $2500. We then need to calculate their
associated probabilities. By independence, we obtain
f($0) = (1 − 0.7)(1 − 0.4) = 0.18, f($2500) = (0.7)(0.4) = 0.28,
f($1000) = (0.7)(1 − 0.4) = 0.42, and f($1500) = (1 − 0.7)(0.4) = 0.12.
Therefore, the expected commission for the salesperson is
E(X) = ($0)(0.18) + ($1000)(0.42) + ($1500)(0.12) + ($2500)(0.28)
= $1300.
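The same expectation can be obtained by enumerating the four outcomes of the two independent appointments rather than listing the commission totals by hand; a hedged sketch (variable names are mine):

```python
# Recomputing Example 4.2: enumerate success/failure of each appointment.
p1, c1 = 0.7, 1000   # first appointment: P(success) and commission
p2, c2 = 0.4, 1500   # second appointment
expected = sum(
    (p1 if s1 else 1 - p1) * (p2 if s2 else 1 - p2) * (s1 * c1 + s2 * c2)
    for s1 in (0, 1) for s2 in (0, 1)
)
```

By linearity of expectation the enumeration is not even necessary: (0.7)($1000) + (0.4)($1500) = $1300 directly.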
Examples 4.1 and 4.2 are designed to allow the reader to gain some insight
into what we mean by the expected value of a random variable. In both cases the
random variables are discrete. We follow with an example involving a continuous
random variable, where an engineer is interested in the mean life of a certain
type of electronic device. This is an illustration of a time to failure problem that
occurs often in practice. The expected value of the life of a device is an important
parameter for its evaluation.
Example 4.3: Let X be the random variable that denotes the life in hours of a certain electronic
device. The probability density function is
f(x) = 20,000/x^3, x > 100, and f(x) = 0 elsewhere.
Find the expected life of this type of device.
Solution: Using Definition 4.1, we have
μ = E(X) = ∫_{100}^{∞} x (20,000/x^3) dx = ∫_{100}^{∞} (20,000/x^2) dx = 200.
Therefore, we can expect this type of device to last, on average, 200 hours.
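The integral can also be cross-checked numerically. A rough sketch (the helper name and the truncation point B are my choices): integrate x·f(x) = 20,000/x^2 over [100, B] with the midpoint rule, then add back the exact tail beyond B:

```python
# Numeric cross-check of Example 4.3's mean life of 200 hours.
def mean_life(B: float = 1e5, n: int = 400_000) -> float:
    h = (B - 100) / n
    body = h * sum(20000 / (100 + (i + 0.5) * h) ** 2 for i in range(n))
    return body + 20000 / B   # exact value of the integral from B to infinity

mu = mean_life()   # should be very close to 200
```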
Now let us consider a new random variable g(X), which depends on X; that
is, each value of g(X) is determined by the value of X. For instance, g(X) might
be X^2 or 3X − 1, and whenever X assumes the value 2, g(X) assumes the value
g(2). In particular, if X is a discrete random variable with probability distribution
f(x), for x = −1, 0, 1, 2, and g(X) = X^2, then
P[g(X) = 0] = P(X = 0) = f(0),
P[g(X) = 1] = P(X = −1) + P(X = 1) = f(−1) + f(1),
P[g(X) = 4] = P(X = 2) = f(2),
and so the probability distribution of g(X) may be written
g(x)              0      1               4
P[g(X) = g(x)]   f(0)   f(−1) + f(1)   f(2)
By the definition of the expected value of a random variable, we obtain
μ_{g(X)} = E[g(X)] = 0 f(0) + 1 [f(−1) + f(1)] + 4 f(2)
= (−1)^2 f(−1) + (0)^2 f(0) + (1)^2 f(1) + (2)^2 f(2) = Σ_x g(x) f(x).
This result is generalized in Theorem 4.1 for both discrete and continuous random
variables.
Theorem 4.1: Let X be a random variable with probability distribution f(x). The expected
value of the random variable g(X) is
μ_{g(X)} = E[g(X)] = Σ_x g(x) f(x)
if X is discrete, and
μ_{g(X)} = E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx
if X is continuous.
Example 4.4: Suppose that the number of cars X that pass through a car wash between 4:00
P.M. and 5:00 P.M. on any sunny Friday has the following probability distribution:
x          4      5      6     7     8     9
P(X = x)   1/12   1/12   1/4   1/4   1/6   1/6
Let g(X) = 2X−1 represent the amount of money, in dollars, paid to the attendant
by the manager. Find the attendant’s expected earnings for this particular time
period.
Solution: By Theorem 4.1, the attendant can expect to receive
E[g(X)] = E(2X − 1) = Σ_{x=4}^{9} (2x − 1) f(x)
= (7)(1/12) + (9)(1/12) + (11)(1/4) + (13)(1/4) + (15)(1/6) + (17)(1/6)
= $12.67.
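The same sum can be carried out exactly in rational arithmetic; a minimal sketch (names are mine):

```python
from fractions import Fraction

# Exact version of Example 4.4: E(2X - 1) under the car-count pmf.
pmf = {4: Fraction(1, 12), 5: Fraction(1, 12), 6: Fraction(1, 4),
       7: Fraction(1, 4), 8: Fraction(1, 6), 9: Fraction(1, 6)}
earnings = sum((2 * x - 1) * p for x, p in pmf.items())   # 38/3 ~ 12.67
```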
Example 4.5: Let X be a random variable with density function
f(x) = x^2/3, −1 < x < 2, and f(x) = 0 elsewhere.
Find the expected value of g(X) = 4X + 3.
Solution: By Theorem 4.1, we have
E(4X + 3) = ∫_{−1}^{2} (4x + 3)(x^2/3) dx = (1/3) ∫_{−1}^{2} (4x^3 + 3x^2) dx = 8.
We shall now extend our concept of mathematical expectation to the case of
two random variables X and Y with joint probability distribution f(x, y).
Definition 4.2: Let X and Y be random variables with joint probability distribution f(x, y). The
mean, or expected value, of the random variable g(X, Y ) is
μ_{g(X,Y)} = E[g(X, Y)] = Σ_x Σ_y g(x, y) f(x, y)
if X and Y are discrete, and
μ_{g(X,Y)} = E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f(x, y) dx dy
if X and Y are continuous.
Generalization of Definition 4.2 for the calculation of mathematical expectations
of functions of several random variables is straightforward.
Example 4.6: Let X and Y be the random variables with joint probability distribution indicated
in Table 3.1 on page 96. Find the expected value of g(X, Y ) = XY . The table is
reprinted here for convenience.
f(x, y)          x = 0    x = 1    x = 2    Row Totals
y = 0            3/28     9/28     3/28     15/28
y = 1            3/14     3/14     0        3/7
y = 2            1/28     0        0        1/28
Column Totals    5/14     15/28    3/28     1
Solution: By Definition 4.2, we write
E(XY) = Σ_{x=0}^{2} Σ_{y=0}^{2} x y f(x, y)
= (0)(0)f(0, 0) + (0)(1)f(0, 1) + (1)(0)f(1, 0) + (1)(1)f(1, 1) + (2)(0)f(2, 0)
= f(1, 1) = 3/14.
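Representing the joint pmf as a dictionary keyed by (x, y) makes this kind of sum mechanical; a sketch in exact arithmetic (names are mine), with unlisted pairs having probability zero:

```python
from fractions import Fraction

# Example 4.6's joint pmf (the reprinted Table 3.1) as a dict.
f = {(0, 0): Fraction(3, 28), (1, 0): Fraction(9, 28), (2, 0): Fraction(3, 28),
     (0, 1): Fraction(3, 14), (1, 1): Fraction(3, 14),
     (0, 2): Fraction(1, 28)}
e_xy = sum(x * y * p for (x, y), p in f.items())   # only (1,1) contributes
```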
Example 4.7: Find E(Y/X) for the density function
f(x, y) = x(1 + 3y^2)/4, 0 < x < 2, 0 < y < 1, and f(x, y) = 0 elsewhere.
Solution: We have
E(Y/X) = ∫_0^1 ∫_0^2 [y(1 + 3y^2)/4] dx dy = ∫_0^1 [(y + 3y^3)/2] dy = 5/8.
Note that if g(X, Y ) = X in Definition 4.2, we have
E(X) = Σ_x Σ_y x f(x, y) = Σ_x x g(x) (discrete case),
E(X) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f(x, y) dy dx = ∫_{−∞}^{∞} x g(x) dx (continuous case),
where g(x) is the marginal distribution of X. Therefore, in calculating E(X) over
a two-dimensional space, one may use either the joint probability distribution of
X and Y or the marginal distribution of X. Similarly, we define
E(Y) = Σ_y Σ_x y f(x, y) = Σ_y y h(y) (discrete case),
E(Y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(x, y) dx dy = ∫_{−∞}^{∞} y h(y) dy (continuous case),
where h(y) is the marginal distribution of the random variable Y .
Exercises
4.1 The probability distribution of X, the number of
imperfections per 10 meters of a synthetic fabric in con-
tinuous rolls of uniform width, is given in Exercise 3.13
on page 92 as
x 0 1 2 3 4
f(x) 0.41 0.37 0.16 0.05 0.01
Find the average number of imperfections per 10 me-
ters of this fabric.
4.2 The probability distribution of the discrete ran-
dom variable X is
f(x) = C(3, x) (1/4)^x (3/4)^{3−x}, x = 0, 1, 2, 3.
Find the mean of X.
4.3 Find the mean of the random variable T repre-
senting the total of the three coins in Exercise 3.25 on
page 93.
4.4 A coin is biased such that a head is three times
as likely to occur as a tail. Find the expected number
of tails when this coin is tossed twice.
4.5 In a gambling game, a woman is paid $3 if she
draws a jack or a queen and $5 if she draws a king or
an ace from an ordinary deck of 52 playing cards. If
she draws any other card, she loses. How much should
she pay to play if the game is fair?
4.6 An attendant at a car wash is paid according to
the number of cars that pass through. Suppose the
probabilities are 1/12, 1/12, 1/4, 1/4, 1/6, and 1/6,
respectively, that the attendant receives $7, $9, $11,
$13, $15, or $17 between 4:00 P.M. and 5:00 P.M. on
any sunny Friday. Find the attendant’s expected earn-
ings for this particular period.
4.7 By investing in a particular stock, a person can
make a profit in one year of $4000 with probability 0.3
or take a loss of $1000 with probability 0.7. What is
this person’s expected gain?
4.8 Suppose that an antique jewelry dealer is inter-
ested in purchasing a gold necklace for which the prob-
abilities are 0.22, 0.36, 0.28, and 0.14, respectively, that
she will be able to sell it for a profit of $250, sell it for
a profit of $150, break even, or sell it for a loss of $150.
What is her expected profit?
4.9 A private pilot wishes to insure his airplane for
$200,000. The insurance company estimates that a to-
tal loss will occur with probability 0.002, a 50% loss
with probability 0.01, and a 25% loss with probability
0.1. Ignoring all other partial losses, what premium
should the insurance company charge each year to re-
alize an average profit of $500?
4.10 Two tire-quality experts examine stacks of tires
and assign a quality rating to each tire on a 3-point
scale. Let X denote the rating given by expert A and
Y denote the rating given by B. The following table
gives the joint distribution for X and Y .
y
f(x, y) 1 2 3
1 0.10 0.05 0.02
x 2 0.10 0.35 0.05
3 0.03 0.10 0.20
Find μ_X and μ_Y.
4.11 The density function of coded measurements of
the pitch diameter of threads of a fitting is
f(x) = 4/[π(1 + x^2)], 0 < x < 1, and f(x) = 0 elsewhere.
Find the expected value of X.
4.12 If a dealer’s profit, in units of $5000, on a new
automobile can be looked upon as a random variable
X having the density function
f(x) = 2(1 − x), 0 < x < 1, and f(x) = 0 elsewhere,
find the average profit per automobile.
4.13 The density function of the continuous random
variable X, the total number of hours, in units of 100
hours, that a family runs a vacuum cleaner over a pe-
riod of one year, is given in Exercise 3.7 on page 92
as
f(x) = x for 0 < x < 1, f(x) = 2 − x for 1 ≤ x < 2, and f(x) = 0 elsewhere.
Find the average number of hours per year that families
run their vacuum cleaners.
4.14 Find the proportion X of individuals who can be
expected to respond to a certain mail-order solicitation
if X has the density function
f(x) = 2(x + 2)/5, 0 < x < 1, and f(x) = 0 elsewhere.
4.15 Assume that two random variables (X, Y ) are
uniformly distributed on a circle with radius a. Then
the joint probability density function is
f(x, y) = 1/(πa^2), x^2 + y^2 ≤ a^2, and f(x, y) = 0 otherwise.
Find μ_X, the expected value of X.
4.16 Suppose that you are inspecting a lot of 1000
light bulbs, among which 20 are defectives. You choose
two light bulbs randomly from the lot without replacement. Let X1 = 1 if the 1st light bulb is defective and X1 = 0 otherwise, and let X2 = 1 if the 2nd light bulb is defective and X2 = 0 otherwise.
Find the probability that at least one light bulb chosen
is defective. [Hint: Compute P(X1 + X2 = 1).]
4.17 Let X be a random variable with the following
probability distribution:
x −3 6 9
f(x) 1/6 1/2 1/3
Find μ_{g(X)}, where g(X) = (2X + 1)^2.
4.18 Find the expected value of the random variable g(X) = X^2, where X has the probability distribution of Exercise 4.2.
4.19 A large industrial firm purchases several new
word processors at the end of each year, the exact num-
ber depending on the frequency of repairs in the previ-
ous year. Suppose that the number of word processors,
X, purchased each year has the following probability
distribution:
x 0 1 2 3
f(x) 1/10 3/10 2/5 1/5
If the cost of the desired model is $1200 per unit and at the end of the year a refund of 50X^2 dollars will be issued, how much can this firm expect to spend on new word processors during this year?
4.20 A continuous random variable X has the density
function
f(x) = e^{−x}, x > 0, and f(x) = 0 elsewhere.
Find the expected value of g(X) = e^{2X/3}.
4.21 What is the dealer's average profit per automobile if the profit on each automobile is given by g(X) = X^2, where X is a random variable having the density function of Exercise 4.12?
4.22 The hospitalization period, in days, for patients
following treatment for a certain type of kidney disor-
der is a random variable Y = X + 4, where X has the
density function
f(x) = 32/(x + 4)^3, x > 0, and f(x) = 0 elsewhere.
Find the average number of days that a person is hos-
pitalized following treatment for this disorder.
4.23 Suppose that X and Y have the following joint
probability function:
f(x, y)    x = 2   x = 4
y = 1      0.10    0.15
y = 3      0.20    0.30
y = 5      0.10    0.15
(a) Find the expected value of g(X, Y) = XY^2.
(b) Find μ_X and μ_Y.
4.24 Referring to the random variables whose joint
probability distribution is given in Exercise 3.39 on
page 105,
(a) find E(X^2 Y − 2XY);
(b) find μ_X − μ_Y.
4.25 Referring to the random variables whose joint
probability distribution is given in Exercise 3.51 on
page 106, find the mean for the total number of jacks
and kings when 3 cards are drawn without replacement
from the 12 face cards of an ordinary deck of 52 playing
cards.
4.26 Let X and Y be random variables with joint
density function
f(x, y) = 4xy, 0 < x, y < 1, and f(x, y) = 0 elsewhere.
Find the expected value of Z = √(X^2 + Y^2).
4.27 In Exercise 3.27 on page 93, a density function
is given for the time to failure of an important compo-
nent of a DVD player. Find the mean number of hours
to failure of the component and thus the DVD player.
4.28 Consider the information in Exercise 3.28 on
page 93. The problem deals with the weight in ounces
of the product in a cereal box, with
f(x) = 2/5, 23.75 ≤ x ≤ 26.25, and f(x) = 0 elsewhere.
(a) Plot the density function.
(b) Compute the expected value, or mean weight, in
ounces.
(c) Are you surprised at your answer in (b)? Explain
why or why not.
4.29 Exercise 3.29 on page 93 dealt with an impor-
tant particle size distribution characterized by
f(x) = 3x^{−4}, x > 1, and f(x) = 0 elsewhere.
(a) Plot the density function.
(b) Give the mean particle size.
4.30 In Exercise 3.31 on page 94, the distribution of
times before a major repair of a washing machine was
given as
f(y) = (1/4) e^{−y/4}, y ≥ 0, and f(y) = 0 elsewhere.
What is the population mean of the times to repair?
4.31 Consider Exercise 3.32 on page 94.
(a) What is the mean proportion of the budget allo-
cated to environmental and pollution control?
(b) What is the probability that a company selected
at random will have allocated to environmental
and pollution control a proportion that exceeds the
population mean given in (a)?
4.32 In Exercise 3.13 on page 92, the distribution of
the number of imperfections per 10 meters of synthetic
fabric is given by
x 0 1 2 3 4
f(x) 0.41 0.37 0.16 0.05 0.01
(a) Plot the probability function.
(b) Find the expected number of imperfections,
E(X) = μ.
(c) Find E(X^2).
4.2 Variance and Covariance of Random Variables
The mean, or expected value, of a random variable X is of special importance in
statistics because it describes where the probability distribution is centered. By
itself, however, the mean does not give an adequate description of the shape of the
distribution. We also need to characterize the variability in the distribution. In
Figure 4.1, we have the histograms of two discrete probability distributions that
have the same mean, μ = 2, but differ considerably in variability, or the dispersion
of their observations about the mean.
1 2 3 0 1 2 3 4
x
(a) (b)
x
Figure 4.1: Distributions with equal means and unequal dispersions.
The most important measure of variability of a random variable X is obtained
by applying Theorem 4.1 with g(X) = (X − μ)^2. The quantity is referred to as
the variance of the random variable X or the variance of the probability
distribution of X and is denoted by Var(X) or the symbol σ^2_X, or simply by σ^2
when it is clear to which random variable we refer.
Definition 4.3: Let X be a random variable with probability distribution f(x) and mean μ. The
variance of X is
σ^2 = E[(X − μ)^2] = Σ_x (x − μ)^2 f(x), if X is discrete, and
σ^2 = E[(X − μ)^2] = ∫_{−∞}^{∞} (x − μ)^2 f(x) dx, if X is continuous.
The positive square root of the variance, σ, is called the standard deviation of
X.
The quantity x − μ in Definition 4.3 is called the deviation of an observation
from its mean. Since the deviations are squared and then averaged, σ^2 will be much
smaller for a set of x values that are close to μ than it will be for a set of values
that vary considerably from μ.
Example 4.8: Let the random variable X represent the number of automobiles that are used for
official business purposes on any given workday. The probability distribution for
company A [Figure 4.1(a)] is
x 1 2 3
f(x) 0.3 0.4 0.3
and that for company B [Figure 4.1(b)] is
x 0 1 2 3 4
f(x) 0.2 0.1 0.3 0.3 0.1
Show that the variance of the probability distribution for company B is greater
than that for company A.
Solution: For company A, we find that
μA = E(X) = (1)(0.3) + (2)(0.4) + (3)(0.3) = 2.0,
and then
σ^2_A = Σ_{x=1}^{3} (x − 2)^2 f(x) = (1 − 2)^2(0.3) + (2 − 2)^2(0.4) + (3 − 2)^2(0.3) = 0.6.
For company B, we have
μB = E(X) = (0)(0.2) + (1)(0.1) + (2)(0.3) + (3)(0.3) + (4)(0.1) = 2.0,
and then
σ^2_B = Σ_{x=0}^{4} (x − 2)^2 f(x)
= (0 − 2)^2(0.2) + (1 − 2)^2(0.1) + (2 − 2)^2(0.3) + (3 − 2)^2(0.3) + (4 − 2)^2(0.1) = 1.6.
Clearly, the variance of the number of automobiles that are used for official business
purposes is greater for company B than for company A.
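The comparison generalizes to any discrete pmf; a minimal sketch (function and variable names are mine):

```python
# Recomputing Example 4.8: the two pmfs share mean 2 but differ in spread.
def mean_var(pmf):
    mu = sum(x * p for x, p in pmf.items())
    var = sum((x - mu) ** 2 * p for x, p in pmf.items())
    return mu, var

company_a = {1: 0.3, 2: 0.4, 3: 0.3}
company_b = {0: 0.2, 1: 0.1, 2: 0.3, 3: 0.3, 4: 0.1}
mu_a, var_a = mean_var(company_a)
mu_b, var_b = mean_var(company_b)
```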
An alternative and preferred formula for finding σ^2, which often simplifies the
calculations, is stated in the following theorem.
Theorem 4.2: The variance of a random variable X is
σ^2 = E(X^2) − μ^2.
Proof: For the discrete case, we can write
σ^2 = Σ_x (x − μ)^2 f(x) = Σ_x (x^2 − 2μx + μ^2) f(x)
= Σ_x x^2 f(x) − 2μ Σ_x x f(x) + μ^2 Σ_x f(x).
Since μ = Σ_x x f(x) by definition, and Σ_x f(x) = 1 for any discrete probability
distribution, it follows that
σ^2 = Σ_x x^2 f(x) − μ^2 = E(X^2) − μ^2.
For the continuous case the proof is step by step the same, with summations
replaced by integrations.
For the continuous case the proof is step by step the same, with summations
replaced by integrations.
Example 4.9: Let the random variable X represent the number of defective parts for a machine
when 3 parts are sampled from a production line and tested. The following is the
probability distribution of X.
x 0 1 2 3
f(x) 0.51 0.38 0.10 0.01
Using Theorem 4.2, calculate σ^2.
Solution: First, we compute
μ = (0)(0.51) + (1)(0.38) + (2)(0.10) + (3)(0.01) = 0.61.
Now,
E(X^2) = (0)(0.51) + (1)(0.38) + (4)(0.10) + (9)(0.01) = 0.87.
Therefore,
σ^2 = 0.87 − (0.61)^2 = 0.4979.
Example 4.10: The weekly demand for a drinking-water product, in thousands of liters, from
a local chain of efficiency stores is a continuous random variable X having the
probability density
f(x) = 2(x − 1), 1 < x < 2, and f(x) = 0 elsewhere.
Find the mean and variance of X.
122 Chapter 4 Mathematical Expectation
Solution: Calculating E(X) and E(X^2), we have
μ = E(X) = 2 ∫_1^2 x(x − 1) dx = 5/3
and
E(X^2) = 2 ∫_1^2 x^2(x − 1) dx = 17/6.
Therefore,
σ^2 = 17/6 − (5/3)^2 = 1/18.
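Both integrals can be cross-checked numerically; a sketch using composite Simpson's rule (helper names are mine):

```python
# Numeric cross-check of Example 4.10 on [1, 2] for f(x) = 2(x - 1).
def simpson(g, a, b, n=2000):          # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

f = lambda x: 2 * (x - 1)
mu = simpson(lambda x: x * f(x), 1, 2)              # 5/3
var = simpson(lambda x: x * x * f(x), 1, 2) - mu ** 2   # 1/18
```

Since Simpson's rule is exact for cubic integrands, both values match the analytic results to floating-point precision.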
At this point, the variance or standard deviation has meaning only when we
compare two or more distributions that have the same units of measurement.
Therefore, we could compare the variances of the distributions of contents, mea-
sured in liters, of bottles of orange juice from two companies, and the larger value
would indicate the company whose product was more variable or less uniform. It
would not be meaningful to compare the variance of a distribution of heights to
the variance of a distribution of aptitude scores. In Section 4.4, we show how the
standard deviation can be used to describe a single distribution of observations.
We shall now extend our concept of the variance of a random variable X to
include random variables related to X. For the random variable g(X), the variance
is denoted by σ^2_{g(X)} and is calculated by means of the following theorem.
Theorem 4.3: Let X be a random variable with probability distribution f(x). The variance of
the random variable g(X) is
σ^2_{g(X)} = E{[g(X) − μ_{g(X)}]^2} = Σ_x [g(x) − μ_{g(X)}]^2 f(x)
if X is discrete, and
σ^2_{g(X)} = E{[g(X) − μ_{g(X)}]^2} = ∫_{−∞}^{∞} [g(x) − μ_{g(X)}]^2 f(x) dx
if X is continuous.
Proof: Since g(X) is itself a random variable with mean μ_{g(X)} as defined in Theorem 4.1,
it follows from Definition 4.3 that
σ^2_{g(X)} = E{[g(X) − μ_{g(X)}]^2}.
Now, applying Theorem 4.1 again to the random variable [g(X) − μ_{g(X)}]^2 completes
the proof.
Example 4.11: Calculate the variance of g(X) = 2X + 3, where X is a random variable with
probability distribution
x      0    1    2    3
f(x)   1/4  1/8  1/2  1/8
Solution: First, we find the mean of the random variable 2X + 3. According to Theorem 4.1,
μ_{2X+3} = E(2X + 3) = Σ_{x=0}^{3} (2x + 3) f(x) = 6.
Now, using Theorem 4.3, we have
σ²_{2X+3} = E{[(2X + 3) − μ_{2X+3}]²} = E[(2X + 3 − 6)²]
= E(4X² − 12X + 9) = Σ_{x=0}^{3} (4x² − 12x + 9) f(x) = 4.
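Theorem 4.3 can be applied directly in code; exact fractions keep the arithmetic clean (a sketch; the names `f`, `g`, `mu_g`, `var_g` are ours):

```python
from fractions import Fraction as F

# Distribution of Example 4.11
f = {0: F(1, 4), 1: F(1, 8), 2: F(1, 2), 3: F(1, 8)}

g = lambda x: 2 * x + 3                                     # g(X) = 2X + 3
mu_g = sum(g(x) * p for x, p in f.items())                  # Theorem 4.1 -> 6
var_g = sum((g(x) - mu_g) ** 2 * p for x, p in f.items())   # Theorem 4.3 -> 4
```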
Example 4.12: Let X be a random variable having the density function given in Example 4.5 on
page 115. Find the variance of the random variable g(X) = 4X + 3.
Solution: In Example 4.5, we found that μ_{4X+3} = 8. Now, using Theorem 4.3,
σ²_{4X+3} = E{[(4X + 3) − 8]²} = E[(4X − 5)²]
= ∫_{−1}^{2} (4x − 5)² (x²/3) dx = (1/3) ∫_{−1}^{2} (16x⁴ − 40x³ + 25x²) dx = 51/5.
If g(X, Y ) = (X −μX)(Y −μY ), where μX = E(X) and μY = E(Y ), Definition
4.2 yields an expected value called the covariance of X and Y , which we denote
by σXY or Cov(X, Y ).
Definition 4.4: Let X and Y be random variables with joint probability distribution f(x, y). The covariance of X and Y is
σ_XY = E[(X − μ_X)(Y − μ_Y)] = Σ_x Σ_y (x − μ_X)(y − μ_Y) f(x, y)
if X and Y are discrete, and
σ_XY = E[(X − μ_X)(Y − μ_Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − μ_X)(y − μ_Y) f(x, y) dx dy
if X and Y are continuous.
The covariance between two random variables is a measure of the nature of the
association between the two. If large values of X often result in large values of Y
or small values of X result in small values of Y , positive X −μX will often result in
positive Y −μY and negative X −μX will often result in negative Y −μY . Thus, the
product (X − μX )(Y − μY ) will tend to be positive. On the other hand, if large X
values often result in small Y values, the product (X −μX )(Y −μY ) will tend to be
negative. The sign of the covariance indicates whether the relationship between two
dependent random variables is positive or negative. When X and Y are statistically
independent, it can be shown that the covariance is zero (see Corollary 4.5). The
converse, however, is not generally true. Two variables may have zero covariance
and still not be statistically independent. Note that the covariance only describes
the linear relationship between two random variables. Therefore, if a covariance
between X and Y is zero, X and Y may have a nonlinear relationship, which means
that they are not necessarily independent.
The alternative and preferred formula for σXY is stated by Theorem 4.4.
Theorem 4.4: The covariance of two random variables X and Y with means μ_X and μ_Y, respectively, is given by
σ_XY = E(XY) − μ_X μ_Y.
Proof: For the discrete case, we can write
σ_XY = Σ_x Σ_y (x − μ_X)(y − μ_Y) f(x, y)
= Σ_x Σ_y xy f(x, y) − μ_X Σ_x Σ_y y f(x, y) − μ_Y Σ_x Σ_y x f(x, y) + μ_X μ_Y Σ_x Σ_y f(x, y).
Since
μ_X = Σ_x Σ_y x f(x, y),  μ_Y = Σ_x Σ_y y f(x, y),  and  Σ_x Σ_y f(x, y) = 1
for any joint discrete distribution, it follows that
σ_XY = E(XY) − μ_X μ_Y − μ_Y μ_X + μ_X μ_Y = E(XY) − μ_X μ_Y.
For the continuous case, the proof is identical with summations replaced by integrals.
Example 4.13: Example 3.14 on page 95 describes a situation involving the number of blue refills
X and the number of red refills Y . Two refills for a ballpoint pen are selected at
random from a certain box, and the following is the joint probability distribution:
f(x, y)    x = 0    x = 1    x = 2    h(y)
y = 0       3/28     9/28     3/28    15/28
y = 1       3/14     3/14     0        3/7
y = 2       1/28     0        0        1/28
g(x)        5/14    15/28     3/28     1
Find the covariance of X and Y .
Solution: From Example 4.6, we see that E(XY) = 3/14. Now
μ_X = Σ_{x=0}^{2} x g(x) = (0)(5/14) + (1)(15/28) + (2)(3/28) = 3/4,
and
μ_Y = Σ_{y=0}^{2} y h(y) = (0)(15/28) + (1)(3/7) + (2)(1/28) = 1/2.
Therefore,
σ_XY = E(XY) − μ_X μ_Y = 3/14 − (3/4)(1/2) = −9/56.
.
Example 4.14: The fraction X of male runners and the fraction Y of female runners who compete
in marathon races are described by the joint density function
f(x, y) =
    8xy,  0 ≤ y ≤ x ≤ 1,
    0,    elsewhere.
Find the covariance of X and Y .
Solution: We first compute the marginal density functions. They are
g(x) =
    4x³,  0 ≤ x ≤ 1,
    0,    elsewhere,
and
h(y) =
    4y(1 − y²),  0 ≤ y ≤ 1,
    0,           elsewhere.
From these marginal density functions, we compute
μ_X = E(X) = ∫_0^1 4x⁴ dx = 4/5  and  μ_Y = ∫_0^1 4y²(1 − y²) dy = 8/15.
From the joint density function given above, we have
E(XY) = ∫_0^1 ∫_y^1 8x²y² dx dy = 4/9.
Then
σ_XY = E(XY) − μ_X μ_Y = 4/9 − (4/5)(8/15) = 4/225.
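The double integrals of Example 4.14 over the triangle 0 ≤ y ≤ x ≤ 1 can be checked with an iterated midpoint rule whose inner grid follows the limits x ∈ [y, 1] (a sketch; the grid size `n` and all names are our own):

```python
# Numerical check of Example 4.14: E(XY)=4/9, E(X)=4/5, E(Y)=8/15,
# covariance 4/225, for f(x, y) = 8xy on 0 <= y <= x <= 1.
n = 500
hy = 1.0 / n
exy = mu_x = mu_y = 0.0
for j in range(n):
    y = (j + 0.5) * hy
    hx = (1.0 - y) / n             # inner grid adapts to x in [y, 1]
    for i in range(n):
        x = y + (i + 0.5) * hx
        w = 8.0 * x * y * hx * hy  # f(x, y) dx dy
        exy += x * y * w
        mu_x += x * w
        mu_y += y * w
cov = exy - mu_x * mu_y            # Theorem 4.4 -> 4/225
```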
Although the covariance between two random variables does provide informa-
tion regarding the nature of the relationship, the magnitude of σXY does not indi-
cate anything regarding the strength of the relationship, since σXY is not scale-free.
Its magnitude will depend on the units used to measure both X and Y . There is a
scale-free version of the covariance called the correlation coefficient that is used
widely in statistics.
Definition 4.5: Let X and Y be random variables with covariance σ_XY and standard deviations σ_X and σ_Y, respectively. The correlation coefficient of X and Y is
ρ_XY = σ_XY / (σ_X σ_Y).
It should be clear to the reader that ρ_XY is free of the units of X and Y. The correlation coefficient satisfies the inequality −1 ≤ ρ_XY ≤ 1. It assumes a value of zero when σ_XY = 0. Where there is an exact linear dependency, say Y ≡ a + bX, ρ_XY = 1 if b > 0 and ρ_XY = −1 if b < 0. (See Exercise 4.48.) The correlation coefficient is the subject of more discussion in Chapter 12, where we deal with linear regression.
Example 4.15: Find the correlation coefficient between X and Y in Example 4.13.
Solution: Since
E(X²) = (0²)(5/14) + (1²)(15/28) + (2²)(3/28) = 27/28
and
E(Y²) = (0²)(15/28) + (1²)(3/7) + (2²)(1/28) = 4/7,
we obtain
σ²_X = 27/28 − (3/4)² = 45/112  and  σ²_Y = 4/7 − (1/2)² = 9/28.
Therefore, the correlation coefficient between X and Y is
ρ_XY = σ_XY / (σ_X σ_Y) = (−9/56) / √((45/112)(9/28)) = −1/√5.
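The moments of Examples 4.13 and 4.15 combine into the correlation coefficient in a couple of lines (a sketch; exact fractions are used until the final square root, and the variable names are ours):

```python
from fractions import Fraction as F
import math

# Moments from Examples 4.13 and 4.15 (exact)
cov   = F(-9, 56)
var_x = F(27, 28) - F(3, 4) ** 2    # -> 45/112
var_y = F(4, 7) - F(1, 2) ** 2      # -> 9/28

# Definition 4.5: rho = cov / (sigma_x * sigma_y) -> -1/sqrt(5)
rho = float(cov) / math.sqrt(float(var_x) * float(var_y))
```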
Example 4.16: Find the correlation coefficient of X and Y in Example 4.14.
Solution: Because
E(X²) = ∫_0^1 4x⁵ dx = 2/3  and  E(Y²) = ∫_0^1 4y³(1 − y²) dy = 1 − 2/3 = 1/3,
we conclude that
σ²_X = 2/3 − (4/5)² = 2/75  and  σ²_Y = 1/3 − (8/15)² = 11/225.
Hence,
ρ_XY = (4/225) / √((2/75)(11/225)) = 4/√66.
Note that although the covariance in Example 4.15 is larger in magnitude (dis-
regarding the sign) than that in Example 4.16, the relationship of the magnitudes
of the correlation coefficients in these two examples is just the reverse. This is
evidence that we cannot look at the magnitude of the covariance to decide on how
strong the relationship is.
Exercises
4.33 Use Definition 4.3 on page 120 to find the vari-
ance of the random variable X of Exercise 4.7 on page
117.
4.34 Let X be a random variable with the following
probability distribution:
x −2 3 5
f(x) 0.3 0.2 0.5
Find the standard deviation of X.
4.35 The random variable X, representing the num-
ber of errors per 100 lines of software code, has the
following probability distribution:
x 2 3 4 5 6
f(x) 0.01 0.25 0.4 0.3 0.04
Using Theorem 4.2 on page 121, find the variance of
X.
4.36 Suppose that the probabilities are 0.4, 0.3, 0.2,
and 0.1, respectively, that 0, 1, 2, or 3 power failures
will strike a certain subdivision in any given year. Find
the mean and variance of the random variable X repre-
senting the number of power failures striking this sub-
division.
4.37 A dealer’s profit, in units of $5000, on a new
automobile is a random variable X having the density
function given in Exercise 4.12 on page 117. Find the
variance of X.
4.38 The proportion of people who respond to a cer-
tain mail-order solicitation is a random variable X hav-
ing the density function given in Exercise 4.14 on page
117. Find the variance of X.
4.39 The total number of hours, in units of 100 hours,
that a family runs a vacuum cleaner over a period of
one year is a random variable X having the density
function given in Exercise 4.13 on page 117. Find the
variance of X.
4.40 Referring to Exercise 4.14 on page 117, find σ²_{g(X)} for the function g(X) = 3X² + 4.
4.41 Find the standard deviation of the random variable g(X) = (2X + 1)² in Exercise 4.17 on page 118.
4.42 Using the results of Exercise 4.21 on page 118, find the variance of g(X) = X², where X is a random variable having the density function given in Exercise 4.12 on page 117.
4.43 The length of time, in minutes, for an airplane
to obtain clearance for takeoff at a certain airport is a
random variable Y = 3X − 2, where X has the density
function
f(x) =
    (1/4)e^{−x/4},  x > 0,
    0,              elsewhere.
Find the mean and variance of the random variable Y .
4.44 Find the covariance of the random variables X
and Y of Exercise 3.39 on page 105.
4.45 Find the covariance of the random variables X
and Y of Exercise 3.49 on page 106.
4.46 Find the covariance of the random variables X
and Y of Exercise 3.44 on page 105.
4.47 For the random variables X and Y whose joint
density function is given in Exercise 3.40 on page 105,
find the covariance.
4.48 Given a random variable X, with standard deviation σ_X, and a random variable Y = a + bX, show that if b < 0, the correlation coefficient ρ_XY = −1, and if b > 0, ρ_XY = 1.
4.49 Consider the situation in Exercise 4.32 on page 119. The distribution of the number of imperfections per 10 meters of synthetic fabric is given by
x 0 1 2 3 4
f(x) 0.41 0.37 0.16 0.05 0.01
Find the variance and standard deviation of the num-
ber of imperfections.
4.50 For a laboratory assignment, if the equipment is
working, the density function of the observed outcome
X is
f(x) =
    2(1 − x),  0 < x < 1,
    0,         otherwise.
Find the variance and standard deviation of X.
4.51 For the random variables X and Y in Exercise
3.39 on page 105, determine the correlation coefficient
between X and Y .
4.52 Random variables X and Y follow a joint distri-
bution
f(x, y) =
    2,  0 < x ≤ y < 1,
    0,  otherwise.
Determine the correlation coefficient between X and
Y .
4.3 Means and Variances of Linear Combinations of
Random Variables
We now develop some useful properties that will simplify the calculations of means
and variances of random variables that appear in later chapters. These properties
will permit us to deal with expectations in terms of other parameters that are
either known or easily computed. All the results that we present here are valid
for both discrete and continuous random variables. Proofs are given only for the
continuous case. We begin with a theorem and two corollaries that should be,
intuitively, reasonable to the reader.
Theorem 4.5: If a and b are constants, then
E(aX + b) = aE(X) + b.
Proof: By the definition of expected value,
E(aX + b) = ∫_{−∞}^{∞} (ax + b) f(x) dx = a ∫_{−∞}^{∞} x f(x) dx + b ∫_{−∞}^{∞} f(x) dx.
The first integral on the right is E(X) and the second integral equals 1. Therefore, we have
E(aX + b) = aE(X) + b.
Corollary 4.1: Setting a = 0, we see that E(b) = b.
Corollary 4.2: Setting b = 0, we see that E(aX) = aE(X).
Example 4.17: Applying Theorem 4.5 to the discrete random variable g(X) = 2X − 1, rework Example 4.4 on page 115.
Solution: According to Theorem 4.5, we can write
E(2X − 1) = 2E(X) − 1.
Now
μ = E(X) = Σ_{x=4}^{9} x f(x)
= (4)(1/12) + (5)(1/12) + (6)(1/4) + (7)(1/4) + (8)(1/6) + (9)(1/6) = 41/6.
Therefore,
μ_{2X−1} = (2)(41/6) − 1 = $12.67,
as before.
Example 4.18: Applying Theorem 4.5 to the continuous random variable g(X) = 4X + 3, rework
Example 4.5 on page 115.
Solution: For Example 4.5, we may use Theorem 4.5 to write
E(4X + 3) = 4E(X) + 3.
Now
E(X) = ∫_{−1}^{2} x(x²/3) dx = ∫_{−1}^{2} (x³/3) dx = 5/4.
Therefore,
E(4X + 3) = (4)(5/4) + 3 = 8,
as before.
Theorem 4.6: The expected value of the sum or difference of two or more functions of a random
variable X is the sum or difference of the expected values of the functions. That
is,
E[g(X) ± h(X)] = E[g(X)] ± E[h(X)].
Proof: By definition,
E[g(X) ± h(X)] = ∫_{−∞}^{∞} [g(x) ± h(x)] f(x) dx
= ∫_{−∞}^{∞} g(x) f(x) dx ± ∫_{−∞}^{∞} h(x) f(x) dx
= E[g(X)] ± E[h(X)].
Example 4.19: Let X be a random variable with probability distribution as follows:
x      0    1    2    3
f(x)   1/3  1/2  0    1/6
Find the expected value of Y = (X − 1)².
Solution: Applying Theorem 4.6 to the function Y = (X − 1)², we can write
E[(X − 1)²] = E(X² − 2X + 1) = E(X²) − 2E(X) + E(1).
From Corollary 4.1, E(1) = 1, and by direct computation,
E(X) = (0)(1/3) + (1)(1/2) + (2)(0) + (3)(1/6) = 1  and
E(X²) = (0)(1/3) + (1)(1/2) + (4)(0) + (9)(1/6) = 2.
Hence,
E[(X − 1)²] = 2 − (2)(1) + 1 = 1.
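Theorem 4.6 is easy to exercise in code: evaluate E[(X − 1)²] directly and via the expansion E(X²) − 2E(X) + 1, and confirm they agree (a sketch; the dictionary `f` and variable names are ours):

```python
from fractions import Fraction as F

# Distribution of Example 4.19
f = {0: F(1, 3), 1: F(1, 2), 2: F(0), 3: F(1, 6)}

# Direct evaluation of E[(X - 1)^2] ...
direct = sum((x - 1) ** 2 * p for x, p in f.items())

# ... and via Theorem 4.6: E(X^2) - 2 E(X) + E(1)
ex  = sum(x * p for x, p in f.items())       # -> 1
ex2 = sum(x * x * p for x, p in f.items())   # -> 2
expanded = ex2 - 2 * ex + 1                  # -> 1
```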
Example 4.20: The weekly demand for a certain drink, in thousands of liters, at a chain of convenience stores is a continuous random variable g(X) = X² + X − 2, where X has the density function
f(x) =
    2(x − 1),  1 < x < 2,
    0,         elsewhere.
Find the expected value of the weekly demand for the drink.
Solution: By Theorem 4.6, we write
E(X² + X − 2) = E(X²) + E(X) − E(2).
From Corollary 4.1, E(2) = 2, and by direct integration,
E(X) = ∫_1^2 2x(x − 1) dx = 5/3  and  E(X²) = ∫_1^2 2x²(x − 1) dx = 17/6.
Now
E(X² + X − 2) = 17/6 + 5/3 − 2 = 5/2,
so the average weekly demand for the drink from this chain of convenience stores is 2500 liters.
Suppose that we have two random variables X and Y with joint probability dis-
tribution f(x, y). Two additional properties that will be very useful in succeeding
chapters involve the expected values of the sum, difference, and product of these
two random variables. First, however, let us prove a theorem on the expected
value of the sum or difference of functions of the given variables. This, of course,
is merely an extension of Theorem 4.6.
Theorem 4.7: The expected value of the sum or difference of two or more functions of the random
variables X and Y is the sum or difference of the expected values of the functions.
That is,
E[g(X, Y ) ± h(X, Y )] = E[g(X, Y )] ± E[h(X, Y )].
Proof: By Definition 4.2,
E[g(X, Y) ± h(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} [g(x, y) ± h(x, y)] f(x, y) dx dy
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f(x, y) dx dy ± ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(x, y) f(x, y) dx dy
= E[g(X, Y)] ± E[h(X, Y)].
Corollary 4.3: Setting g(X, Y ) = g(X) and h(X, Y ) = h(Y ), we see that
E[g(X) ± h(Y )] = E[g(X)] ± E[h(Y )].
Corollary 4.4: Setting g(X, Y ) = X and h(X, Y ) = Y , we see that
E[X ± Y ] = E[X] ± E[Y ].
If X represents the daily production of some item from machine A and Y the
daily production of the same kind of item from machine B, then X + Y represents
the total number of items produced daily by both machines. Corollary 4.4 states
that the average daily production for both machines is equal to the sum of the
average daily production of each machine.
Theorem 4.8: Let X and Y be two independent random variables. Then
E(XY ) = E(X)E(Y ).
Proof: By Definition 4.2,
E(XY) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f(x, y) dx dy.
Since X and Y are independent, we may write
f(x, y) = g(x)h(y),
where g(x) and h(y) are the marginal distributions of X and Y, respectively. Hence,
E(XY) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy g(x)h(y) dx dy = ∫_{−∞}^{∞} x g(x) dx ∫_{−∞}^{∞} y h(y) dy
= E(X)E(Y).
Theorem 4.8 can be illustrated for discrete variables by considering the exper-
iment of tossing a green die and a red die. Let the random variable X represent
the outcome on the green die and the random variable Y represent the outcome
on the red die. Then XY represents the product of the numbers that occur on the
pair of dice. In the long run, the average of the products of the numbers is equal
to the product of the average number that occurs on the green die and the average
number that occurs on the red die.
Corollary 4.5: Let X and Y be two independent random variables. Then σXY = 0.
Proof: The proof can be carried out by using Theorems 4.4 and 4.8.
Example 4.21: It is known that the ratio of gallium to arsenide does not affect the functioning
of gallium-arsenide wafers, which are the main components of microchips. Let X
denote the ratio of gallium to arsenide and Y denote the functional wafers retrieved
during a 1-hour period. X and Y are independent random variables with the joint
density function
f(x, y) =
    x(1 + 3y²)/4,  0 < x < 2, 0 < y < 1,
    0,             elsewhere.
Show that E(XY ) = E(X)E(Y ), as Theorem 4.8 suggests.
Solution: By definition,
E(XY) = ∫_0^1 ∫_0^2 [x²y(1 + 3y²)/4] dx dy = 5/6,  E(X) = 4/3,  and  E(Y) = 5/8.
Hence,
E(X)E(Y) = (4/3)(5/8) = 5/6 = E(XY).
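Theorem 4.8 can be illustrated numerically for Example 4.21 with a 2-D midpoint rule (a sketch; `pdf` and the grid size `n` are our own names and choices):

```python
# Numerical check of Example 4.21: E(XY) = E(X) E(Y) = 5/6
# for the independent joint density x(1 + 3y^2)/4 on 0<x<2, 0<y<1.
def pdf(x, y):
    return x * (1.0 + 3.0 * y * y) / 4.0

n = 400
hx, hy = 2.0 / n, 1.0 / n
exy = ex = ey = 0.0
for i in range(n):
    x = (i + 0.5) * hx
    for j in range(n):
        y = (j + 0.5) * hy
        w = pdf(x, y) * hx * hy
        exy += x * y * w    # E(XY) -> 5/6
        ex += x * w         # E(X)  -> 4/3
        ey += y * w         # E(Y)  -> 5/8
```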
We conclude this section by proving one theorem and presenting several corol-
laries that are useful for calculating variances or standard deviations.
Theorem 4.9: If X and Y are random variables with joint probability distribution f(x, y) and a, b, and c are constants, then
σ²_{aX+bY+c} = a²σ²_X + b²σ²_Y + 2abσ_XY.
Proof: By definition, σ²_{aX+bY+c} = E{[(aX + bY + c) − μ_{aX+bY+c}]²}. Now
μ_{aX+bY+c} = E(aX + bY + c) = aE(X) + bE(Y) + c = aμ_X + bμ_Y + c,
by using Corollary 4.4 followed by Corollary 4.2. Therefore,
σ²_{aX+bY+c} = E{[a(X − μ_X) + b(Y − μ_Y)]²}
= a²E[(X − μ_X)²] + b²E[(Y − μ_Y)²] + 2abE[(X − μ_X)(Y − μ_Y)]
= a²σ²_X + b²σ²_Y + 2abσ_XY.
Using Theorem 4.9, we have the following corollaries.
Corollary 4.6: Setting b = 0, we see that
σ²_{aX+c} = a²σ²_X = a²σ².
Corollary 4.7: Setting a = 1 and b = 0, we see that
σ²_{X+c} = σ²_X = σ².
Corollary 4.8: Setting b = 0 and c = 0, we see that
σ²_{aX} = a²σ²_X = a²σ².
Corollaries 4.6 and 4.7 state that the variance is unchanged if a constant is
added to or subtracted from a random variable. The addition or subtraction of
a constant simply shifts the values of X to the right or to the left but does not
change their variability. However, if a random variable is multiplied or divided by
a constant, then Corollaries 4.6 and 4.8 state that the variance is multiplied or
divided by the square of the constant.
Corollary 4.9: If X and Y are independent random variables, then
σ²_{aX+bY} = a²σ²_X + b²σ²_Y.
The result stated in Corollary 4.9 is obtained from Theorem 4.9 by invoking Corollary 4.5.
Corollary 4.10: If X and Y are independent random variables, then
σ²_{aX−bY} = a²σ²_X + b²σ²_Y.
Corollary 4.10 follows when b in Corollary 4.9 is replaced by −b. Generalizing to a linear combination of n independent random variables, we have Corollary 4.11.
Corollary 4.11: If X1, X2, . . . , Xn are independent random variables, then
σ²_{a1X1+a2X2+···+anXn} = a1²σ²_{X1} + a2²σ²_{X2} + · · · + an²σ²_{Xn}.
Example 4.22: If X and Y are random variables with variances σ²_X = 2 and σ²_Y = 4 and covariance σ_XY = −2, find the variance of the random variable Z = 3X − 4Y + 8.
Solution:
σ²_Z = σ²_{3X−4Y+8} = σ²_{3X−4Y}  (by Corollary 4.6)
= 9σ²_X + 16σ²_Y − 24σ_XY  (by Theorem 4.9)
= (9)(2) + (16)(4) − (24)(−2) = 130.
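Theorem 4.9 amounts to a one-line formula, which can be packaged as a helper and checked against Example 4.22 and, with zero covariance, Example 4.23 below (a sketch; the function name `var_linear` is hypothetical, not from the text):

```python
# Theorem 4.9: Var(aX + bY + c) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y).
def var_linear(a, b, var_x, var_y, cov_xy):
    # the constant c never enters (Corollary 4.6)
    return a * a * var_x + b * b * var_y + 2 * a * b * cov_xy

# Example 4.22: Z = 3X - 4Y + 8, Var(X)=2, Var(Y)=4, Cov(X,Y)=-2
var_z = var_linear(3, -4, 2, 4, -2)    # -> 130

# With independent X and Y (Cov = 0) and Z = 3X - 2Y + 5
var_z_indep = var_linear(3, -2, 2, 3, 0)   # -> 30
```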
Example 4.23: Let X and Y denote the amounts of two different types of impurities in a batch of a certain chemical product. Suppose that X and Y are independent random variables with variances σ²_X = 2 and σ²_Y = 3. Find the variance of the random variable Z = 3X − 2Y + 5.
Solution:
σ²_Z = σ²_{3X−2Y+5} = σ²_{3X−2Y}  (by Corollary 4.6)
= 9σ²_X + 4σ²_Y  (by Corollary 4.10)
= (9)(2) + (4)(3) = 30.
What If the Function Is Nonlinear?
So far in this chapter, we have dealt with properties of linear functions of random variables for very important reasons. Chapters 8 through 15
will discuss and illustrate practical real-world problems in which the analyst is
constructing a linear model to describe a data set and thus to describe or explain
the behavior of a certain scientific phenomenon. Thus, it is natural that expected
values and variances of linear combinations of random variables are encountered.
However, there are situations in which properties of nonlinear functions of random
variables become important. Certainly there are many scientific phenomena that
are nonlinear, and certainly statistical modeling using nonlinear functions is very
important. In fact, in Chapter 12, we deal with the modeling of what have become
standard nonlinear models. Indeed, even a simple function of random variables,
such as Z = X/Y , occurs quite frequently in practice, and yet unlike in the case of
the expected value of linear combinations of random variables, there is no simple general rule. For example,
E(Z) = E(X/Y) ≠ E(X)/E(Y),
except in very special circumstances.
The material provided by Theorems 4.5 through 4.9 and the various corollaries
is extremely useful in that there are no restrictions on the form of the density or
probability functions, apart from the property of independence when it is required, as in the corollaries following Theorem 4.9. To illustrate, consider Example 4.23;
the variance of Z = 3X −2Y +5 does not require restrictions on the distributions of
the amounts X and Y of the two types of impurities. Only independence between
X and Y is required. Now, we do have at our disposal the capacity to find μ_{g(X)} and σ²_{g(X)} for any function g(·) from first principles established in Theorems 4.1 and 4.3, where it is assumed that the corresponding distribution f(x) is known. Exercises 4.40, 4.41, and 4.42, among others, illustrate the use of these theorems. Thus, if the function g(x) is nonlinear and the density function (or probability function in the discrete case) is known, μ_{g(X)} and σ²_{g(X)} can be evaluated exactly.
But, similar to the rules given for linear combinations, are there rules for nonlinear
functions that can be used when the form of the distribution of the pertinent
random variables is not known?
In general, suppose X is a random variable and Y = g(X). The general solution for E(Y) or Var(Y) can be difficult to find and depends on the complexity of the function g(·). However, there are approximations available that depend on a linear approximation of the function g(x). For example, suppose we denote E(X) as μ_X and Var(X) = σ²_X. Then a Taylor series approximation of g(x) around x = μ_X gives
g(x) = g(μ_X) + (∂g(x)/∂x)|_{x=μ_X} (x − μ_X) + (∂²g(x)/∂x²)|_{x=μ_X} (x − μ_X)²/2 + · · · .
As a result, if we truncate after the linear term and take the expected value of both sides, we obtain E[g(X)] ≈ g(μ_X), which is certainly intuitive and in some cases gives a reasonable approximation. However, if we include the second-order term of the Taylor series, then we have a second-order adjustment for this first-order approximation as follows:
Approximation of E[g(X)]:
E[g(X)] ≈ g(μ_X) + (∂²g(x)/∂x²)|_{x=μ_X} (σ²_X / 2).
Example 4.24: Given the random variable X with mean μ_X and variance σ²_X, give the second-order approximation to E(e^X).
Solution: Since ∂e^x/∂x = e^x and ∂²e^x/∂x² = e^x, we obtain E(e^X) ≈ e^{μ_X}(1 + σ²_X/2).
Similarly, we can develop an approximation for Var[g(X)] by taking the variance of both sides of the first-order Taylor series expansion of g(x).
Approximation of Var[g(X)]:
Var[g(X)] ≈ (∂g(x)/∂x)²|_{x=μ_X} σ²_X.
Example 4.25: Given the random variable X as in Example 4.24, give an approximate formula for Var(e^X).
Solution: Again ∂e^x/∂x = e^x; thus, Var(e^X) ≈ e^{2μ_X} σ²_X.
These approximations can be extended to nonlinear functions of more than one
random variable.
Given a set of independent random variables X1, X2, . . . , Xk with means μ1, μ2, . . . , μk and variances σ1², σ2², . . . , σk², respectively, let
Y = h(X1, X2, . . . , Xk)
be a nonlinear function; then the following are approximations for E(Y) and Var(Y):
E(Y) ≈ h(μ1, μ2, . . . , μk) + Σ_{i=1}^{k} (σi²/2) (∂²h(x1, x2, . . . , xk)/∂xi²)|_{xi=μi, 1≤i≤k},
Var(Y) ≈ Σ_{i=1}^{k} (∂h(x1, x2, . . . , xk)/∂xi)²|_{xi=μi, 1≤i≤k} σi².
Example 4.26: Consider two independent random variables X and Z with means μ_X and μ_Z and variances σ²_X and σ²_Z, respectively. Consider a random variable
Y = X/Z.
Give approximations for E(Y) and Var(Y).
Solution: For E(Y), we must use ∂y/∂x = 1/z and ∂y/∂z = −x/z². Thus,
∂²y/∂x² = 0  and  ∂²y/∂z² = 2x/z³.
As a result,
E(Y) ≈ μ_X/μ_Z + (μ_X/μ_Z³)σ²_Z = (μ_X/μ_Z)(1 + σ²_Z/μ_Z²),
and the approximation for the variance of Y is given by
Var(Y) ≈ (1/μ_Z²)σ²_X + (μ_X²/μ_Z⁴)σ²_Z = (1/μ_Z²)[σ²_X + (μ_X²/μ_Z²)σ²_Z].
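The quality of these ratio approximations can be probed by simulation. Here we pick independent normals for X and Z (our own choice of distribution; the formulas need only the means and variances) with μ_Z well away from zero so the ratio is well behaved:

```python
import math
import random

# Delta-method approximations of Example 4.26 for Y = X/Z,
# checked against a Monte Carlo estimate (distribution chosen by us).
mu_x, var_x = 10.0, 1.0
mu_z, var_z = 5.0, 0.25

e_approx = (mu_x / mu_z) * (1.0 + var_z / mu_z ** 2)                      # -> 2.02
v_approx = (1.0 / mu_z ** 2) * (var_x + (mu_x ** 2 / mu_z ** 2) * var_z)  # -> 0.08

rng = random.Random(0)   # fixed seed for reproducibility
ys = [rng.gauss(mu_x, math.sqrt(var_x)) / rng.gauss(mu_z, math.sqrt(var_z))
      for _ in range(200_000)]
e_mc = sum(ys) / len(ys)
v_mc = sum((y - e_mc) ** 2 for y in ys) / len(ys)
```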
4.4 Chebyshev’s Theorem
In Section 4.2 we stated that the variance of a random variable tells us something
about the variability of the observations about the mean. If a random variable
has a small variance or standard deviation, we would expect most of the values to
be grouped around the mean. Therefore, the probability that the random variable
assumes a value within a certain interval about the mean is greater than for a
similar random variable with a larger standard deviation. If we think of probability
in terms of area, we would expect a continuous distribution with a large value of
σ to indicate a greater variability, and therefore we should expect the area to
be more spread out, as in Figure 4.2(a). A distribution with a small standard
deviation should have most of its area close to μ, as in Figure 4.2(b).
Figure 4.2: Variability of continuous observations about the mean.
Figure 4.3: Variability of discrete observations about the mean.
We can argue the same way for a discrete distribution. The area in the probability histogram in Figure 4.3(b) is spread out much more than that in Figure 4.3(a), indicating a more variable distribution of measurements or outcomes.
The Russian mathematician P. L. Chebyshev (1821–1894) discovered that the
fraction of the area between any two values symmetric about the mean is related
to the standard deviation. Since the area under a probability distribution curve
or in a probability histogram adds to 1, the area between any two numbers is the
probability of the random variable assuming a value between these numbers.
The following theorem, due to Chebyshev, gives a conservative estimate of the
probability that a random variable assumes a value within k standard deviations
of its mean for any real number k.
Theorem 4.10: (Chebyshev's Theorem) The probability that any random variable X will assume a value within k standard deviations of the mean is at least 1 − 1/k². That is,
P(μ − kσ < X < μ + kσ) ≥ 1 − 1/k².
For k = 2, the theorem states that the random variable X has a probability of at least 1 − 1/2² = 3/4 of falling within two standard deviations of the mean. That is, three-fourths or more of the observations of any distribution lie in the interval μ ± 2σ. Similarly, the theorem says that at least eight-ninths of the observations of any distribution fall in the interval μ ± 3σ.
Example 4.27: A random variable X has a mean μ = 8, a variance σ² = 9, and an unknown probability distribution. Find
(a) P(−4 < X < 20),
(b) P(|X − 8| ≥ 6).
Solution: (a) P(−4 < X < 20) = P[8 − (4)(3) < X < 8 + (4)(3)] ≥ 15/16.
(b) P(|X − 8| ≥ 6) = 1 − P(|X − 8| < 6) = 1 − P(−6 < X − 8 < 6)
= 1 − P[8 − (2)(3) < X < 8 + (2)(3)] ≤ 1/4.
Chebyshev’s theorem holds for any distribution of observations, and for this
reason the results are usually weak. The value given by the theorem is a lower
bound only. That is, we know that the probability of a random variable falling
within two standard deviations of the mean can be no less than 3/4, but we never
know how much more it might actually be. Only when the probability distribution
is known can we determine exact probabilities. For this reason we call the theorem
a distribution-free result. When specific distributions are assumed, as in future
chapters, the results will be less conservative. The use of Chebyshev’s theorem is
relegated to situations where the form of the distribution is unknown.
Exercises
4.53 Referring to Exercise 4.35 on page 127, find the
mean and variance of the discrete random variable
Z = 3X − 2, when X represents the number of errors
per 100 lines of code.
4.54 Using Theorem 4.5 and Corollary 4.6, find the
mean and variance of the random variable Z = 5X +3,
where X has the probability distribution of Exercise
4.36 on page 127.
4.55 Suppose that a grocery store purchases 5 car-
tons of skim milk at the wholesale price of $1.20 per
carton and retails the milk at $1.65 per carton. After
the expiration date, the unsold milk is removed from
the shelf and the grocer receives a credit from the dis-
tributor equal to three-fourths of the wholesale price.
If the probability distribution of the random variable
X, the number of cartons that are sold from this lot,
is
x      0     1     2     3     4     5
f(x)   1/15  2/15  2/15  3/15  4/15  3/15
find the expected profit.
4.56 Repeat Exercise 4.43 on page 127 by applying
Theorem 4.5 and Corollary 4.6.
4.57 Let X be a random variable with the following
probability distribution:
x      −3   6    9
f(x)   1/6  1/2  1/3
Find E(X) and E(X²) and then, using these values, evaluate E[(2X + 1)²].
4.58 The total time, measured in units of 100 hours,
that a teenager runs her hair dryer over a period of one
year is a continuous random variable X that has the
density function
f(x) =
    x,      0 < x < 1,
    2 − x,  1 ≤ x < 2,
    0,      elsewhere.
Use Theorem 4.6 to evaluate the mean of the random variable Y = 60X² + 39X, where Y is equal to the number of kilowatt hours expended annually.
4.59 If a random variable X is defined such that
E[(X − 1)²] = 10 and E[(X − 2)²] = 6,
find μ and σ².
4.60 Suppose that X and Y are independent random
variables having the joint probability distribution
f(x, y)    x = 2    x = 4
y = 1       0.10     0.15
y = 3       0.20     0.30
y = 5       0.10     0.15
Find
(a) E(2X − 3Y );
(b) E(XY ).
4.61 Use Theorem 4.7 to evaluate E(2XY² − X²Y) for the joint probability distribution shown in Table 3.1 on page 96.
4.62 If X and Y are independent random variables with variances σ²_X = 5 and σ²_Y = 3, find the variance of the random variable Z = −2X + 4Y − 3.
4.63 Repeat Exercise 4.62 if X and Y are not inde-
pendent and σXY = 1.
4.64 Suppose that X and Y are independent random variables with probability densities
g(x) =
    8/x³,  x > 2,
    0,     elsewhere,
and
h(y) =
    2y,  0 < y < 1,
    0,   elsewhere.
Find the expected value of Z = XY .
4.65 Let X represent the number that occurs when a
red die is tossed and Y the number that occurs when
a green die is tossed. Find
(a) E(X + Y );
(b) E(X − Y );
(c) E(XY ).
4.66 Let X represent the number that occurs when a
green die is tossed and Y the number that occurs when
a red die is tossed. Find the variance of the random
variable
(a) 2X − Y ;
(b) X + 3Y − 5.
4.67 If the joint density function of X and Y is given by
f(x, y) =
    (2/7)(x + 2y),  0 < x < 1, 1 < y < 2,
    0,              elsewhere,
find the expected value of g(X, Y) = X/Y³ + X²Y.
4.68 The power P in watts which is dissipated in an electric circuit with resistance R is known to be given by P = I²R, where I is current in amperes and R is a constant fixed at 50 ohms. However, I is a random variable with μ_I = 15 amperes and σ²_I = 0.03 amperes². Give numerical approximations to the mean and variance of the power P.
4.69 Consider Review Exercise 3.77 on page 108. The
random variables X and Y represent the number of ve-
hicles that arrive at two separate street corners during
a certain 2-minute period in the day. The joint distri-
bution is
f(x, y) = (1/4)^{x+y} (9/16),
for x = 0, 1, 2, . . . and y = 0, 1, 2, . . . .
(a) Give E(X), E(Y ), Var(X), and Var(Y ).
(b) Consider Z = X + Y , the sum of the two. Find
E(Z) and Var(Z).
4.70 Consider Review Exercise 3.64 on page 107.
There are two service lines. The random variables X
and Y are the proportions of time that line 1 and line
2 are in use, respectively. The joint probability density
function for (X, Y ) is given by
f(x, y) =
    (3/2)(x² + y²),  0 ≤ x, y ≤ 1,
    0,               elsewhere.
(a) Determine whether or not X and Y are indepen-
dent.
(b) It is of interest to know something about the pro-
portion of Z = X + Y , the sum of the two propor-
tions. Find E(X + Y ). Also find E(XY ).
(c) Find Var(X), Var(Y ), and Cov(X, Y ).
(d) Find Var(X + Y ).
4.71 The length of time Y , in minutes, required to
generate a human reflex to tear gas has the density
function
f(y) = (1/4)e^(−y/4), 0 ≤ y < ∞,
       0, elsewhere.
(a) What is the mean time to reflex?
(b) Find E(Y^2) and Var(Y ).
4.72 A manufacturing company has developed a ma-
chine for cleaning carpet that is fuel-efficient because
it delivers carpet cleaner so rapidly. Of interest is a
random variable Y , the amount in gallons per minute
delivered. It is known that the density function is given
by
f(y) = 1, 7 ≤ y ≤ 8,
       0, elsewhere.
(a) Sketch the density function.
(b) Give E(Y ), E(Y^2), and Var(Y ).
4.73 For the situation in Exercise 4.72, compute
E(e^Y) using Theorem 4.1, that is, by using
E(e^Y) = ∫_7^8 e^y f(y) dy.
Then compute E(e^Y) not by using f(y), but rather by
using the second-order adjustment to the first-order
approximation of E(e^Y). Comment.
4.74 Consider again the situation of Exercise 4.72. It
is required to find Var(e^Y). Use Theorems 4.2 and 4.3
and define Z = e^Y. Thus, use the conditions of Exercise
4.73 to find
Var(Z) = E(Z^2) − [E(Z)]^2.
Then do it not by using f(y), but rather by using
the first-order Taylor series approximation to Var(e^Y).
Comment!
4.75 An electrical firm manufactures a 100-watt light
bulb, which, according to specifications written on the
package, has a mean life of 900 hours with a standard
deviation of 50 hours. At most, what percentage of
the bulbs fail to last even 700 hours? Assume that the
distribution is symmetric about the mean.
4.76 Seventy new jobs are opening up at an automo-
bile manufacturing plant, and 1000 applicants show up
for the 70 positions. To select the best 70 from among
the applicants, the company gives a test that covers
mechanical skill, manual dexterity, and mathematical
ability. The mean grade on this test turns out to be
60, and the scores have a standard deviation of 6. Can
a person who scores 84 count on getting one of the
jobs? [Hint: Use Chebyshev’s theorem.] Assume that
the distribution is symmetric about the mean.
4.77 A random variable X has a mean μ = 10 and a
variance σ^2 = 4. Using Chebyshev’s theorem, find
(a) P(|X − 10| ≥ 3);
(b) P(|X − 10| < 3);
(c) P(5 < X < 15);
(d) the value of the constant c such that
P(|X − 10| ≥ c) ≤ 0.04.
4.78 Compute P(μ − 2σ < X < μ + 2σ), where X
has the density function
f(x) = 6x(1 − x), 0 < x < 1,
       0, elsewhere,
and compare with the result given in Chebyshev’s
theorem.
Review Exercises
4.79 Prove Chebyshev’s theorem.
4.80 Find the covariance of random variables X and
Y having the joint probability density function
f(x, y) = x + y, 0 < x < 1, 0 < y < 1,
          0, elsewhere.
4.81 Referring to the random variables whose joint
probability density function is given in Exercise 3.47
on page 105, find the average amount of kerosene left
in the tank at the end of the day.
4.82 Assume the length X, in minutes, of a particu-
lar type of telephone conversation is a random variable
with probability density function
f(x) = (1/5)e^(−x/5), x > 0,
       0, elsewhere.
(a) Determine the mean length E(X) of this type of
telephone conversation.
(b) Find the variance and standard deviation of X.
(c) Find E[(X + 5)^2].
4.83 Referring to the random variables whose joint
density function is given in Exercise 3.41 on page 105,
find the covariance between the weight of the creams
and the weight of the toffees in these boxes of choco-
lates.
4.84 Referring to the random variables whose joint
probability density function is given in Exercise 3.41
on page 105, find the expected weight for the sum of
the creams and toffees if one purchased a box of these
chocolates.
4.85 Suppose it is known that the life X of a partic-
ular compressor, in hours, has the density function
f(x) = (1/900)e^(−x/900), x > 0,
       0, elsewhere.
(a) Find the mean life of the compressor.
(b) Find E(X^2).
(c) Find the variance and standard deviation of the
random variable X.
4.86 Referring to the random variables whose joint
density function is given in Exercise 3.40 on page 105,
(a) find μX and μY ;
(b) find E[(X + Y )/2].
4.87 Show that Cov(aX, bY ) = ab Cov(X, Y ).
4.88 Consider the density function of Review Ex-
ercise 4.85. Demonstrate that Chebyshev’s theorem
holds for k = 2 and k = 3.
4.89 Consider the joint density function
f(x, y) = 16y/x^3, x > 2, 0 < y < 1,
          0, elsewhere.
Compute the correlation coefficient ρXY .
4.90 Consider random variables X and Y of Exercise
4.63 on page 138. Compute ρXY .
4.91 A dealer’s profit, in units of $5000, on a new au-
tomobile is a random variable X having density func-
tion
f(x) =
2(1 − x), 0 ≤ x ≤ 1,
0, elsewhere.
(a) Find the variance of the dealer’s profit.
(b) Demonstrate that Chebyshev’s theorem holds for
k = 2 with the density function above.
(c) What is the probability that the profit exceeds
$500?
4.92 Consider Exercise 4.10 on page 117. Can it be
said that the ratings given by the two experts are in-
dependent? Explain why or why not.
4.93 A company’s marketing and accounting depart-
ments have determined that if the company markets
its newly developed product, the contribution of the
product to the firm’s profit during the next 6 months
will be described by the following:
Profit Contribution    Probability
     −$5,000               0.2
     $10,000               0.5
     $30,000               0.3
What is the company’s expected profit?
4.94 In a support system in the U.S. space program,
a single crucial component works only 85% of the time.
In order to enhance the reliability of the system, it is
decided that 3 components will be installed in parallel
such that the system fails only if they all fail. Assume
the components act independently and that they are
equivalent in the sense that all 3 of them have an 85%
success rate. Consider the random variable X as the
number of components out of 3 that fail.
(a) Write out a probability function for the random
variable X.
(b) What is E(X) (i.e., the mean number of compo-
nents out of 3 that fail)?
(c) What is Var(X)?
(d) What is the probability that the entire system is
successful?
(e) What is the probability that the system fails?
(f) If the desire is to have the system be successful
with probability 0.99, are three components suffi-
cient? If not, how many are required?
4.95 In business, it is important to plan and carry out
research in order to anticipate what will occur at the
end of the year. Research suggests that the profit (loss)
spectrum for a certain company, with corresponding
probabilities, is as follows:
Profit Probability
−$15, 000 0.05
$0 0.15
$15,000 0.15
$25,000 0.30
$40,000 0.15
$50,000 0.10
$100,000 0.05
$150,000 0.03
$200,000 0.02
(a) What is the expected profit?
(b) Give the standard deviation of the profit.
4.96 It is known through data collection and consid-
erable research that the amount of time in seconds that
a certain employee of a company is late for work is a
random variable X with density function
f(x) = (3/(4 · 50^3))(50^2 − x^2), −50 ≤ x ≤ 50,
       0, elsewhere.
In other words, he not only is slightly late at times,
but also can be early to work.
(a) Find the expected value of the time in seconds that
he is late.
(b) Find E(X^2).
(c) What is the standard deviation of the amount of
time he is late?
4.97 A delivery truck travels from point A to point B
and back using the same route each day. There are four
traffic lights on the route. Let X1 denote the number
of red lights the truck encounters going from A to B
and X2 denote the number encountered on the return
trip. Data collected over a long period suggest that the
joint probability distribution for (X1, X2) is given by
                     x2
  x1      0     1     2     3     4
   0    0.01  0.01  0.03  0.07  0.01
   1    0.03  0.05  0.08  0.03  0.02
   2    0.03  0.11  0.15  0.01  0.01
   3    0.02  0.07  0.10  0.03  0.01
   4    0.01  0.06  0.03  0.01  0.01
(a) Give the marginal density of X1.
(b) Give the marginal density of X2.
(c) Give the conditional density distribution of X1
given X2 = 3.
(d) Give E(X1).
(e) Give E(X2).
(f) Give E(X1 | X2 = 3).
(g) Give the standard deviation of X1.
4.98 A convenience store has two separate locations
where customers can be checked out as they leave.
These locations each have two cash registers and two
employees who check out customers. Let X be the
number of cash registers being used at a particular time
for location 1 and Y the number being used at the same
time for location 2. The joint probability function is
given by
              y
  x      0     1     2
  0    0.12  0.04  0.04
  1    0.08  0.19  0.05
  2    0.06  0.12  0.30
(a) Give the marginal density of both X and Y as well
as the probability distribution of X given Y = 2.
(b) Give E(X) and Var(X).
(c) Give E(X | Y = 2) and Var(X | Y = 2).
4.99 Consider a ferry that can carry both buses and
cars across a waterway. Each trip costs the owner ap-
proximately $10. The fee for cars is $3 and the fee for
buses is $8. Let X and Y denote the number of buses
and cars, respectively, carried on a given trip. The
joint distribution of X and Y is given by
              x
  y      0     1     2
  0    0.01  0.01  0.03
  1    0.03  0.08  0.07
  2    0.03  0.06  0.06
  3    0.07  0.07  0.13
  4    0.12  0.04  0.03
  5    0.08  0.06  0.02
Compute the expected profit for the ferry trip.
4.100 As we shall illustrate in Chapter 12, statistical
methods associated with linear and nonlinear models
are very important. In fact, exponential functions are
often used in a wide variety of scientific and engineering
problems. Consider a model that is fit to a set of data
involving measured values k1 and k2 and a certain re-
sponse Y to the measurements. The model postulated
is
Ŷ = e^(b0 + b1 k1 + b2 k2),
where Ŷ denotes the estimated value of Y, k1 and
k2 are fixed values, and b0, b1, and b2 are estimates
of constants and hence are random variables. Assume
that these random variables are independent and use
the approximate formula for the variance of a nonlinear
function of more than one variable. Give an expression
for Var(Ŷ ). Assume that the means of b0, b1, and b2
are known and are β0, β1, and β2, and assume that the
variances of b0, b1, and b2 are known and are σ_0^2, σ_1^2,
and σ_2^2.
142 Chapter 4 Mathematical Expectation
4.101 Consider Review Exercise 3.73 on page 108. It
involved Y , the proportion of impurities in a batch,
and the density function is given by
f(y) = 10(1 − y)^9, 0 ≤ y ≤ 1,
       0, elsewhere.
(a) Find the expected percentage of impurities.
(b) Find the expected value of the proportion of quality
material (i.e., find E(1 − Y )).
(c) Find the variance of the random variable Z = 1−Y .
4.102 Project: Let X = number of hours each stu-
dent in the class slept the night before. Create a dis-
crete variable by using the following arbitrary intervals:
X < 3, 3 ≤ X < 6, 6 ≤ X < 9, and X ≥ 9.
(a) Estimate the probability distribution for X.
(b) Calculate the estimated mean and variance for X.
4.5 Potential Misconceptions and Hazards;
Relationship to Material in Other Chapters
The material in this chapter is extremely fundamental in nature, much like that in
Chapter 3. Whereas in Chapter 3 we focused on general characteristics of a prob-
ability distribution, in this chapter we defined important quantities or parameters
that characterize the general nature of the system. The mean of a distribution
reflects central tendency, and the variance or standard deviation reflects vari-
ability in the system. In addition, covariance reflects the tendency for two random
variables to “move together” in a system. These important parameters will remain
fundamental to all that follows in this text.
The reader should understand that the distribution type is often dictated by
the scientific scenario. However, the parameter values need to be estimated from
scientific data. For example, in the case of Review Exercise 4.85, the manufac-
turer of the compressor may know (material that will be presented in Chapter 6)
from experience and knowledge of the type of compressor that the nature of the
distribution is as indicated in the exercise. But the mean μ = 900 would be esti-
mated from experimentation on the machine. Though the parameter value of 900
is given as known here, it will not be known in real-life situations without the use
of experimental data. Chapter 9 is dedicated to estimation.
Chapter 5
Some Discrete Probability
Distributions
5.1 Introduction and Motivation
No matter whether a discrete probability distribution is represented graphically by
a histogram, in tabular form, or by means of a formula, the behavior of a random
variable is described. Often, the observations generated by different statistical ex-
periments have the same general type of behavior. Consequently, discrete random
variables associated with these experiments can be described by essentially the
same probability distribution and therefore can be represented by a single formula.
In fact, one needs only a handful of important probability distributions to describe
many of the discrete random variables encountered in practice.
Such a handful of distributions describe several real-life random phenomena.
For instance, in a study involving testing the effectiveness of a new drug, the num-
ber of cured patients among all the patients who use the drug approximately follows
a binomial distribution (Section 5.2). In an industrial example, when a sample of
items selected from a batch of production is tested, the number of defective items
in the sample usually can be modeled as a hypergeometric random variable (Sec-
tion 5.3). In a statistical quality control problem, the experimenter will signal a
shift of the process mean when observational data exceed certain limits. The num-
ber of samples required to produce a false alarm follows a geometric distribution
which is a special case of the negative binomial distribution (Section 5.4). On the
other hand, the number of white cells from a fixed amount of an individual’s blood
sample is usually random and may be described by a Poisson distribution (Section
5.5). In this chapter, we present these commonly used distributions with various
examples.
5.2 Binomial and Multinomial Distributions
An experiment often consists of repeated trials, each with two possible outcomes
that may be labeled success or failure. The most obvious application deals with
the testing of items as they come off an assembly line, where each trial may indicate
a defective or a nondefective item. We may choose to define either outcome as a
success. The process is referred to as a Bernoulli process. Each trial is called a
Bernoulli trial. Observe, for example, if one were drawing cards from a deck, the
probabilities for repeated trials change if the cards are not replaced. That is, the
probability of selecting a heart on the first draw is 1/4, but on the second draw it is
a conditional probability having a value of 13/51 or 12/51, depending on whether
a heart appeared on the first draw: this, then, would no longer be considered a set
of Bernoulli trials.
The Bernoulli Process
Strictly speaking, the Bernoulli process must possess the following properties:
1. The experiment consists of repeated trials.
2. Each trial results in an outcome that may be classified as a success or a failure.
3. The probability of success, denoted by p, remains constant from trial to trial.
4. The repeated trials are independent.
Consider the set of Bernoulli trials where three items are selected at random
from a manufacturing process, inspected, and classified as defective or nondefective.
A defective item is designated a success. The number of successes is a random
variable X assuming integral values from 0 through 3. The eight possible outcomes
and the corresponding values of X are
Outcome NNN NDN NND DNN NDD DND DDN DDD
x 0 1 1 1 2 2 2 3
Since the items are selected independently and we assume that the process produces
25% defectives, we have
P(NDN) = P(N)P(D)P(N) = (3/4)(1/4)(3/4) = 9/64.
Similar calculations yield the probabilities for the other possible outcomes. The
probability distribution of X is therefore
x       0      1      2      3
f(x)  27/64  27/64   9/64   1/64
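The tabulated distribution can be checked by brute-force enumeration of the eight outcomes; a minimal Python sketch (not part of the text), using exact fractions:

```python
from fractions import Fraction
from itertools import product

# P(D) = 1/4 (defective, a "success"), P(N) = 3/4 (nondefective).
prob = {"D": Fraction(1, 4), "N": Fraction(3, 4)}

# Enumerate all 2^3 outcomes of three independent inspections and
# accumulate the probability of each count of defectives.
dist = {x: Fraction(0) for x in range(4)}
for outcome in product("DN", repeat=3):
    p_outcome = Fraction(1)
    for trial in outcome:
        p_outcome *= prob[trial]
    dist[outcome.count("D")] += p_outcome

print(dist)  # f(0) = 27/64, f(1) = 27/64, f(2) = 9/64, f(3) = 1/64
```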
Binomial Distribution
The number X of successes in n Bernoulli trials is called a binomial random
variable. The probability distribution of this discrete random variable is called
the binomial distribution, and its values will be denoted by b(x; n, p) since they
depend on the number of trials and the probability of a success on a given trial.
Thus, for the probability distribution of X, the number of defectives is
P(X = 2) = f(2) = b(2; 3, 1/4) = 9/64.
Let us now generalize the above illustration to yield a formula for b(x; n, p).
That is, we wish to find a formula that gives the probability of x successes in
n trials for a binomial experiment. First, consider the probability of x successes
and n − x failures in a specified order. Since the trials are independent, we can
multiply all the probabilities corresponding to the different outcomes. Each success
occurs with probability p and each failure with probability q = 1 − p. Therefore,
the probability for the specified order is p^x q^(n−x). We must now determine the total
number of sample points in the experiment that have x successes and n − x failures.
This number is equal to the number of partitions of n outcomes into two groups
with x in one group and n − x in the other and is written (n choose x), as introduced
in Section 2.3. Because these partitions are mutually exclusive, we add the probabilities
of all the different partitions to obtain the general formula, or simply multiply
p^x q^(n−x) by (n choose x).
Binomial Distribution
A Bernoulli trial can result in a success with probability p and a failure with
probability q = 1 − p. Then the probability distribution of the binomial random
variable X, the number of successes in n independent trials, is
b(x; n, p) = (n choose x) p^x q^(n−x),   x = 0, 1, 2, . . . , n.
Note that when n = 3 and p = 1/4, the probability distribution of X, the number
of defectives, may be written as
b(x; 3, 1/4) = (3 choose x) (1/4)^x (3/4)^(3−x),   x = 0, 1, 2, 3,
rather than in the tabular form on page 144.
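The pmf is straightforward to evaluate with the Python standard library; a short sketch (the helper name binom_pmf is ours, not the text's):

```python
from math import comb

def binom_pmf(x: int, n: int, p: float) -> float:
    """b(x; n, p) = C(n, x) * p^x * q^(n - x), where q = 1 - p."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

# Reproduces the tabular distribution for n = 3, p = 1/4:
for x in range(4):
    print(x, binom_pmf(x, 3, 0.25))  # 27/64, 27/64, 9/64, 1/64
```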
Example 5.1: The probability that a certain kind of component will survive a shock test is 3/4.
Find the probability that exactly 2 of the next 4 components tested survive.
Solution: Assuming that the tests are independent and p = 3/4 for each of the 4 tests, we
obtain
b(2; 4, 3/4) = (4 choose 2) (3/4)^2 (1/4)^2 = (4!/(2! 2!)) (3^2/4^4) = 27/128.
Where Does the Name Binomial Come From?
The binomial distribution derives its name from the fact that the n + 1 terms in
the binomial expansion of (q + p)^n correspond to the various values of b(x; n, p) for
x = 0, 1, 2, . . . , n. That is,
(q + p)^n = (n choose 0) q^n + (n choose 1) p q^(n−1) + (n choose 2) p^2 q^(n−2) + · · · + (n choose n) p^n
          = b(0; n, p) + b(1; n, p) + b(2; n, p) + · · · + b(n; n, p).
Since p + q = 1, we see that
∑_{x=0}^{n} b(x; n, p) = 1,
a condition that must hold for any probability distribution.
Frequently, we are interested in problems where it is necessary to find P(X < r)
or P(a ≤ X ≤ b). Binomial sums
B(r; n, p) = ∑_{x=0}^{r} b(x; n, p)
are given in Table A.1 of the Appendix for n = 1, 2, . . . , 20 for selected values of p
from 0.1 to 0.9. We illustrate the use of Table A.1 with the following example.
Example 5.2: The probability that a patient recovers from a rare blood disease is 0.4. If 15 people
are known to have contracted this disease, what is the probability that (a) at least
10 survive, (b) from 3 to 8 survive, and (c) exactly 5 survive?
Solution: Let X be the number of people who survive.
(a) P(X ≥ 10) = 1 − P(X < 10) = 1 − ∑_{x=0}^{9} b(x; 15, 0.4) = 1 − 0.9662
             = 0.0338.
(b) P(3 ≤ X ≤ 8) = ∑_{x=3}^{8} b(x; 15, 0.4) = ∑_{x=0}^{8} b(x; 15, 0.4) − ∑_{x=0}^{2} b(x; 15, 0.4)
                = 0.9050 − 0.0271 = 0.8779.
(c) P(X = 5) = b(5; 15, 0.4) = ∑_{x=0}^{5} b(x; 15, 0.4) − ∑_{x=0}^{4} b(x; 15, 0.4)
            = 0.4032 − 0.2173 = 0.1859.
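When Table A.1 is not at hand, the binomial sums B(r; n, p) can be computed directly; a sketch reproducing the three answers of Example 5.2 (helper names are ours):

```python
from math import comb

def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p) ** (n - x)

def binom_cdf(r, n, p):
    # B(r; n, p) = sum of b(x; n, p) for x = 0, ..., r, as tabulated in Table A.1.
    return sum(binom_pmf(x, n, p) for x in range(r + 1))

n, p = 15, 0.4
print(1 - binom_cdf(9, n, p))                   # (a) P(X >= 10), about 0.0338
print(binom_cdf(8, n, p) - binom_cdf(2, n, p))  # (b) P(3 <= X <= 8), about 0.8779
print(binom_cdf(5, n, p) - binom_cdf(4, n, p))  # (c) P(X = 5), about 0.1859
```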
Example 5.3: A large chain retailer purchases a certain kind of electronic device from a manu-
facturer. The manufacturer indicates that the defective rate of the device is 3%.
(a) The inspector randomly picks 20 items from a shipment. What is the proba-
bility that there will be at least one defective item among these 20?
(b) Suppose that the retailer receives 10 shipments in a month and the inspector
randomly tests 20 devices per shipment. What is the probability that there
will be exactly 3 shipments each containing at least one defective device among
the 20 that are selected and tested from the shipment?
Solution: (a) Denote by X the number of defective devices among the 20. Then X follows
a b(x; 20, 0.03) distribution. Hence,
P(X ≥ 1) = 1 − P(X = 0) = 1 − b(0; 20, 0.03)
         = 1 − (0.03)^0 (1 − 0.03)^(20−0) = 0.4562.
(b) In this case, each shipment can either contain at least one defective item or
not. Hence, testing of each shipment can be viewed as a Bernoulli trial with
p = 0.4562 from part (a). Assuming independence from shipment to shipment
and denoting by Y the number of shipments containing at least one defective
item, Y follows another binomial distribution b(y; 10, 0.4562). Therefore,
P(Y = 3) = (10 choose 3) (0.4562)^3 (1 − 0.4562)^7 = 0.1602.
Areas of Application
From Examples 5.1 through 5.3, it should be clear that the binomial distribution
finds applications in many scientific fields. An industrial engineer is keenly inter-
ested in the “proportion defective” in an industrial process. Often, quality control
measures and sampling schemes for processes are based on the binomial distribu-
tion. This distribution applies to any industrial situation where an outcome of a
process is dichotomous and the results of the process are independent, with the
probability of success being constant from trial to trial. The binomial distribution
is also used extensively for medical and military applications. In both fields, a
success or failure result is important. For example, “cure” or “no cure” is impor-
tant in pharmaceutical work, and “hit” or “miss” is often the interpretation of the
result of firing a guided missile.
Since the probability distribution of any binomial random variable depends only
on the values assumed by the parameters n, p, and q, it would seem reasonable
to assume that the mean and variance of a binomial random variable also depend
on the values assumed by these parameters. Indeed, this is true, and in the proof
of Theorem 5.1 we derive general formulas that can be used to compute the mean
and variance of any binomial random variable as functions of n, p, and q.
Theorem 5.1: The mean and variance of the binomial distribution b(x; n, p) are
μ = np and σ^2 = npq.
Proof: Let the outcome on the jth trial be represented by a Bernoulli random variable
Ij, which assumes the values 0 and 1 with probabilities q and p, respectively.
Therefore, in a binomial experiment the number of successes can be written as the
sum of the n independent indicator variables. Hence,
X = I1 + I2 + · · · + In.
The mean of any Ij is E(Ij) = (0)(q) + (1)(p) = p. Therefore, using Corollary 4.4
on page 131, the mean of the binomial distribution is
μ = E(X) = E(I1) + E(I2) + · · · + E(In) = p + p + · · · + p (n terms) = np.
The variance of any Ij is σ^2_Ij = E(I^2_j) − p^2 = (0)^2 (q) + (1)^2 (p) − p^2 = p(1 − p) = pq.
Extending Corollary 4.11 to the case of n independent Bernoulli variables gives the
variance of the binomial distribution as
σ^2_X = σ^2_I1 + σ^2_I2 + · · · + σ^2_In = pq + pq + · · · + pq (n terms) = npq.
Example 5.4: It is conjectured that an impurity exists in 30% of all drinking wells in a certain
rural community. In order to gain some insight into the true extent of the problem,
it is determined that some testing is necessary. It is too expensive to test all of the
wells in the area, so 10 are randomly selected for testing.
(a) Using the binomial distribution, what is the probability that exactly 3 wells
have the impurity, assuming that the conjecture is correct?
(b) What is the probability that more than 3 wells are impure?
Solution: (a) We require
b(3; 10, 0.3) = ∑_{x=0}^{3} b(x; 10, 0.3) − ∑_{x=0}^{2} b(x; 10, 0.3) = 0.6496 − 0.3828 = 0.2668.
(b) In this case, P(X > 3) = 1 − 0.6496 = 0.3504.
Example 5.5: Find the mean and variance of the binomial random variable of Example 5.2, and
then use Chebyshev’s theorem (on page 137) to interpret the interval μ ± 2σ.
Solution: Since Example 5.2 was a binomial experiment with n = 15 and p = 0.4, by Theorem
5.1, we have
μ = (15)(0.4) = 6 and σ^2 = (15)(0.4)(0.6) = 3.6.
Taking the square root of 3.6, we find that σ = 1.897. Hence, the required interval is
6±(2)(1.897), or from 2.206 to 9.794. Chebyshev’s theorem states that the number
of recoveries among 15 patients who contracted the disease has a probability of at
least 3/4 of falling between 2.206 and 9.794 or, because the data are discrete,
between 2 and 10 inclusive.
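Chebyshev's bound of 3/4 in Example 5.5 is conservative; the exact probability over the interval can be computed from the pmf. A sketch:

```python
from math import comb, sqrt

def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p) ** (n - x)

n, p = 15, 0.4
mu = n * p                     # 6
sigma = sqrt(n * p * (1 - p))  # about 1.897

# Integer values strictly inside (mu - 2*sigma, mu + 2*sigma) = (2.206, 9.794)
# are x = 3, ..., 9; Chebyshev only guarantees this probability is >= 3/4.
exact = sum(binom_pmf(x, n, p) for x in range(3, 10))
print(exact)  # about 0.939, well above the 0.75 bound
```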
There are situations in which the computation of binomial probabilities may
allow us to draw a scientific inference about a population after data are collected.
An illustration is given in the next example.
Example 5.6: Consider the situation of Example 5.4. The notion that 30% of the wells are impure
is merely a conjecture put forth by the area water board. Suppose 10 wells are
randomly selected and 6 are found to contain the impurity. What does this imply
about the conjecture? Use a probability statement.
Solution: We must first ask: “If the conjecture is correct, is it likely that we would find 6 or
more impure wells?”
P(X ≥ 6) = ∑_{x=0}^{10} b(x; 10, 0.3) − ∑_{x=0}^{5} b(x; 10, 0.3) = 1 − 0.9527 = 0.0473.
As a result, it is very unlikely (4.7% chance) that 6 or more wells would be found
impure if only 30% of all are impure. This casts considerable doubt on the conjec-
ture and suggests that the impurity problem is much more severe.
As the reader should realize by now, in many applications there are more than
two possible outcomes. To borrow an example from the field of genetics, the color of
guinea pigs produced as offspring may be red, black, or white. Often the “defective”
or “not defective” dichotomy is truly an oversimplification in engineering situations.
Indeed, there are often more than two categories that characterize items or parts
coming off an assembly line.
Multinomial Experiments and the Multinomial Distribution
The binomial experiment becomes a multinomial experiment if we let each
trial have more than two possible outcomes. The classification of a manufactured
product as being light, heavy, or acceptable and the recording of accidents at a
certain intersection according to the day of the week constitute multinomial exper-
iments. The drawing of a card from a deck with replacement is also a multinomial
experiment if the 4 suits are the outcomes of interest.
In general, if a given trial can result in any one of k possible outcomes E1, E2, . . . ,
Ek with probabilities p1, p2, . . . , pk, then the multinomial distribution will give
the probability that E1 occurs x1 times, E2 occurs x2 times, . . . , and Ek occurs
xk times in n independent trials, where
x1 + x2 + · · · + xk = n.
We shall denote this joint probability distribution by
f(x1, x2, . . . , xk; p1, p2, . . . , pk, n).
Clearly, p1 + p2 + · · · + pk = 1, since the result of each trial must be one of the k
possible outcomes.
To derive the general formula, we proceed as in the binomial case. Since the
trials are independent, any specified order yielding x1 outcomes for E1, x2 for
E2, . . . , xk for Ek will occur with probability p1^x1 p2^x2 · · · pk^xk. The total
number of orders yielding similar outcomes for the n trials is equal to the number
of partitions of n items into k groups with x1 in the first group, x2 in the second
group, . . . , and xk in the kth group. This can be done in
(n choose x1, x2, . . . , xk) = n!/(x1! x2! · · · xk!)
ways. Since all the partitions are mutually exclusive and occur with equal
probability, we obtain the multinomial distribution by multiplying the probability
for a specified order by the total number of partitions.
Multinomial Distribution
If a given trial can result in the k outcomes E1, E2, . . . , Ek with probabilities
p1, p2, . . . , pk, then the probability distribution of the random variables X1, X2,
. . . , Xk, representing the number of occurrences for E1, E2, . . . , Ek in n
independent trials, is
f(x1, x2, . . . , xk; p1, p2, . . . , pk, n) = (n choose x1, x2, . . . , xk) p1^x1 p2^x2 · · · pk^xk,
with ∑_{i=1}^{k} xi = n and ∑_{i=1}^{k} pi = 1.
The multinomial distribution derives its name from the fact that the terms of
the multinomial expansion of (p1 + p2 + · · · + pk)^n correspond to all the possible
values of f(x1, x2, . . . , xk; p1, p2, . . . , pk, n).
Example 5.7: The complexity of arrivals and departures of planes at an airport is such that
computer simulation is often used to model the “ideal” conditions. For a certain
airport with three runways, it is known that in the ideal setting the following are
the probabilities that the individual runways are accessed by a randomly arriving
commercial jet:
Runway 1: p1 = 2/9,
Runway 2: p2 = 1/6,
Runway 3: p3 = 11/18.
What is the probability that 6 randomly arriving airplanes are distributed in the
following fashion?
Runway 1: 2 airplanes,
Runway 2: 1 airplane,
Runway 3: 3 airplanes
Solution: Using the multinomial distribution, we have
f(2, 1, 3; 2/9, 1/6, 11/18, 6) = (6 choose 2, 1, 3) (2/9)^2 (1/6)^1 (11/18)^3
    = (6!/(2! 1! 3!)) · (2^2/9^2) · (1/6) · (11^3/18^3) = 0.1127.
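The multinomial pmf is a short function over the standard library; a sketch reproducing Example 5.7 (function name is ours):

```python
from math import factorial, prod

def multinomial_pmf(xs, ps):
    # f(x1, ..., xk; p1, ..., pk, n), with n = x1 + ... + xk.
    coef = factorial(sum(xs))       # n!
    for x in xs:
        coef //= factorial(x)       # divide by x1! x2! ... xk!
    return coef * prod(p**x for p, x in zip(ps, xs))

# Example 5.7: 2, 1, and 3 arrivals on runways with p = 2/9, 1/6, 11/18.
print(multinomial_pmf([2, 1, 3], [2 / 9, 1 / 6, 11 / 18]))  # about 0.1127
```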
Exercises
5.1 A random variable X that assumes the values
x1, x2, . . . , xk is called a discrete uniform random
variable if its probability mass function is f(x) = 1/k
for all of x1, x2, . . . , xk and 0 otherwise. Find the
mean and variance of X.
5.2 Twelve people are given two identical speakers,
which they are asked to listen to for differences, if any.
Suppose that these people answer simply by guessing.
Find the probability that three people claim to have
heard a difference between the two speakers.
5.3 An employee is selected from a staff of 10 to super-
vise a certain project by selecting a tag at random from
a box containing 10 tags numbered from 1 to 10. Find
the formula for the probability distribution of X rep-
resenting the number on the tag that is drawn. What
is the probability that the number drawn is less than
4?
5.4 In a certain city district, the need for money to
buy drugs is stated as the reason for 75% of all thefts.
Find the probability that among the next 5 theft cases
reported in this district,
(a) exactly 2 resulted from the need for money to buy
drugs;
(b) at most 3 resulted from the need for money to buy
drugs.
5.5 According to Chemical Engineering Progress
(November 1990), approximately 30% of all pipework
failures in chemical plants are caused by operator error.
(a) What is the probability that out of the next 20
pipework failures at least 10 are due to operator
error?
(b) What is the probability that no more than 4 out of
20 such failures are due to operator error?
(c) Suppose, for a particular plant, that out of the ran-
dom sample of 20 such failures, exactly 5 are due
to operator error. Do you feel that the 30% figure
stated above applies to this plant? Comment.
5.6 According to a survey by the Administrative
Management Society, one-half of U.S. companies give
employees 4 weeks of vacation after they have been
with the company for 15 years. Find the probabil-
ity that among 6 companies surveyed at random, the
number that give employees 4 weeks of vacation after
15 years of employment is
(a) anywhere from 2 to 5;
(b) fewer than 3.
5.7 One prominent physician claims that 70% of those
with lung cancer are chain smokers. If his assertion is
correct,
(a) find the probability that of 10 such patients
recently admitted to a hospital, fewer than half are
chain smokers;
(b) find the probability that of 20 such patients re-
cently admitted to a hospital, fewer than half are
chain smokers.
5.8 According to a study published by a group of Uni-
versity of Massachusetts sociologists, approximately
60% of the Valium users in the state of Massachusetts
first took Valium for psychological problems. Find the
probability that among the next 8 users from this state
who are interviewed,
(a) exactly 3 began taking Valium for psychological
problems;
(b) at least 5 began taking Valium for problems that
were not psychological.
5.9 In testing a certain kind of truck tire over rugged
terrain, it is found that 25% of the trucks fail to com-
plete the test run without a blowout. Of the next 15
trucks tested, find the probability that
(a) from 3 to 6 have blowouts;
(b) fewer than 4 have blowouts;
(c) more than 5 have blowouts.
5.10 A nationwide survey of college seniors by the
University of Michigan revealed that almost 70% dis-
approve of daily pot smoking, according to a report in
Parade. If 12 seniors are selected at random and asked
their opinion, find the probability that the number who
disapprove of smoking pot daily is
(a) anywhere from 7 to 9;
(b) at most 5;
(c) not less than 8.
5.11 The probability that a patient recovers from a
delicate heart operation is 0.9. What is the probabil-
ity that exactly 5 of the next 7 patients having this
operation survive?
5.12 A traffic control engineer reports that 75% of the
vehicles passing through a checkpoint are from within
the state. What is the probability that fewer than 4 of
the next 9 vehicles are from out of state?
5.13 A national study that examined attitudes about
antidepressants revealed that approximately 70% of re-
spondents believe “antidepressants do not really cure
anything, they just cover up the real trouble.” Accord-
ing to this study, what is the probability that at least
3 of the next 5 people selected at random will hold this
opinion?
5.14 The percentage of wins for the Chicago Bulls
basketball team going into the playoffs for the 1996–97
season was 87.7. Round the 87.7 to 90 in order to use
Table A.1.
(a) What is the probability that the Bulls sweep (4-0)
the initial best-of-7 playoff series?
(b) What is the probability that the Bulls win the ini-
tial best-of-7 playoff series?
(c) What very important assumption is made in an-
swering parts (a) and (b)?
5.15 It is known that 60% of mice inoculated with a
serum are protected from a certain disease. If 5 mice
are inoculated, find the probability that
(a) none contracts the disease;
(b) fewer than 2 contract the disease;
(c) more than 3 contract the disease.
5.16 Suppose that airplane engines operate indepen-
dently and fail with probability equal to 0.4. Assuming
that a plane makes a safe flight if at least one-half of its
engines run, determine whether a 4-engine plane or a 2-
engine plane has the higher probability for a successful
flight.
5.17 If X represents the number of people in Exer-
cise 5.13 who believe that antidepressants do not cure
but only cover up the real problem, find the mean and
variance of X when 5 people are selected at random.
5.18 (a) In Exercise 5.9, how many of the 15 trucks
would you expect to have blowouts?
(b) What is the variance of the number of blowouts ex-
perienced by the 15 trucks? What does that mean?
5.19 As a student drives to school, he encounters a
traffic signal. This traffic signal stays green for 35 sec-
onds, yellow for 5 seconds, and red for 60 seconds. As-
sume that the student goes to school each weekday
between 8:00 and 8:30 a.m. Let X1 be the number of
times he encounters a green light, X2 be the number
of times he encounters a yellow light, and X3 be the
number of times he encounters a red light. Find the
joint distribution of X1, X2, and X3.
5.20 According to USA Today (March 18, 1997), of 4
million workers in the general workforce, 5.8% tested
positive for drugs. Of those testing positive, 22.5%
were cocaine users and 54.4% marijuana users.
(a) What is the probability that of 10 workers testing
positive, 2 are cocaine users, 5 are marijuana users,
and 3 are users of other drugs?
(b) What is the probability that of 10 workers testing
positive, all are marijuana users?
(c) What is the probability that of 10 workers testing
positive, none is a cocaine user?
5.21 The surface of a circular dart board has a small
center circle called the bull’s-eye and 20 pie-shaped re-
gions numbered from 1 to 20. Each of the pie-shaped
regions is further divided into three parts such that a
person throwing a dart that lands in a specific region
scores the value of the number, double the number,
or triple the number, depending on which of the three
parts the dart hits. If a person hits the bull’s-eye with
probability 0.01, hits a double with probability 0.10,
hits a triple with probability 0.05, and misses the dart
board with probability 0.02, what is the probability
that 7 throws will result in no bull’s-eyes, no triples, a
double twice, and a complete miss once?
5.22 According to a genetics theory, a certain cross of
guinea pigs will result in red, black, and white offspring
in the ratio 8:4:4. Find the probability that among 8
offspring, 5 will be red, 2 black, and 1 white.
5.23 The probabilities are 0.4, 0.2, 0.3, and 0.1, re-
spectively, that a delegate to a certain convention ar-
rived by air, bus, automobile, or train. What is the
probability that among 9 delegates randomly selected
at this convention, 3 arrived by air, 3 arrived by bus,
1 arrived by automobile, and 2 arrived by train?
5.24 A safety engineer claims that only 40% of all
workers wear safety helmets when they eat lunch at
the workplace. Assuming that this claim is right, find
the probability that 4 of 6 workers randomly chosen
will be wearing their helmets while having lunch at the
workplace.
5.25 Suppose that for a very large shipment of
integrated-circuit chips, the probability of failure for
any one chip is 0.10. Assuming that the assumptions
underlying the binomial distributions are met, find the
probability that at most 3 chips fail in a random sample
of 20.
5.26 Assuming that 6 in 10 automobile accidents are
due mainly to a speed violation, find the probabil-
ity that among 8 automobile accidents, 6 will be due
mainly to a speed violation
(a) by using the formula for the binomial distribution;
(b) by using Table A.1.
5.27 If the probability that a fluorescent light has a
useful life of at least 800 hours is 0.9, find the proba-
bilities that among 20 such lights
(a) exactly 18 will have a useful life of at least 800
hours;
(b) at least 15 will have a useful life of at least 800
hours;
(c) at least 2 will not have a useful life of at least 800
hours.
5.28 A manufacturer knows that on average 20% of
the electric toasters produced require repairs within 1
year after they are sold. When 20 toasters are ran-
domly selected, find appropriate numbers x and y such
that
(a) the probability that at least x of them will require
repairs is less than 0.5;
(b) the probability that at least y of them will not re-
quire repairs is greater than 0.8.
5.3 Hypergeometric Distribution
The simplest way to view the distinction between the binomial distribution of
Section 5.2 and the hypergeometric distribution is to note the way the sampling is
done. The types of applications for the hypergeometric are very similar to those
for the binomial distribution. We are interested in computing probabilities for the
number of observations that fall into a particular category. But in the case of the
binomial distribution, independence among trials is required. As a result, if that
distribution is applied to, say, sampling from a lot of items (deck of cards, batch
of production items), the sampling must be done with replacement of each item
after it is observed. On the other hand, the hypergeometric distribution does not
require independence and is based on sampling done without replacement.
Applications for the hypergeometric distribution are found in many areas, with
heavy use in acceptance sampling, electronic testing, and quality assurance. Ob-
viously, in many of these fields, testing is done at the expense of the item being
tested. That is, the item is destroyed and hence cannot be replaced in the sample.
Thus, sampling without replacement is necessary. A simple example with playing
cards will serve as our first illustration.
If we wish to find the probability of observing 3 red cards in 5 draws from an
ordinary deck of 52 playing cards, the binomial distribution of Section 5.2 does not
apply unless each card is replaced and the deck reshuffled before the next draw is
made. To solve the problem of sampling without replacement, let us restate the
problem. If 5 cards are drawn at random, we are interested in the probability of
selecting 3 red cards from the 26 available in the deck and 2 black cards from the 26
available in the deck. There are $\binom{26}{3}$ ways of selecting 3 red cards, and for each of
these ways we can choose 2 black cards in $\binom{26}{2}$ ways. Therefore, the total number
of ways to select 3 red and 2 black cards in 5 draws is the product $\binom{26}{3}\binom{26}{2}$. The
total number of ways to select any 5 cards from the 52 that are available is $\binom{52}{5}$.
Hence, the probability of selecting 5 cards without replacement of which 3 are red
and 2 are black is given by
$$\frac{\binom{26}{3}\binom{26}{2}}{\binom{52}{5}} = \frac{(26!/3!\,23!)(26!/2!\,24!)}{52!/5!\,47!} = 0.3251.$$
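As a numerical check, the same probability can be computed with Python's standard-library `math.comb` (our own sketch, not part of the text):

```python
from math import comb

# Probability of drawing exactly 3 red and 2 black cards in 5 draws
# without replacement from a standard 52-card deck.
favorable = comb(26, 3) * comb(26, 2)   # ways to pick 3 red and 2 black
total = comb(52, 5)                     # ways to pick any 5 cards
print(round(favorable / total, 4))      # → 0.3251
```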
In general, we are interested in the probability of selecting x successes from
the k items labeled successes and n − x failures from the N − k items labeled
failures when a random sample of size n is selected from N items. This is known
as a hypergeometric experiment, that is, one that possesses the following two
properties:
1. A random sample of size n is selected without replacement from N items.
2. Of the N items, k may be classified as successes and N − k are classified as
failures.
The number X of successes of a hypergeometric experiment is called a hyper-
geometric random variable. Accordingly, the probability distribution of the
hypergeometric variable is called the hypergeometric distribution, and its val-
ues are denoted by h(x; N, n, k), since they depend on the number of successes k
in the set N from which we select n items.
Hypergeometric Distribution in Acceptance Sampling
Like the binomial distribution, the hypergeometric distribution finds applications
in acceptance sampling, where lots of materials or parts are sampled in order to
determine whether or not the entire lot is accepted.
Example 5.8: A particular part that is used as an injection device is sold in lots of 10. The
producer deems a lot acceptable if no more than one defective is in the lot. A
sampling plan involves random sampling and testing 3 of the parts out of 10. If
none of the 3 is defective, the lot is accepted. Comment on the utility of this plan.
Solution: Let us assume that the lot is truly unacceptable (i.e., that 2 out of 10 parts are
defective). The probability that the sampling plan finds the lot acceptable is
$$P(X = 0) = \frac{\binom{2}{0}\binom{8}{3}}{\binom{10}{3}} = 0.467.$$
Thus, if the lot is truly unacceptable, with 2 defective parts, this sampling plan
will allow acceptance roughly 47% of the time. As a result, this plan should be
considered faulty.
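The acceptance probability in Example 5.8 can be reproduced in a few lines of Python; a sketch using only the standard library:

```python
from math import comb

# Example 5.8: lot of N = 10 parts with 2 defectives; sample n = 3 parts
# and accept the lot only if no defectives are found, i.e. P(X = 0).
p_accept = comb(2, 0) * comb(8, 3) / comb(10, 3)
print(round(p_accept, 3))  # → 0.467
```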
Let us now generalize in order to find a formula for h(x; N, n, k). The total
number of samples of size n chosen from N items is $\binom{N}{n}$. These samples are
assumed to be equally likely. There are $\binom{k}{x}$ ways of selecting x successes from the
k that are available, and for each of these ways we can choose the n − x failures in
$\binom{N-k}{n-x}$ ways. Thus, the total number of favorable samples among the $\binom{N}{n}$ possible
samples is given by $\binom{k}{x}\binom{N-k}{n-x}$. Hence, we have the following definition.
Hypergeometric Distribution: The probability distribution of the hypergeometric random variable X, the num-
ber of successes in a random sample of size n selected from N items of which k
are labeled success and N − k labeled failure, is
$$h(x; N, n, k) = \frac{\binom{k}{x}\binom{N-k}{n-x}}{\binom{N}{n}}, \quad \max\{0,\, n-(N-k)\} \le x \le \min\{n, k\}.$$
The range of x can be determined by the three binomial coefficients in the
definition, where x and n − x are no more than k and N − k, respectively, and
neither can be less than 0. Usually, when both k (the number of successes)
and N − k (the number of failures) are larger than the sample size n, the range of
a hypergeometric random variable will be x = 0, 1, . . . , n.
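The definition and its range rule translate directly into code; a minimal sketch (the function name `hypergeom_pmf` is our own):

```python
from math import comb

def hypergeom_pmf(x, N, n, k):
    """h(x; N, n, k): probability of x successes in a sample of size n
    drawn without replacement from N items of which k are successes."""
    lo, hi = max(0, n - (N - k)), min(n, k)
    if not lo <= x <= hi:
        return 0.0  # x outside the admissible range
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

# The card example: 3 red cards in 5 draws from a 52-card deck (k = 26 red)
print(round(hypergeom_pmf(3, 52, 5, 26), 4))  # → 0.3251
```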
Example 5.9: Lots of 40 components each are deemed unacceptable if they contain 3 or more
defectives. The procedure for sampling a lot is to select 5 components at random
and to reject the lot if a defective is found. What is the probability that exactly 1
defective is found in the sample if there are 3 defectives in the entire lot?
Solution: Using the hypergeometric distribution with n = 5, N = 40, k = 3, and x = 1, we
find the probability of obtaining 1 defective to be
$$h(1; 40, 5, 3) = \frac{\binom{3}{1}\binom{37}{4}}{\binom{40}{5}} = 0.3011.$$
Once again, this plan is not desirable since it detects a bad lot (3 defectives) only
about 30% of the time.
Theorem 5.2: The mean and variance of the hypergeometric distribution h(x; N, n, k) are
$$\mu = \frac{nk}{N} \quad \text{and} \quad \sigma^2 = \frac{N-n}{N-1}\cdot n\cdot\frac{k}{N}\left(1 - \frac{k}{N}\right).$$
The proof for the mean is shown in Appendix A.24.
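Theorem 5.2 can also be verified numerically by summing over the pmf; a sketch using the setting of Example 5.9 (N = 40, n = 5, k = 3):

```python
from math import comb

N, n, k = 40, 5, 3  # the setting of Example 5.9

def h(x):
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

xs = range(max(0, n - (N - k)), min(n, k) + 1)
mean = sum(x * h(x) for x in xs)
var = sum((x - mean) ** 2 * h(x) for x in xs)

# Compare the pmf-based moments with the closed forms of Theorem 5.2
assert abs(mean - n * k / N) < 1e-12
assert abs(var - (N - n) / (N - 1) * n * (k / N) * (1 - k / N)) < 1e-12
print(round(mean, 3), round(var, 4))  # → 0.375 0.3113
```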
Example 5.10: Let us now reinvestigate Example 3.4 on page 83. The purpose of this example was
to illustrate the notion of a random variable and the corresponding sample space.
In the example, we have a lot of 100 items of which 12 are defective. What is the
probability that in a sample of 10, 3 are defective?
Solution: Using the hypergeometric probability function, we have
$$h(3; 100, 10, 12) = \frac{\binom{12}{3}\binom{88}{7}}{\binom{100}{10}} = 0.08.$$
Example 5.11: Find the mean and variance of the random variable of Example 5.9 and then use
Chebyshev’s theorem to interpret the interval μ ± 2σ.
Solution: Since Example 5.9 was a hypergeometric experiment with N = 40, n = 5, and
k = 3, by Theorem 5.2, we have
$$\mu = \frac{(5)(3)}{40} = \frac{3}{8} = 0.375,$$
and
$$\sigma^2 = \left(\frac{40-5}{39}\right)(5)\left(\frac{3}{40}\right)\left(1 - \frac{3}{40}\right) = 0.3113.$$
Taking the square root of 0.3113, we find that σ = 0.558. Hence, the required
interval is 0.375 ± (2)(0.558), or from −0.741 to 1.491. Chebyshev’s theorem
states that the number of defectives obtained when 5 components are selected at
random from a lot of 40 components of which 3 are defective has a probability of
at least 3/4 of falling between −0.741 and 1.491. That is, at least three-fourths of
the time, the 5 components include fewer than 2 defectives.
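Chebyshev's bound is deliberately conservative; the exact probability of landing in the interval can be computed from the pmf directly (our own check, not part of the text):

```python
from math import comb

N, n, k = 40, 5, 3
# Only x = 0 and x = 1 lie in the interval (-0.741, 1.491).
p_interval = sum(comb(k, x) * comb(N - k, n - x) / comb(N, n)
                 for x in (0, 1))
print(round(p_interval, 4))  # → 0.9636, well above Chebyshev's 3/4
```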
Relationship to the Binomial Distribution
In this chapter, we discuss several important discrete distributions that have wide
applicability. Many of these distributions relate nicely to each other. The beginning
student should gain a clear understanding of these relationships. There is an
interesting relationship between the hypergeometric and the binomial distribution.
As one might expect, if n is small compared to N, the nature of the N items changes
very little in each draw. So a binomial distribution can be used to approximate
the hypergeometric distribution when n is small compared to N. In fact, as a rule
of thumb, the approximation is good when n/N ≤ 0.05.
Thus, the quantity k/N plays the role of the binomial parameter p. As a
result, the binomial distribution may be viewed as a large-population version of the
hypergeometric distribution. The mean and variance then come from the formulas
$$\mu = np = \frac{nk}{N} \quad \text{and} \quad \sigma^2 = npq = n\cdot\frac{k}{N}\left(1 - \frac{k}{N}\right).$$
Comparing these formulas with those of Theorem 5.2, we see that the mean is the
same but the variance differs by a correction factor of (N − n)/(N − 1), which is
negligible when n is small relative to N.
Example 5.12: A manufacturer of automobile tires reports that among a shipment of 5000 sent to
a local distributor, 1000 are slightly blemished. If one purchases 10 of these tires at
random from the distributor, what is the probability that exactly 3 are blemished?
Solution: Since N = 5000 is large relative to the sample size n = 10, we shall approximate the
desired probability by using the binomial distribution. The probability of obtaining
a blemished tire is 0.2. Therefore, the probability of obtaining exactly 3 blemished
tires is
h(3; 5000, 10, 1000) ≈ b(3; 10, 0.2) = 0.8791 − 0.6778 = 0.2013.
On the other hand, the exact probability is h(3; 5000, 10, 1000) = 0.2015.
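The quality of the approximation is easy to confirm in code; a sketch comparing the exact hypergeometric value with its binomial approximation for Example 5.12:

```python
from math import comb

# Example 5.12: N = 5000 tires, k = 1000 blemished, sample n = 10, x = 3
N, n, k, x = 5000, 10, 1000, 3
exact = comb(k, x) * comb(N - k, n - x) / comb(N, n)
approx = comb(n, x) * 0.2**x * 0.8**(n - x)  # binomial with p = k/N = 0.2
print(round(exact, 4), round(approx, 4))     # → 0.2015 0.2013
```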
The hypergeometric distribution can be extended to treat the case where the
N items can be partitioned into k cells A1, A2, . . . , Ak with a1 elements in the first
cell, a2 elements in the second cell, . . . , ak elements in the kth cell. We are now
interested in the probability that a random sample of size n yields x1 elements
from A1, x2 elements from A2, . . . , and xk elements from Ak. Let us represent
this probability by
f(x1, x2, . . . , xk; a1, a2, . . . , ak, N, n).
To obtain a general formula, we note that the total number of samples of size
n that can be chosen from N items is still $\binom{N}{n}$. There are $\binom{a_1}{x_1}$ ways of selecting
x1 items from the items in A1, and for each of these we can choose x2 items from
the items in A2 in $\binom{a_2}{x_2}$ ways. Therefore, we can select x1 items from A1 and x2
items from A2 in $\binom{a_1}{x_1}\binom{a_2}{x_2}$ ways. Continuing in this way, we can select all n items
consisting of x1 from A1, x2 from A2, . . . , and xk from Ak in
$$\binom{a_1}{x_1}\binom{a_2}{x_2}\cdots\binom{a_k}{x_k}$$
ways.
The required probability distribution is now defined as follows.
Multivariate Hypergeometric Distribution: If N items can be partitioned into the k cells A1, A2, . . . , Ak with a1, a2, . . . , ak
elements, respectively, then the probability distribution of the random vari-
ables X1, X2, . . . , Xk, representing the number of elements selected from
A1, A2, . . . , Ak in a random sample of size n, is
$$f(x_1, x_2, \ldots, x_k; a_1, a_2, \ldots, a_k, N, n) = \frac{\binom{a_1}{x_1}\binom{a_2}{x_2}\cdots\binom{a_k}{x_k}}{\binom{N}{n}},$$
with
$$\sum_{i=1}^{k} x_i = n \quad \text{and} \quad \sum_{i=1}^{k} a_i = N.$$
Example 5.13: A group of 10 individuals is used for a biological case study. The group contains 3
people with blood type O, 4 with blood type A, and 3 with blood type B. What is
the probability that a random sample of 5 will contain 1 person with blood type
O, 2 people with blood type A, and 2 people with blood type B?
Solution: Using the extension of the hypergeometric distribution with x1 = 1, x2 = 2, x3 = 2,
a1 = 3, a2 = 4, a3 = 3, N = 10, and n = 5, we find that the desired probability is
$$f(1, 2, 2; 3, 4, 3, 10, 5) = \frac{\binom{3}{1}\binom{4}{2}\binom{3}{2}}{\binom{10}{5}} = \frac{3}{14}.$$
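A small sketch of the multivariate pmf reproduces this value (the helper name `multi_hypergeom_pmf` is our own):

```python
from math import comb, prod

def multi_hypergeom_pmf(xs, ays, n):
    """f(x1,...,xk; a1,...,ak, N, n) with N = sum(ays).
    Assumes sum(xs) == n, i.e. the sample is fully accounted for."""
    N = sum(ays)
    return prod(comb(a, x) for a, x in zip(ays, xs)) / comb(N, n)

# Example 5.13: blood types O/A/B with 3, 4, 3 people; sample of 5
p = multi_hypergeom_pmf((1, 2, 2), (3, 4, 3), 5)
print(round(p, 4))  # → 0.2143, i.e. 3/14
```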
Exercises
5.29 A homeowner plants 6 bulbs selected at ran-
dom from a box containing 5 tulip bulbs and 4 daf-
fodil bulbs. What is the probability that he planted 2
daffodil bulbs and 4 tulip bulbs?
5.30 To avoid detection at customs, a traveler places
6 narcotic tablets in a bottle containing 9 vitamin
tablets that are similar in appearance. If the customs
official selects 3 of the tablets at random for analysis,
what is the probability that the traveler will be arrested
for illegal possession of narcotics?
5.31 A random committee of size 3 is selected from
4 doctors and 2 nurses. Write a formula for the prob-
ability distribution of the random variable X repre-
senting the number of doctors on the committee. Find
P(2 ≤ X ≤ 3).
5.32 From a lot of 10 missiles, 4 are selected at ran-
dom and fired. If the lot contains 3 defective missiles
that will not fire, what is the probability that
(a) all 4 will fire?
(b) at most 2 will not fire?
5.33 If 7 cards are dealt from an ordinary deck of 52
playing cards, what is the probability that
(a) exactly 2 of them will be face cards?
(b) at least 1 of them will be a queen?
5.34 What is the probability that a waitress will
refuse to serve alcoholic beverages to only 2 minors
if she randomly checks the IDs of 5 among 9 students,
4 of whom are minors?
5.35 A company is interested in evaluating its cur-
rent inspection procedure for shipments of 50 identical
items. The procedure is to take a sample of 5 and
pass the shipment if no more than 2 are found to be
defective. What proportion of shipments with 20% de-
fectives will be accepted?
5.36 A manufacturing company uses an acceptance
scheme on items from a production line before they
are shipped. The plan is a two-stage one. Boxes of 25
items are readied for shipment, and a sample of 3 items
is tested for defectives. If any defectives are found, the
entire box is sent back for 100% screening. If no defec-
tives are found, the box is shipped.
(a) What is the probability that a box containing 3
defectives will be shipped?
(b) What is the probability that a box containing only
1 defective will be sent back for screening?
5.37 Suppose that the manufacturing company of Ex-
ercise 5.36 decides to change its acceptance scheme.
Under the new scheme, an inspector takes 1 item at
random, inspects it, and then replaces it in the box;
a second inspector does likewise. Finally, a third in-
spector goes through the same procedure. The box is
not shipped if any of the three inspectors find a de-
fective. Answer the questions in Exercise 5.36 for this
new plan.
5.38 Among 150 IRS employees in a large city, only
30 are women. If 10 of the employees are chosen at
random to provide free tax assistance for the residents
of this city, use the binomial approximation to the hy-
pergeometric distribution to find the probability that
at least 3 women are selected.
5.39 An annexation suit against a county subdivision
of 1200 residences is being considered by a neighboring
city. If the occupants of half the residences object to
being annexed, what is the probability that in a ran-
dom sample of 10 at least 3 favor the annexation suit?
5.40 It is estimated that 4000 of the 10,000 voting
residents of a town are against a new sales tax. If 15
eligible voters are selected at random and asked their
opinion, what is the probability that at most 7 favor
the new tax?
5.41 A nationwide survey of 17,000 college seniors by
the University of Michigan revealed that almost 70%
disapprove of daily pot smoking. If 18 of these seniors
are selected at random and asked their opinion, what
is the probability that more than 9 but fewer than 14
disapprove of smoking pot daily?
5.42 Find the probability of being dealt a bridge hand
of 13 cards containing 5 spades, 2 hearts, 3 diamonds,
and 3 clubs.
5.43 A foreign student club lists as its members 2
Canadians, 3 Japanese, 5 Italians, and 2 Germans. If
a committee of 4 is selected at random, find the prob-
ability that
(a) all nationalities are represented;
(b) all nationalities except Italian are represented.
5.44 An urn contains 3 green balls, 2 blue balls, and
4 red balls. In a random sample of 5 balls, find the
probability that both blue balls and at least 1 red ball
are selected.
5.45 Biologists doing studies in a particular environ-
ment often tag and release subjects in order to estimate
the size of a population or the prevalence of certain
features in the population. Ten animals of a certain
population thought to be extinct (or near extinction)
are caught, tagged, and released in a certain region.
After a period of time, a random sample of 15 of this
type of animal is selected in the region. What is the
probability that 5 of those selected are tagged if there
are 25 animals of this type in the region?
5.46 A large company has an inspection system for
the batches of small compressors purchased from ven-
dors. A batch typically contains 15 compressors. In the
inspection system, a random sample of 5 is selected and
all are tested. Suppose there are 2 faulty compressors
in the batch of 15.
(a) What is the probability that for a given sample
there will be 1 faulty compressor?
(b) What is the probability that inspection will dis-
cover both faulty compressors?
5.47 A government task force suspects that some
manufacturing companies are in violation of federal
pollution regulations with regard to dumping a certain
type of product. Twenty firms are under suspicion but
not all can be inspected. Suppose that 3 of the firms
are in violation.
(a) What is the probability that inspection of 5 firms
will find no violations?
(b) What is the probability that the plan above will
find two violations?
5.48 Every hour, 10,000 cans of soda are filled by a
machine, among which 300 underfilled cans are pro-
duced. Each hour, a sample of 30 cans is randomly
selected and the number of ounces of soda per can is
checked. Denote by X the number of cans selected
that are underfilled. Find the probability that at least
1 underfilled can will be among those sampled.
5.4 Negative Binomial and Geometric Distributions
Let us consider an experiment where the properties are the same as those listed for
a binomial experiment, with the exception that the trials will be repeated until a
fixed number of successes occur. Therefore, instead of the probability of x successes
in n trials, where n is fixed, we are now interested in the probability that the kth
success occurs on the xth trial. Experiments of this kind are called negative
binomial experiments.
As an illustration, consider the use of a drug that is known to be effective
in 60% of the cases where it is used. The drug will be considered a success if
it is effective in bringing some degree of relief to the patient. We are interested
in finding the probability that the fifth patient to experience relief is the seventh
patient to receive the drug during a given week. Designating a success by S and a
failure by F, a possible order of achieving the desired result is SFSSSFS, which
occurs with probability
$$(0.6)(0.4)(0.6)(0.6)(0.6)(0.4)(0.6) = (0.6)^5(0.4)^2.$$
We could list all possible orders by rearranging the F’s and S’s except for the last
outcome, which must be the fifth success. The total number of possible orders
is equal to the number of partitions of the first six trials into two groups with 2
failures assigned to the one group and 4 successes assigned to the other group.
This can be done in $\binom{6}{4} = 15$ mutually exclusive ways. Hence, if X represents the
outcome on which the fifth success occurs, then
$$P(X = 7) = \binom{6}{4}(0.6)^5(0.4)^2 = 0.1866.$$
What Is the Negative Binomial Random Variable?
The number X of trials required to produce k successes in a negative binomial
experiment is called a negative binomial random variable, and its probability
distribution is called the negative binomial distribution. Since its probabilities
depend on the number of successes desired and the probability of a success on a
given trial, we shall denote them by b*(x; k, p). To obtain the general formula
for b*(x; k, p), consider the probability of a success on the xth trial preceded by
k − 1 successes and x − k failures in some specified order. Since the trials are
independent, we can multiply all the probabilities corresponding to each desired
outcome. Each success occurs with probability p and each failure with probability
q = 1 − p. Therefore, the probability for the specified order ending in success is
$$p^{k-1}q^{x-k}p = p^k q^{x-k}.$$
The total number of sample points in the experiment ending in a success, after the
occurrence of k − 1 successes and x − k failures in any order, is equal to the number
of partitions of x − 1 trials into two groups with k − 1 successes corresponding to one
group and x − k failures corresponding to the other group. This number is specified
by the term $\binom{x-1}{k-1}$, each mutually exclusive and occurring with equal probability
$p^k q^{x-k}$. We obtain the general formula by multiplying $p^k q^{x-k}$ by $\binom{x-1}{k-1}$.
Negative Binomial Distribution: If repeated independent trials can result in a success with probability p and
a failure with probability q = 1 − p, then the probability distribution of the
random variable X, the number of the trial on which the kth success occurs, is
$$b^*(x; k, p) = \binom{x-1}{k-1} p^k q^{x-k}, \quad x = k, k+1, k+2, \ldots.$$
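A minimal implementation of b*(x; k, p) (function name ours) reproduces the drug illustration above:

```python
from math import comb

def neg_binom_pmf(x, k, p):
    """b*(x; k, p): probability the kth success occurs on trial x."""
    if x < k:
        return 0.0  # cannot have k successes in fewer than k trials
    return comb(x - 1, k - 1) * p**k * (1 - p)**(x - k)

# The drug illustration: fifth success on the seventh trial, p = 0.6
print(round(neg_binom_pmf(7, 5, 0.6), 4))  # → 0.1866
```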
Example 5.14: In an NBA (National Basketball Association) championship series, the team that
wins four games out of seven is the winner. Suppose that teams A and B face each
other in the championship games and that team A has probability 0.55 of winning
a game over team B.
(a) What is the probability that team A will win the series in 6 games?
(b) What is the probability that team A will win the series?
(c) If teams A and B were facing each other in a regional playoff series, which is
decided by winning three out of five games, what is the probability that team
A would win the series?
Solution: (a) $b^*(6; 4, 0.55) = \binom{5}{3}(0.55)^4(1 - 0.55)^{6-4} = 0.1853$.
(b) P(team A wins the championship series) is
$$b^*(4; 4, 0.55) + b^*(5; 4, 0.55) + b^*(6; 4, 0.55) + b^*(7; 4, 0.55) = 0.0915 + 0.1647 + 0.1853 + 0.1668 = 0.6083.$$
(c) P(team A wins the playoff) is
$$b^*(3; 3, 0.55) + b^*(4; 3, 0.55) + b^*(5; 3, 0.55) = 0.1664 + 0.2246 + 0.2021 = 0.5931.$$
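Part (b) can be checked by summing the negative binomial probabilities over the games in which the fourth win can occur; a sketch:

```python
from math import comb

def neg_binom_pmf(x, k, p):
    """b*(x; k, p): probability the kth success occurs on trial x."""
    return comb(x - 1, k - 1) * p**k * (1 - p)**(x - k)

# Example 5.14(b): team A (p = 0.55) wins a best-of-7 series if its
# 4th win comes on game 4, 5, 6, or 7.
p_series = sum(neg_binom_pmf(x, 4, 0.55) for x in range(4, 8))
print(round(p_series, 4))  # → 0.6083
```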
The negative binomial distribution derives its name from the fact that each
term in the expansion of $p^k(1 - q)^{-k}$ corresponds to the values of $b^*(x; k, p)$ for
x = k, k + 1, k + 2, . . . . If we consider the special case of the negative binomial
distribution where k = 1, we have a probability distribution for the number of
trials required for a single success. An example would be the tossing of a coin until
a head occurs. We might be interested in the probability that the first head occurs
on the fourth toss. The negative binomial distribution reduces to the form
$$b^*(x; 1, p) = pq^{x-1}, \quad x = 1, 2, 3, \ldots.$$
Since the successive terms constitute a geometric progression, it is customary to
refer to this special case as the geometric distribution and denote its values by
g(x; p).
Geometric Distribution: If repeated independent trials can result in a success with probability p and
a failure with probability q = 1 − p, then the probability distribution of the
random variable X, the number of the trial on which the first success occurs, is
$$g(x; p) = pq^{x-1}, \quad x = 1, 2, 3, \ldots.$$
Example 5.15: For a certain manufacturing process, it is known that, on the average, 1 in every
100 items is defective. What is the probability that the fifth item inspected is the
first defective item found?
Solution: Using the geometric distribution with x = 5 and p = 0.01, we have
$$g(5; 0.01) = (0.01)(0.99)^4 = 0.0096.$$
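A one-line geometric pmf suffices to reproduce this; a sketch (function name ours):

```python
def geom_pmf(x, p):
    """g(x; p): probability that the first success occurs on trial x."""
    return p * (1 - p)**(x - 1)

# Example 5.15: first defective found on the fifth inspection, p = 0.01
print(round(geom_pmf(5, 0.01), 4))  # → 0.0096
```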
Example 5.16: At a “busy time,” a telephone exchange is very near capacity, so callers have
difficulty placing their calls. It may be of interest to know the number of attempts
necessary in order to make a connection. Suppose that we let p = 0.05 be the
probability of a connection during a busy time. We are interested in knowing the
probability that 5 attempts are necessary for a successful call.
Solution: Using the geometric distribution with x = 5 and p = 0.05 yields
$$P(X = 5) = g(5; 0.05) = (0.05)(0.95)^4 = 0.041.$$
Quite often, in applications dealing with the geometric distribution, the mean
and variance are important. For example, in Example 5.16, the expected number
of calls necessary to make a connection is quite important. The following theorem
states without proof the mean and variance of the geometric distribution.
Theorem 5.3: The mean and variance of a random variable following the geometric distribution
are
$$\mu = \frac{1}{p} \quad \text{and} \quad \sigma^2 = \frac{1-p}{p^2}.$$
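Theorem 5.3 can be checked by truncating the infinite sums; a sketch using Example 5.16's p = 0.05, where the expected number of attempts is 1/p = 20:

```python
p = 0.05  # Example 5.16: probability of connecting on any one attempt
# Truncate the infinite sums at a point where the geometric tail is negligible.
xs = range(1, 2000)
pmf = [p * (1 - p)**(x - 1) for x in xs]
mean = sum(x * f for x, f in zip(xs, pmf))
var = sum((x - mean)**2 * f for x, f in zip(xs, pmf))
print(round(mean, 3), round(var, 1))  # → 20.0 380.0, matching 1/p and (1-p)/p^2
```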
Applications of Negative Binomial and Geometric Distributions
Areas of application for the negative binomial and geometric distributions become
obvious when one focuses on the examples in this section and the exercises devoted
to these distributions at the end of Section 5.5. In the case of the geometric
distribution, Example 5.16 depicts a situation where engineers or managers are
attempting to determine how inefficient a telephone exchange system is during
busy times. Clearly, in this case, trials occurring prior to a success represent a
cost. If there is a high probability of several attempts being required prior to
making a connection, then plans should be made to redesign the system.
Applications of the negative binomial distribution are similar in nature. Sup-
pose attempts are costly in some sense and are occurring in sequence. A high
probability of needing a “large” number of attempts to experience a fixed number
of successes is not beneficial to the scientist or engineer. Consider the scenarios
of Review Exercises 5.90 and 5.91. In Review Exercise 5.91, the oil driller defines
a certain level of success from sequentially drilling locations for oil. If only 6 at-
tempts have been made at the point where the second success is experienced, the
profits appear to dominate substantially the investment incurred by the drilling.
5.5 Poisson Distribution and the Poisson Process
Experiments yielding numerical values of a random variable X, the number of
outcomes occurring during a given time interval or in a specified region, are called
Poisson experiments. The given time interval may be of any length, such as a
minute, a day, a week, a month, or even a year. For example, a Poisson experiment
can generate observations for the random variable X representing the number of
telephone calls received per hour by an office, the number of days school is closed
due to snow during the winter, or the number of games postponed due to rain
during a baseball season. The specified region could be a line segment, an area,
a volume, or perhaps a piece of material. In such instances, X might represent
the number of field mice per acre, the number of bacteria in a given culture, or
the number of typing errors per page. A Poisson experiment is derived from the
Poisson process and possesses the following properties.
Properties of the Poisson Process
1. The number of outcomes occurring in one time interval or specified region of
space is independent of the number that occur in any other disjoint time in-
terval or region. In this sense we say that the Poisson process has no memory.
2. The probability that a single outcome will occur during a very short time
interval or in a small region is proportional to the length of the time interval
or the size of the region and does not depend on the number of outcomes
occurring outside this time interval or region.
3. The probability that more than one outcome will occur in such a short time
interval or fall in such a small region is negligible.
The number X of outcomes occurring during a Poisson experiment is called a
Poisson random variable, and its probability distribution is called the Poisson
distribution. The mean number of outcomes is computed from μ = λt, where
t is the specific “time,” “distance,” “area,” or “volume” of interest. Since the
probabilities depend on λ, the rate of occurrence of outcomes, we shall denote
them by p(x; λt). The derivation of the formula for p(x; λt), based on the three
properties of a Poisson process listed above, is beyond the scope of this book. The
following formula is used for computing Poisson probabilities.
Poisson
Distribution
The probability distribution of the Poisson random variable X, representing
the number of outcomes occurring in a given time interval or specified region
denoted by t, is
p(x; λt) = e^(−λt) (λt)^x / x!, x = 0, 1, 2, . . . ,
where λ is the average number of outcomes per unit time, distance, area, or
volume and e = 2.71828 . . . .
Table A.2 contains Poisson probability sums,
P(r; λt) = Σ_{x=0}^{r} p(x; λt),
for selected values of λt ranging from 0.1 to 18.0. We illustrate the use of this table
with the following two examples.
Example 5.17: During a laboratory experiment, the average number of radioactive particles pass-
ing through a counter in 1 millisecond is 4. What is the probability that 6 particles
enter the counter in a given millisecond?
Solution: Using the Poisson distribution with x = 6 and λt = 4 and referring to Table A.2,
we have
p(6; 4) = e^(−4) 4^6 / 6! = Σ_{x=0}^{6} p(x; 4) − Σ_{x=0}^{5} p(x; 4) = 0.8893 − 0.7851 = 0.1042.
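The probability in Example 5.17 can also be computed directly from the Poisson formula, without the table. A minimal Python sketch (the function name is ours):

```python
import math

def poisson_pmf(x, mu):
    """p(x; λt) = e**(-λt) * (λt)**x / x!, with mu standing for λt."""
    return math.exp(-mu) * mu**x / math.factorial(x)

# Example 5.17: mean rate of 4 particles per millisecond, P(X = 6)
print(round(poisson_pmf(6, 4), 4))  # 0.1042
```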
Example 5.18: Ten is the average number of oil tankers arriving each day at a certain port. The
facilities at the port can handle at most 15 tankers per day. What is the probability
that on a given day tankers have to be turned away?
Solution: Let X be the number of tankers arriving each day. Then, using Table A.2, we have
P(X > 15) = 1 − P(X ≤ 15) = 1 − Σ_{x=0}^{15} p(x; 10) = 1 − 0.9513 = 0.0487.
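The cumulative sums P(r; λt) tabulated in Table A.2 are easy to reproduce by accumulating Poisson probabilities, and the tanker calculation of Example 5.18 then follows. A sketch (helper names are ours):

```python
import math

def poisson_pmf(x, mu):
    """Individual Poisson probability with mean mu = λt."""
    return math.exp(-mu) * mu**x / math.factorial(x)

def poisson_cdf(r, mu):
    """P(X <= r): the cumulative sums tabulated in Table A.2."""
    return sum(poisson_pmf(x, mu) for x in range(r + 1))

# Example 5.18: arrivals average 10 tankers/day, the port handles at most 15
p_turned_away = 1 - poisson_cdf(15, 10)
print(round(p_turned_away, 4))  # 0.0487
```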
Like the binomial distribution, the Poisson distribution is used for quality con-
trol, quality assurance, and acceptance sampling. In addition, certain important
continuous distributions used in reliability theory and queuing theory depend on
the Poisson process. Some of these distributions are discussed and developed in
Chapter 6. The following theorem concerning the Poisson random variable is given
in Appendix A.25.
Theorem 5.4: Both the mean and the variance of the Poisson distribution p(x; λt) are λt.
Nature of the Poisson Probability Function
Like so many discrete and continuous distributions, the form of the Poisson distri-
bution becomes more and more symmetric, even bell-shaped, as the mean grows
large. Figure 5.1 illustrates this, showing plots of the probability function for
μ = 0.1, μ = 2, and μ = 5. Note the nearness to symmetry when μ becomes
as large as 5. A similar condition exists for the binomial distribution, as will be
illustrated later in the text.
[Figure 5.1: Poisson density functions for different means (μ = 0.1, μ = 2, and μ = 5).]
Approximation of Binomial Distribution by a Poisson Distribution
It should be evident from the three principles of the Poisson process that the
Poisson distribution is related to the binomial distribution. Although the Poisson
usually finds applications in space and time problems, as illustrated by Examples
5.17 and 5.18, it can be viewed as a limiting form of the binomial distribution. In
the case of the binomial, if n is quite large and p is small, the conditions begin to
simulate the continuous space or time implications of the Poisson process. The in-
dependence among Bernoulli trials in the binomial case is consistent with principle
2 of the Poisson process. Allowing the parameter p to be close to 0 relates to prin-
ciple 3 of the Poisson process. Indeed, if n is large and p is close to 0, the Poisson
distribution can be used, with μ = np, to approximate binomial probabilities. If
p is close to 1, we can still use the Poisson distribution to approximate binomial
probabilities by interchanging what we have defined to be a success and a failure,
thereby changing p to a value close to 0.
Theorem 5.5: Let X be a binomial random variable with probability distribution b(x; n, p). When
n → ∞, p → 0, and np → μ remains constant, then
b(x; n, p) → p(x; μ).
Example 5.19: In a certain industrial facility, accidents occur infrequently. It is known that the
probability of an accident on any given day is 0.005 and accidents are independent
of each other.
(a) What is the probability that in any given period of 400 days there will be an
accident on one day?
(b) What is the probability that there are at most three days with an accident?
Solution: Let X be a binomial random variable with n = 400 and p = 0.005. Thus, np = 2.
Using the Poisson approximation,
(a) P(X = 1) = e^(−2) 2^1 = 0.271 and
(b) P(X ≤ 3) = Σ_{x=0}^{3} e^(−2) 2^x / x! = 0.857.
Example 5.20: In a manufacturing process where glass products are made, defects or bubbles
occur, occasionally rendering the piece undesirable for marketing. It is known
that, on average, 1 in every 1000 of these items produced has one or more bubbles.
What is the probability that a random sample of 8000 will yield fewer than 7 items
possessing bubbles?
Solution: This is essentially a binomial experiment with n = 8000 and p = 0.001. Since
p is very close to 0 and n is quite large, we shall approximate with the Poisson
distribution using
μ = (8000)(0.001) = 8.
Hence, if X represents the number of bubbles, we have
P(X < 7) = Σ_{x=0}^{6} b(x; 8000, 0.001) ≈ Σ_{x=0}^{6} p(x; 8) = 0.3134.
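For Example 5.20, the quality of the Poisson approximation can be gauged by comparing it with the exact binomial sum; Python's exact integer `math.comb` makes the binomial side feasible even for n = 8000. A sketch (helper names are ours):

```python
import math

def binom_pmf(x, n, p):
    """Exact binomial probability b(x; n, p)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, mu):
    return math.exp(-mu) * mu**x / math.factorial(x)

# Example 5.20: n = 8000, p = 0.001, so mu = np = 8
n, p = 8000, 0.001
mu = n * p

exact = sum(binom_pmf(x, n, p) for x in range(7))    # P(X < 7), binomial
approx = sum(poisson_pmf(x, mu) for x in range(7))   # Poisson approximation
print(round(approx, 4))  # 0.3134
```

With n this large and p this small, the two sums agree to about three decimal places, illustrating Theorem 5.5 in practice.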
Exercises
5.49 The probability that a person living in a certain
city owns a dog is estimated to be 0.3. Find the prob-
ability that the tenth person randomly interviewed in
that city is the fifth one to own a dog.
5.50 Find the probability that a person flipping a coin
gets
(a) the third head on the seventh flip;
(b) the first head on the fourth flip.
5.51 Three people toss a fair coin and the odd one
pays for coffee. If the coins all turn up the same, they
are tossed again. Find the probability that fewer than
4 tosses are needed.
5.52 A scientist inoculates mice, one at a time, with
a disease germ until he finds 2 that have contracted the
disease. If the probability of contracting the disease is
1/6, what is the probability that 8 mice are required?
5.53 An inventory study determines that, on aver-
age, demands for a particular item at a warehouse are
made 5 times per day. What is the probability that on
a given day this item is requested
(a) more than 5 times?
(b) not at all?
5.54 According to a study published by a group of
University of Massachusetts sociologists, about two-
thirds of the 20 million persons in this country who
take Valium are women. Assuming this figure to be a
valid estimate, find the probability that on a given day
the fifth prescription written by a doctor for Valium is
(a) the first prescribing Valium for a woman;
(b) the third prescribing Valium for a woman.
5.55 The probability that a student pilot passes the
written test for a private pilot’s license is 0.7. Find the
probability that a given student will pass the test
(a) on the third try;
(b) before the fourth try.
5.56 On average, 3 traffic accidents per month occur
at a certain intersection. What is the probability that
in any given month at this intersection
(a) exactly 5 accidents will occur?
(b) fewer than 3 accidents will occur?
(c) at least 2 accidents will occur?
5.57 On average, a textbook author makes two word-
processing errors per page on the first draft of her text-
book. What is the probability that on the next page
she will make
(a) 4 or more errors?
(b) no errors?
5.58 A certain area of the eastern United States is,
on average, hit by 6 hurricanes a year. Find the prob-
ability that in a given year that area will be hit by
(a) fewer than 4 hurricanes;
(b) anywhere from 6 to 8 hurricanes.
5.59 Suppose the probability that any given person
will believe a tale about the transgressions of a famous
actress is 0.8. What is the probability that
(a) the sixth person to hear this tale is the fourth one
to believe it?
(b) the third person to hear this tale is the first one to
believe it?
5.60 The average number of field mice per acre in
a 5-acre wheat field is estimated to be 12. Find the
probability that fewer than 7 field mice are found
(a) on a given acre;
(b) on 2 of the next 3 acres inspected.
5.61 Suppose that, on average, 1 person in 1000
makes a numerical error in preparing his or her income
tax return. If 10,000 returns are selected at random
and examined, find the probability that 6, 7, or 8 of
them contain an error.
5.62 The probability that a student at a local high
school fails the screening test for scoliosis (curvature
of the spine) is known to be 0.004. Of the next 1875
students at the school who are screened for scoliosis,
find the probability that
(a) fewer than 5 fail the test;
(b) 8, 9, or 10 fail the test.
5.63 Find the mean and variance of the random vari-
able X in Exercise 5.58, representing the number of
hurricanes per year to hit a certain area of the eastern
United States.
5.64 Find the mean and variance of the random vari-
able X in Exercise 5.61, representing the number of
persons among 10,000 who make an error in preparing
their income tax returns.
5.65 An automobile manufacturer is concerned about
a fault in the braking mechanism of a particular model.
The fault can, on rare occasions, cause a catastrophe at
high speed. The distribution of the number of cars per
year that will experience the catastrophe is a Poisson
random variable with λ = 5.
(a) What is the probability that at most 3 cars per year
will experience a catastrophe?
(b) What is the probability that more than 1 car per
year will experience a catastrophe?
5.66 Changes in airport procedures require consid-
erable planning. Arrival rates of aircraft are impor-
tant factors that must be taken into account. Suppose
small aircraft arrive at a certain airport, according to
a Poisson process, at the rate of 6 per hour. Thus, the
Poisson parameter for arrivals over a period of hours is
μ = 6t.
(a) What is the probability that exactly 4 small air-
craft arrive during a 1-hour period?
(b) What is the probability that at least 4 arrive during
a 1-hour period?
(c) If we define a working day as 12 hours, what is
the probability that at least 75 small aircraft ar-
rive during a working day?
5.67 The number of customers arriving per hour at a
certain automobile service facility is assumed to follow
a Poisson distribution with mean λ = 7.
(a) Compute the probability that more than 10 cus-
tomers will arrive in a 2-hour period.
(b) What is the mean number of arrivals during a
2-hour period?
5.68 Consider Exercise 5.62. What is the mean num-
ber of students who fail the test?
5.69 The probability that a person will die when he
or she contracts a virus infection is 0.001. Of the next
4000 people infected, what is the mean number who
will die?
5.70 A company purchases large lots of a certain kind
of electronic device. A method is used that rejects a
lot if 2 or more defective units are found in a random
sample of 100 units.
(a) What is the mean number of defective units found
in a sample of 100 units if the lot is 1% defective?
(b) What is the variance?
5.71 For a certain type of copper wire, it is known
that, on the average, 1.5 flaws occur per millimeter.
Assuming that the number of flaws is a Poisson random
variable, what is the probability that no flaws occur in
a certain portion of wire of length 5 millimeters? What
is the mean number of flaws in a portion of length 5
millimeters?
5.72 Potholes on a highway can be a serious problem,
and are in constant need of repair. With a particular
type of terrain and make of concrete, past experience
suggests that there are, on the average, 2 potholes per
mile after a certain amount of usage. It is assumed
that the Poisson process applies to the random vari-
able “number of potholes.”
(a) What is the probability that no more than one pot-
hole will appear in a section of 1 mile?
(b) What is the probability that no more than 4 pot-
holes will occur in a given section of 5 miles?
5.73 Hospital administrators in large cities anguish
about traffic in emergency rooms. At a particular hos-
pital in a large city, the staff on hand cannot accom-
modate the patient traffic if there are more than 10
emergency cases in a given hour. It is assumed that
patient arrival follows a Poisson process, and historical
data suggest that, on the average, 5 emergencies arrive
per hour.
(a) What is the probability that in a given hour the
staff cannot accommodate the patient traffic?
(b) What is the probability that more than 20 emer-
gencies arrive during a 3-hour shift?
5.74 It is known that 3% of people whose luggage
is screened at an airport have questionable objects in
their luggage. What is the probability that a string of
15 people pass through screening successfully before an
individual is caught with a questionable object? What
is the expected number of people to pass through be-
fore an individual is stopped?
5.75 Computer technology has produced an environ-
ment in which robots operate with the use of micro-
processors. The probability that a robot fails during
any 6-hour shift is 0.10. What is the probability that
a robot will operate through at most 5 shifts before it
fails?
5.76 The refusal rate for telephone polls is known to
be approximately 20%. A newspaper report indicates
that 50 people were interviewed before the first refusal.
(a) Comment on the validity of the report. Use a prob-
ability in your argument.
(b) What is the expected number of people interviewed
before a refusal?
Review Exercises
5.77 During a manufacturing process, 15 units are
randomly selected each day from the production line
to check the percent defective. From historical infor-
mation it is known that the probability of a defective
unit is 0.05. Any time 2 or more defectives are found
in the sample of 15, the process is stopped. This proce-
dure is used to provide a signal in case the probability
of a defective has increased.
(a) What is the probability that on any given day the
production process will be stopped? (Assume 5%
defective.)
(b) Suppose that the probability of a defective has in-
creased to 0.07. What is the probability that on
any given day the production process will not be
stopped?
5.78 An automatic welding machine is being consid-
ered for use in a production process. It will be con-
sidered for purchase if it is successful on 99% of its
welds. Otherwise, it will not be considered efficient.
A test is to be conducted with a prototype that is to
perform 100 welds. The machine will be accepted for
manufacture if it misses no more than 3 welds.
(a) What is the probability that a good machine will
be rejected?
(b) What is the probability that an inefficient machine
with 95% welding success will be accepted?
5.79 A car rental agency at a local airport has avail-
able 5 Fords, 7 Chevrolets, 4 Dodges, 3 Hondas, and 4
Toyotas. If the agency randomly selects 9 of these cars
to chauffeur delegates from the airport to the down-
town convention center, find the probability that 2
Fords, 3 Chevrolets, 1 Dodge, 1 Honda, and 2 Toyotas
are used.
5.80 Service calls come to a maintenance center ac-
cording to a Poisson process, and on average, 2.7 calls
are received per minute. Find the probability that
(a) no more than 4 calls come in any minute;
(b) fewer than 2 calls come in any minute;
(c) more than 10 calls come in a 5-minute period.
5.81 An electronics firm claims that the proportion of
defective units from a certain process is 5%. A buyer
has a standard procedure of inspecting 15 units selected
randomly from a large lot. On a particular occasion,
the buyer found 5 items defective.
(a) What is the probability of this occurrence, given
that the claim of 5% defective is correct?
(b) What would be your reaction if you were the buyer?
5.82 An electronic switching device occasionally mal-
functions, but the device is considered satisfactory if it
makes, on average, no more than 0.20 error per hour.
A particular 5-hour period is chosen for testing the de-
vice. If no more than 1 error occurs during the time
period, the device will be considered satisfactory.
(a) What is the probability that a satisfactory device
will be considered unsatisfactory on the basis of the
test? Assume a Poisson process.
(b) What is the probability that a device will be ac-
cepted as satisfactory when, in fact, the mean num-
ber of errors is 0.25? Again, assume a Poisson pro-
cess.
5.83 A company generally purchases large lots of a
certain kind of electronic device. A method is used
that rejects a lot if 2 or more defective units are found
in a random sample of 100 units.
(a) What is the probability of rejecting a lot that is 1%
defective?
(b) What is the probability of accepting a lot that is
5% defective?
5.84 A local drugstore owner knows that, on average,
100 people enter his store each hour.
(a) Find the probability that in a given 3-minute pe-
riod nobody enters the store.
(b) Find the probability that in a given 3-minute pe-
riod more than 5 people enter the store.
5.85 (a) Suppose that you throw 4 dice. Find the
probability that you get at least one 1.
(b) Suppose that you throw 2 dice 24 times. Find the
probability that you get at least one (1, 1), that is,
“snake-eyes.”
5.86 Suppose that out of 500 lottery tickets sold, 200
pay off at least the cost of the ticket. Now suppose
that you buy 5 tickets. Find the probability that you
will win back at least the cost of 3 tickets.
5.87 Imperfections in computer circuit boards and
computer chips lend themselves to statistical treat-
ment. For a particular type of board, the probability
of a diode failure is 0.03 and the board contains 200
diodes.
(a) What is the mean number of failures among the
diodes?
(b) What is the variance?
(c) The board will work if there are no defective diodes.
What is the probability that a board will work?
5.88 The potential buyer of a particular engine re-
quires (among other things) that the engine start suc-
cessfully 10 consecutive times. Suppose the probability
of a successful start is 0.990. Let us assume that the
outcomes of attempted starts are independent.
(a) What is the probability that the engine is accepted
after only 10 starts?
(b) What is the probability that 12 attempted starts
are made during the acceptance process?
5.89 The acceptance scheme for purchasing lots con-
taining a large number of batteries is to test no more
than 75 randomly selected batteries and to reject a lot
if a single battery fails. Suppose the probability of a
failure is 0.001.
(a) What is the probability that a lot is accepted?
(b) What is the probability that a lot is rejected on the
20th test?
(c) What is the probability that it is rejected in 10 or
fewer trials?
5.90 An oil drilling company ventures into various lo-
cations, and its success or failure is independent from
one location to another. Suppose the probability of a
success at any specific location is 0.25.
(a) What is the probability that the driller drills at 10
locations and has 1 success?
(b) The driller will go bankrupt if it drills 10 times be-
fore the first success occurs. What are the driller’s
prospects for bankruptcy?
5.91 Consider the information in Review Exercise
5.90. The drilling company feels that it will “hit it
big” if the second success occurs on or before the sixth
attempt. What is the probability that the driller will
hit it big?
5.92 A couple decides to continue to have children un-
til they have two males. Assuming that P(male) = 0.5,
what is the probability that their second male is their
fourth child?
5.93 It is known by researchers that 1 in 100 people
carries a gene that leads to the inheritance of a certain
chronic disease. In a random sample of 1000 individ-
uals, what is the probability that fewer than 7 indi-
viduals carry the gene? Use a Poisson approximation.
Again, using the approximation, what is the approxi-
mate mean number of people out of 1000 carrying the
gene?
5.94 A production process produces electronic com-
ponent parts. It is presumed that the probability of a
defective part is 0.01. During a test of this presump-
tion, 500 parts are sampled randomly and 15 defectives
are observed.
(a) What is your response to the presumption that the
process is 1% defective? Be sure that a computed
probability accompanies your comment.
(b) Under the presumption of a 1% defective process,
what is the probability that only 3 parts will be
found defective?
(c) Do parts (a) and (b) again using the Poisson ap-
proximation.
5.95 A production process outputs items in lots of 50.
Sampling plans exist in which lots are pulled aside pe-
riodically and exposed to a certain type of inspection.
It is usually assumed that the proportion defective is
very small. It is important to the company that lots
containing defectives be a rare event. The current in-
spection plan is to periodically sample randomly 10 out
of the 50 items in a lot and, if none are defective, to
perform no intervention.
(a) Suppose in a lot chosen at random, 2 out of 50 are
defective. What is the probability that at least 1
in the sample of 10 from the lot is defective?
(b) From your answer to part (a), comment on the
quality of this sampling plan.
(c) What is the mean number of defects found out of
10 items sampled?
5.96 Consider the situation of Review Exercise 5.95.
It has been determined that the sampling plan should
be extensive enough that there is a high probability,
say 0.9, that if as many as 2 defectives exist in the lot
of 50 being sampled, at least 1 will be found in the
sampling. With these restrictions, how many of the 50
items should be sampled?
5.97 National security requires that defense technol-
ogy be able to detect incoming projectiles or missiles.
To make the defense system successful, multiple radar
screens are required. Suppose that three independent
screens are to be operated and the probability that any
one screen will detect an incoming missile is 0.8. Ob-
viously, if no screens detect an incoming projectile, the
system is unworthy and must be improved.
(a) What is the probability that an incoming missile
will not be detected by any of the three screens?
(b) What is the probability that the missile will be de-
tected by only one screen?
(c) What is the probability that it will be detected by
at least two out of three screens?
5.98 Suppose it is important that the overall missile
defense system be as near perfect as possible.
(a) Assuming the quality of the screens is as indicated
in Review Exercise 5.97, how many are needed
to ensure that the probability that a missile gets
through undetected is 0.0001?
(b) Suppose it is decided to stay with only 3 screens
and attempt to improve the screen detection abil-
ity. What must the individual screen effectiveness
(i.e., probability of detection) be in order to achieve
the effectiveness required in part (a)?
5.99 Go back to Review Exercise 5.95(a). Re-
compute the probability using the binomial distribu-
tion. Comment.
5.100 There are two vacancies in a certain university
statistics department. Five individuals apply. Two
have expertise in linear models, and one has exper-
tise in applied probability. The search committee is
instructed to choose the two applicants randomly.
(a) What is the probability that the two chosen are
those with expertise in linear models?
(b) What is the probability that of the two chosen, one
has expertise in linear models and one has expertise
in applied probability?
5.101 The manufacturer of a tricycle for children has
received complaints about defective brakes in the prod-
uct. According to the design of the product and consid-
erable preliminary testing, it had been determined that
the probability of the kind of defect in the complaint
was 1 in 10,000 (i.e., 0.0001). After a thorough investi-
gation of the complaints, it was determined that during
a certain period of time, 200 products were randomly
chosen from production and 5 had defective brakes.
(a) Comment on the “1 in 10,000” claim by the man-
ufacturer. Use a probabilistic argument. Use the
binomial distribution for your calculations.
(b) Repeat part (a) using the Poisson approximation.
5.102 Group Project: Divide the class into two
groups of approximately equal size. The students in
group 1 will each toss a coin 10 times (n1) and count
the number of heads obtained. The students in group 2
will each toss a coin 40 times (n2) and again count the
number of heads. The students in each group should
individually compute the proportion of heads observed,
which is an estimate of p, the probability of observing
a head. Thus, there will be a set of values of p1 (from
group 1) and a set of values p2 (from group 2). All of
the values of p1 and p2 are estimates of 0.5, which is
the true value of the probability of observing a head
for a fair coin.
(a) Which set of values is consistently closer to 0.5, the
values of p1 or p2? Consider the proof of Theorem
5.1 on page 147 with regard to the estimates of the
parameter p = 0.5. The values of p1 were obtained
with n = n1 = 10, and the values of p2 were ob-
tained with n = n2 = 40. Using the notation of the
proof, the estimates are given by
p1 = x1/n1 = (I1 + · · · + In1)/n1,
where I1, . . . , In1 are 0s and 1s and n1 = 10, and
p2 = x2/n2 = (I1 + · · · + In2)/n2,
where I1, . . . , In2, again, are 0s and 1s and n2 = 40.
(b) Referring again to Theorem 5.1, show that
E(p1) = E(p2) = p = 0.5.
(c) Show that σ²_p1 = σ²_X1/n1 is 4 times the value of σ²_p2 = σ²_X2/n2. Then explain
further why the values of p2 from group 2 are more consistently closer to
the true value, p = 0.5, than the values of p1 from group 1.
You will continue to learn more and more about
parameter estimation beginning in Chapter 9. At
that point, emphasis will be put on the importance of
the mean and variance of an estimator of a param-
eter.
5.6 Potential Misconceptions and Hazards;
Relationship to Material in Other Chapters
The discrete distributions discussed in this chapter occur with great frequency in
engineering and the biological and physical sciences. The exercises and examples
certainly suggest this. Industrial sampling plans and many engineering judgments
are based on the binomial and Poisson distributions as well as on the hypergeo-
metric distribution. While the geometric and negative binomial distributions are
used to a somewhat lesser extent, they also find applications. In particular, a neg-
ative binomial random variable can be viewed as a mixture of Poisson and gamma
random variables (the gamma distribution will be discussed in Chapter 6).
Despite the rich heritage that these distributions find in real life, they can
be misused unless the scientific practitioner is prudent and cautious. Of course,
any probability calculation for the distributions discussed in this chapter is made
under the assumption that the parameter value is known. Real-world applications
often result in a parameter value that may “move around” due to factors that are
difficult to control in the process or because of interventions in the process that
have not been taken into account. For example, in Review Exercise 5.77, “historical
information” is used. But is the process that exists now the same as that under
which the historical data were collected? The use of the Poisson distribution can
suffer even more from this kind of difficulty. For example, in Review Exercise 5.80,
the questions in parts (a), (b), and (c) are based on the use of μ = 2.7 calls per
minute. Based on historical records, this is the number of calls that occur “on
average.” But in this and many other applications of the Poisson distribution,
there are slow times and busy times and so there are times in which the conditions
for the Poisson process may appear to hold when in fact they do not. Thus,
the probability calculations may be incorrect. In the case of the binomial, the
assumption that may fail in certain applications (in addition to nonconstancy of p)
is the independence assumption, stating that the Bernoulli trials are independent.
One of the most famous misuses of the binomial distribution occurred in the
1961 baseball season, when Mickey Mantle and Roger Maris were engaged in a
friendly battle to break Babe Ruth’s all-time record of 60 home runs. A famous
magazine article made a prediction, based on probability theory, that Mantle would
break the record. The prediction was based on probability calculation with the use
of the binomial distribution. The classic error made was to estimate the param-
eter p (one for each player) based on relative historical frequency of home runs
throughout the players’ careers. Maris, unlike Mantle, had not been a prodigious
home run hitter prior to 1961 so his estimate of p was quite low. As a result, the
calculated probability of breaking the record was quite high for Mantle and low for
Maris. The end result: Mantle failed to break the record and Maris succeeded.
Chapter 6
Some Continuous Probability
Distributions
6.1 Continuous Uniform Distribution
One of the simplest continuous distributions in all of statistics is the continuous
uniform distribution. This distribution is characterized by a density function
that is “flat,” and thus the probability is uniform in a closed interval, say [A, B].
Although applications of the continuous uniform distribution are not as abundant
as those for other distributions discussed in this chapter, it is appropriate for the
novice to begin this introduction to continuous distributions with the uniform
distribution.
Uniform Distribution
The density function of the continuous uniform random variable X on the interval [A, B] is

f(x; A, B) = 1/(B − A), A ≤ x ≤ B, and f(x; A, B) = 0 elsewhere.

The density function forms a rectangle with base B − A and constant height 1/(B − A).
As a result, the uniform distribution is often called the rectangular distribution.
Note, however, that the interval need not be closed; it can be the open interval (A, B) as well. The density function for a uniform random variable on the interval [1, 3]
is shown in Figure 6.1.
Probabilities are simple to calculate for the uniform distribution because of the
simple nature of the density function. However, note that the application of this
distribution is based on the assumption that the probability of falling in an interval
of fixed length within [A, B] is constant.
Example 6.1: Suppose that a large conference room at a certain company can be reserved for no
more than 4 hours. Both long and short conferences occur quite often. In fact, it
can be assumed that the length X of a conference has a uniform distribution on
the interval [0, 4].
Figure 6.1: The density function for a random variable on the interval [1, 3].
(a) What is the probability density function?
(b) What is the probability that any given conference lasts at least 3 hours?
Solution: (a) The appropriate density function for the uniformly distributed random variable X in this situation is

f(x) = 1/4, 0 ≤ x ≤ 4, and f(x) = 0 elsewhere.

(b) P[X ≥ 3] = ∫_3^4 (1/4) dx = 1/4.
Theorem 6.1: The mean and variance of the uniform distribution are

μ = (A + B)/2 and σ² = (B − A)²/12.
The proofs of the theorems are left to the reader. See Exercise 6.1 on page 185.
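These uniform-distribution formulas are simple enough to check numerically. The sketch below reproduces Example 6.1 and Theorem 6.1 for the interval [0, 4] using only the Python standard library; the function name `uniform_pdf` is our own choice for illustration, not notation from the text.

```python
from fractions import Fraction

def uniform_pdf(x, a, b):
    """Density f(x; A, B) of the continuous uniform distribution on [a, b]."""
    return Fraction(1, b - a) if a <= x <= b else Fraction(0)

# Example 6.1: conference length X is uniform on [0, 4].
A, B = 0, 4
# P(X >= 3) is the area of a rectangle of width (4 - 3) and height 1/(B - A).
p_at_least_3 = (B - 3) * uniform_pdf(3, A, B)

# Theorem 6.1 in closed form.
mean = Fraction(A + B, 2)
variance = Fraction((B - A) ** 2, 12)

print(p_at_least_3, mean, variance)  # 1/4 2 4/3
```

Exact rational arithmetic via `fractions` keeps the answers in the same form the text uses (1/4 rather than 0.25).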
6.2 Normal Distribution
The most important continuous probability distribution in the entire field of statis-
tics is the normal distribution. Its graph, called the normal curve, is the
bell-shaped curve of Figure 6.2, which approximately describes many phenomena
that occur in nature, industry, and research. For example, physical measurements
in areas such as meteorological experiments, rainfall studies, and measurements
of manufactured parts are often more than adequately explained with a normal
distribution. In addition, errors in scientific measurements are extremely well approximated by a normal distribution. In 1733, Abraham DeMoivre developed the mathematical equation of the normal curve. It provided a basis on which much of the theory of inductive statistics is founded. The normal distribution is often referred to as the Gaussian distribution, in honor of Karl Friedrich Gauss
Figure 6.2: The normal curve.
(1777–1855), who also derived its equation from a study of errors in repeated mea-
surements of the same quantity.
A continuous random variable X having the bell-shaped distribution of Figure
6.2 is called a normal random variable. The mathematical equation for the
probability distribution of the normal variable depends on the two parameters μ
and σ, its mean and standard deviation, respectively. Hence, we denote the values
of the density of X by n(x; μ, σ).
Normal Distribution
The density of the normal random variable X, with mean μ and variance σ², is

n(x; μ, σ) = (1/(√(2π) σ)) e^{−(x−μ)²/(2σ²)}, −∞ < x < ∞,

where π = 3.14159 . . . and e = 2.71828 . . . .
Once μ and σ are specified, the normal curve is completely determined. For exam-
ple, if μ = 50 and σ = 5, then the ordinates n(x; 50, 5) can be computed for various
values of x and the curve drawn. In Figure 6.3, we have sketched two normal curves
having the same standard deviation but different means. The two curves are iden-
tical in form but are centered at different positions along the horizontal axis.
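The ordinates n(x; 50, 5) mentioned above can be computed directly from the density formula. The sketch below does this with the Python standard library and cross-checks each value against `statistics.NormalDist.pdf`; the particular x values are arbitrary choices for illustration.

```python
import math
from statistics import NormalDist

def normal_density(x, mu, sigma):
    """n(x; mu, sigma) evaluated directly from the normal density formula."""
    coeff = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

curve = NormalDist(mu=50, sigma=5)
for x in (40, 50, 60):
    ordinate = normal_density(x, 50, 5)
    # The hand-rolled formula and the library implementation agree.
    assert math.isclose(ordinate, curve.pdf(x))
    print(x, round(ordinate, 5))
```

Note the symmetry: the ordinates at x = 40 and x = 60 are equal, since both lie two standard deviations from μ = 50.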
Figure 6.3: Normal curves with μ1 < μ2 and σ1 = σ2.
Figure 6.4: Normal curves with μ1 = μ2 and σ1 < σ2.
In Figure 6.4, we have sketched two normal curves with the same mean but
different standard deviations. This time we see that the two curves are centered
at exactly the same position on the horizontal axis, but the curve with the larger
standard deviation is lower and spreads out farther. Remember that the area under
a probability curve must be equal to 1, and therefore the more variable the set of
observations, the lower and wider the corresponding curve will be.
Figure 6.5 shows two normal curves having different means and different stan-
dard deviations. Clearly, they are centered at different positions on the horizontal
axis and their shapes reflect the two different values of σ.
Figure 6.5: Normal curves with μ1 < μ2 and σ1 < σ2.
Based on inspection of Figures 6.2 through 6.5 and examination of the first
and second derivatives of n(x; μ, σ), we list the following properties of the normal
curve:
1. The mode, which is the point on the horizontal axis where the curve is a
maximum, occurs at x = μ.
2. The curve is symmetric about a vertical axis through the mean μ.
3. The curve has its points of inflection at x = μ ± σ; it is concave downward if
μ − σ < X < μ + σ and is concave upward otherwise.
4. The normal curve approaches the horizontal axis asymptotically as we proceed
in either direction away from the mean.
5. The total area under the curve and above the horizontal axis is equal to 1.
Theorem 6.2: The mean and variance of n(x; μ, σ) are μ and σ², respectively. Hence, the standard deviation is σ.
Proof: To evaluate the mean, we first calculate

E(X − μ) = ∫_{−∞}^{∞} [(x − μ)/(√(2π) σ)] e^{−(1/2)[(x−μ)/σ]²} dx.

Setting z = (x − μ)/σ and dx = σ dz, we obtain

E(X − μ) = (1/√(2π)) ∫_{−∞}^{∞} z e^{−z²/2} dz = 0,

since the integrand above is an odd function of z. Using Theorem 4.5 on page 128, we conclude that

E(X) = μ.

The variance of the normal distribution is given by

E[(X − μ)²] = (1/(√(2π) σ)) ∫_{−∞}^{∞} (x − μ)² e^{−(1/2)[(x−μ)/σ]²} dx.

Again setting z = (x − μ)/σ and dx = σ dz, we obtain

E[(X − μ)²] = (σ²/√(2π)) ∫_{−∞}^{∞} z² e^{−z²/2} dz.

Integrating by parts with u = z and dv = z e^{−z²/2} dz, so that du = dz and v = −e^{−z²/2}, we find that

E[(X − μ)²] = (σ²/√(2π)) [−z e^{−z²/2} |_{−∞}^{∞} + ∫_{−∞}^{∞} e^{−z²/2} dz] = σ²(0 + 1) = σ².
Many random variables have probability distributions that can be described
adequately by the normal curve once μ and σ² are specified. In this chapter, we shall assume that these two parameters are known, perhaps from previous investigations. Later, we shall make statistical inferences when μ and σ² are unknown
and have been estimated from the available experimental data.
We pointed out earlier the role that the normal distribution plays as a reason-
able approximation of scientific variables in real-life experiments. There are other
applications of the normal distribution that the reader will appreciate as he or she
moves on in the book. The normal distribution finds enormous application as a
limiting distribution. Under certain conditions, the normal distribution provides a
good continuous approximation to the binomial and hypergeometric distributions.
The case of the approximation to the binomial is covered in Section 6.5. In Chap-
ter 8, the reader will learn about sampling distributions. It turns out that the
limiting distribution of sample averages is normal. This provides a broad base
for statistical inference that proves very valuable to the data analyst interested in
estimation and hypothesis testing. Theory in important areas such as analysis of variance (Chapters 13, 14, and 15) and quality control (Chapter 17) is based on assumptions that make use of the normal distribution.
In Section 6.3, examples demonstrate the use of tables of the normal distribu-
tion. Section 6.4 follows with examples of applications of the normal distribution.
6.3 Areas under the Normal Curve
The curve of any continuous probability distribution or density function is con-
structed so that the area under the curve bounded by the two ordinates x = x1
and x = x2 equals the probability that the random variable X assumes a value
between x = x1 and x = x2. Thus, for the normal curve in Figure 6.6,
P(x1 < X < x2) = ∫_{x1}^{x2} n(x; μ, σ) dx = (1/(√(2π) σ)) ∫_{x1}^{x2} e^{−(x−μ)²/(2σ²)} dx
is represented by the area of the shaded region.
Figure 6.6: P(x1 < X < x2) = area of the shaded region.
In Figures 6.3, 6.4, and 6.5 we saw how the normal curve is dependent on
the mean and the standard deviation of the distribution under investigation. The
area under the curve between any two ordinates must then also depend on the
values μ and σ. This is evident in Figure 6.7, where we have shaded regions cor-
responding to P(x1  X  x2) for two curves with different means and variances.
P(x1  X  x2), where X is the random variable describing distribution A, is
indicated by the shaded area below the curve of A. If X is the random variable de-
scribing distribution B, then P(x1  X  x2) is given by the entire shaded region.
Obviously, the two shaded regions are different in size; therefore, the probability
associated with each distribution will be different for the two given values of X.
There are many types of statistical software that can be used in calculating
areas under the normal curve. The difficulty encountered in solving integrals of
normal density functions necessitates the tabulation of normal curve areas for quick
reference. However, it would be a hopeless task to attempt to set up separate tables
for every conceivable value of μ and σ. Fortunately, we are able to transform all
the observations of any normal random variable X into a new set of observations
Figure 6.7: P(x1 < X < x2) for different normal curves.
of a normal random variable Z with mean 0 and variance 1. This can be done by
means of the transformation
Z = (X − μ)/σ.
Whenever X assumes a value x, the corresponding value of Z is given by z =
(x − μ)/σ. Therefore, if X falls between the values x = x1 and x = x2, the
random variable Z will fall between the corresponding values z1 = (x1 − μ)/σ and
z2 = (x2 − μ)/σ. Consequently, we may write
P(x1 < X < x2) = (1/(√(2π) σ)) ∫_{x1}^{x2} e^{−(x−μ)²/(2σ²)} dx = (1/√(2π)) ∫_{z1}^{z2} e^{−z²/2} dz
= ∫_{z1}^{z2} n(z; 0, 1) dz = P(z1 < Z < z2),
where Z is seen to be a normal random variable with mean 0 and variance 1.
Definition 6.1: The distribution of a normal random variable with mean 0 and variance 1 is called
a standard normal distribution.
The original and transformed distributions are illustrated in Figure 6.8. Since
all the values of X falling between x1 and x2 have corresponding z values between
z1 and z2, the area under the X-curve between the ordinates x = x1 and x = x2 in
Figure 6.8 equals the area under the Z-curve between the transformed ordinates
z = z1 and z = z2.
We have now reduced the required number of tables of normal-curve areas to one, that of the standard normal distribution. Table A.3 indicates the area under the standard normal curve corresponding to P(Z < z) for values of z ranging from −3.49 to 3.49. To illustrate the use of this table, let us find the probability that Z is less than 1.74. First, we locate a value of z equal to 1.7 in the left column; then we move across the row to the column under 0.04, where we read 0.9591. Therefore, P(Z < 1.74) = 0.9591. To find a z value corresponding to a given probability, the process is reversed. For example, the z value leaving an area of 0.2148 under the curve to the left of z is seen to be −0.79.
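Readers without Table A.3 at hand can reproduce these lookups in software. `statistics.NormalDist` in the Python standard library supplies the standard normal cdf and its inverse; the results agree with the table to its four-decimal precision.

```python
from statistics import NormalDist

Z = NormalDist(0, 1)  # the standard normal distribution

# Forward lookup: the table entry for z = 1.74 is P(Z < 1.74).
print(round(Z.cdf(1.74), 4))        # 0.9591

# Reverse lookup: the z value with area 0.2148 to its left.
print(round(Z.inv_cdf(0.2148), 2))  # -0.79
```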
Figure 6.8: The original and transformed normal distributions.
Example 6.2: Given a standard normal distribution, find the area under the curve that lies
(a) to the right of z = 1.84 and
(b) between z = −1.97 and z = 0.86.
Figure 6.9: Areas for Example 6.2.
Solution: See Figure 6.9 for the specific areas.
(a) The area in Figure 6.9(a) to the right of z = 1.84 is equal to 1 minus the area
in Table A.3 to the left of z = 1.84, namely, 1 − 0.9671 = 0.0329.
(b) The area in Figure 6.9(b) between z = −1.97 and z = 0.86 is equal to the
area to the left of z = 0.86 minus the area to the left of z = −1.97. From
Table A.3 we find the desired area to be 0.8051 − 0.0244 = 0.7807.
Example 6.3: Given a standard normal distribution, find the value of k such that
(a) P(Z > k) = 0.3015 and
(b) P(k < Z < −0.18) = 0.4197.
Figure 6.10: Areas for Example 6.3.
Solution: Distributions and the desired areas are shown in Figure 6.10.
(a) In Figure 6.10(a), we see that the k value leaving an area of 0.3015 to the
right must then leave an area of 0.6985 to the left. From Table A.3 it follows
that k = 0.52.
(b) From Table A.3 we note that the total area to the left of −0.18 is equal to
0.4286. In Figure 6.10(b), we see that the area between k and −0.18 is 0.4197,
so the area to the left of k must be 0.4286 − 0.4197 = 0.0089. Hence, from
Table A.3, we have k = −2.37.
Example 6.4: Given a random variable X having a normal distribution with μ = 50 and σ = 10,
find the probability that X assumes a value between 45 and 62.
Figure 6.11: Area for Example 6.4.
Solution: The z values corresponding to x1 = 45 and x2 = 62 are
z1 = (45 − 50)/10 = −0.5 and z2 = (62 − 50)/10 = 1.2.
Therefore,
P(45 < X < 62) = P(−0.5 < Z < 1.2).
P(−0.5 < Z < 1.2) is shown by the area of the shaded region in Figure 6.11. This area may be found by subtracting the area to the left of the ordinate z = −0.5 from the entire area to the left of z = 1.2. Using Table A.3, we have

P(45 < X < 62) = P(−0.5 < Z < 1.2) = P(Z < 1.2) − P(Z < −0.5) = 0.8849 − 0.3085 = 0.5764.
Example 6.5: Given that X has a normal distribution with μ = 300 and σ = 50, find the
probability that X assumes a value greater than 362.
Solution: The normal probability distribution with the desired area shaded is shown in Figure 6.12. To find P(X > 362), we need to evaluate the area under the normal curve to the right of x = 362. This can be done by transforming x = 362 to the corresponding z value, obtaining the area to the left of z from Table A.3, and then subtracting this area from 1. We find that

z = (362 − 300)/50 = 1.24.

Hence,

P(X > 362) = P(Z > 1.24) = 1 − P(Z < 1.24) = 1 − 0.8925 = 0.1075.
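Examples 6.4 and 6.5 can be verified without standardizing by hand, since `statistics.NormalDist` accepts arbitrary μ and σ; any last-digit differences from the text arise only because Table A.3 rounds each area to four decimals.

```python
from statistics import NormalDist

# Example 6.4: X ~ n(x; 50, 10); P(45 < X < 62).
p1 = NormalDist(mu=50, sigma=10).cdf(62) - NormalDist(mu=50, sigma=10).cdf(45)

# Example 6.5: X ~ n(x; 300, 50); P(X > 362).
p2 = 1 - NormalDist(mu=300, sigma=50).cdf(362)

print(round(p1, 4), round(p2, 4))  # 0.5764 0.1075
```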
Figure 6.12: Area for Example 6.5.
According to Chebyshev’s theorem on page 137, the probability that a random
variable assumes a value within 2 standard deviations of the mean is at least 3/4.
If the random variable has a normal distribution, the z values corresponding to x1 = μ − 2σ and x2 = μ + 2σ are easily computed to be

z1 = [(μ − 2σ) − μ]/σ = −2 and z2 = [(μ + 2σ) − μ]/σ = 2.

Hence,

P(μ − 2σ < X < μ + 2σ) = P(−2 < Z < 2) = P(Z < 2) − P(Z < −2) = 0.9772 − 0.0228 = 0.9544,
which is a much stronger statement than that given by Chebyshev’s theorem.
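A short computation makes the comparison concrete: Chebyshev's bound 1 − 1/k² holds for any distribution, while the normal probability is exact. (The exact two-sided area for k = 2 is 0.9545; the 0.9544 above comes from subtracting four-decimal table entries.)

```python
from statistics import NormalDist

Z = NormalDist()

for k in (2, 3):
    chebyshev_bound = 1 - 1 / k**2        # guaranteed for any distribution
    normal_exact = Z.cdf(k) - Z.cdf(-k)   # exact for the normal
    print(k, chebyshev_bound, round(normal_exact, 4))
```

For k = 2 this prints the bound 0.75 against the normal probability 0.9545; for k = 3, about 0.8889 against 0.9973.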
Using the Normal Curve in Reverse
Sometimes, we are required to find the value of z corresponding to a specified
probability that falls between values listed in Table A.3 (see Example 6.6). For
convenience, we shall always choose the z value corresponding to the tabular prob-
ability that comes closest to the specified probability.
The preceding two examples were solved by going first from a value of x to a z
value and then computing the desired area. In Example 6.6, we reverse the process
and begin with a known area or probability, find the z value, and then determine
x by rearranging the formula

z = (x − μ)/σ to give x = σz + μ.
Example 6.6: Given a normal distribution with μ = 40 and σ = 6, find the value of x that has
(a) 45% of the area to the left and
(b) 14% of the area to the right.
Figure 6.13: Areas for Example 6.6.
Solution: (a) An area of 0.45 to the left of the desired x value is shaded in Figure 6.13(a).
We require a z value that leaves an area of 0.45 to the left. From Table A.3
we find P(Z < −0.13) = 0.45, so the desired z value is −0.13. Hence,
x = (6)(−0.13) + 40 = 39.22.
(b) In Figure 6.13(b), we shade an area equal to 0.14 to the right of the desired
x value. This time we require a z value that leaves 0.14 of the area to the
right and hence an area of 0.86 to the left. Again, from Table A.3, we find
P(Z < 1.08) = 0.86, so the desired z value is 1.08 and
x = (6)(1.08) + 40 = 46.48.
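The same reverse calculation can be done in one step with the inverse cdf. Because `inv_cdf` does not round z to two decimals the way the table does, part (a) comes out 39.25 rather than the table-based 39.22; part (b) agrees at 46.48.

```python
from statistics import NormalDist

dist = NormalDist(mu=40, sigma=6)  # the distribution of Example 6.6

# (a) x with 45% of the area to its left.
x_a = dist.inv_cdf(0.45)
# (b) x with 14% of the area to its right, i.e. 86% to its left.
x_b = dist.inv_cdf(0.86)

print(round(x_a, 2), round(x_b, 2))  # 39.25 46.48
```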
6.4 Applications of the Normal Distribution
Some of the many problems for which the normal distribution is applicable are
treated in the following examples. The use of the normal curve to approximate
binomial probabilities is considered in Section 6.5.
Example 6.7: A certain type of storage battery lasts, on average, 3.0 years with a standard
deviation of 0.5 year. Assuming that battery life is normally distributed, find the
probability that a given battery will last less than 2.3 years.
Solution: First construct a diagram such as Figure 6.14, showing the given distribution of battery lives and the desired area. To find P(X < 2.3), we need to evaluate the area under the normal curve to the left of 2.3. This is accomplished by finding the area to the left of the corresponding z value. Hence, we find that

z = (2.3 − 3)/0.5 = −1.4,

and then, using Table A.3, we have

P(X < 2.3) = P(Z < −1.4) = 0.0808.
Figure 6.14: Area for Example 6.7.

Figure 6.15: Area for Example 6.8.
Example 6.8: An electrical firm manufactures light bulbs that have a life, before burn-out, that
is normally distributed with mean equal to 800 hours and a standard deviation of
40 hours. Find the probability that a bulb burns between 778 and 834 hours.
Solution: The distribution of light bulb life is illustrated in Figure 6.15. The z values corre-
sponding to x1 = 778 and x2 = 834 are
z1 = (778 − 800)/40 = −0.55 and z2 = (834 − 800)/40 = 0.85.

Hence,

P(778 < X < 834) = P(−0.55 < Z < 0.85) = P(Z < 0.85) − P(Z < −0.55) = 0.8023 − 0.2912 = 0.5111.
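Both examples follow the same pattern: standardize, then read an area. In software the standardization is implicit. (The 0.5112 below differs from the table-based 0.5111 only because Table A.3 rounds each cdf value before subtracting.)

```python
from statistics import NormalDist

# Example 6.7: battery life ~ n(x; 3.0, 0.5); P(X < 2.3).
battery = NormalDist(mu=3.0, sigma=0.5)
print(round(battery.cdf(2.3), 4))  # 0.0808

# Example 6.8: bulb life ~ n(x; 800, 40); P(778 < X < 834).
bulb = NormalDist(mu=800, sigma=40)
p = bulb.cdf(834) - bulb.cdf(778)
print(round(p, 4))  # 0.5112
```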
Example 6.9: In an industrial process, the diameter of a ball bearing is an important measure-
ment. The buyer sets specifications for the diameter to be 3.0 ± 0.01 cm. The
implication is that no part falling outside these specifications will be accepted. It
is known that in the process the diameter of a ball bearing has a normal distribu-
tion with mean μ = 3.0 and standard deviation σ = 0.005. On average, how many
manufactured ball bearings will be scrapped?
Solution: The distribution of diameters is illustrated by Figure 6.16. The values correspond-
ing to the specification limits are x1 = 2.99 and x2 = 3.01. The corresponding z
values are

z1 = (2.99 − 3.0)/0.005 = −2.0 and z2 = (3.01 − 3.0)/0.005 = +2.0.

Hence,

P(2.99 < X < 3.01) = P(−2.0 < Z < 2.0).

From Table A.3, P(Z < −2.0) = 0.0228. Due to symmetry of the normal distribution, we find that

P(Z < −2.0) + P(Z > 2.0) = 2(0.0228) = 0.0456.
As a result, it is anticipated that, on average, 4.56% of manufactured ball bearings
will be scrapped.
Figure 6.16: Area for Example 6.9.

Figure 6.17: Specifications for Example 6.10.
Example 6.10: Gauges are used to reject all components for which a certain dimension is not
within the specification 1.50 ± d. It is known that this measurement is normally
distributed with mean 1.50 and standard deviation 0.2. Determine the value d
such that the specifications “cover” 95% of the measurements.
Solution: From Table A.3 we know that
P(−1.96 < Z < 1.96) = 0.95.

Therefore,

1.96 = [(1.50 + d) − 1.50]/0.2,

from which we obtain

d = (0.2)(1.96) = 0.392.
An illustration of the specifications is shown in Figure 6.17.
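The two specification problems are mirror images: Example 6.9 fixes the limits and asks for the area outside them, while Example 6.10 fixes the area and asks for the limits. A sketch of both follows; the exact 0.0455 versus the text's table-based 0.0456 is again a rounding artifact.

```python
from statistics import NormalDist

# Example 6.9: diameters ~ n(x; 3.0, 0.005); fraction outside 3.0 +/- 0.01.
diameter = NormalDist(mu=3.0, sigma=0.005)
scrap = diameter.cdf(2.99) + (1 - diameter.cdf(3.01))
print(round(scrap, 4))  # 0.0455

# Example 6.10: choose d so that 1.50 +/- d covers 95% of measurements.
z_975 = NormalDist().inv_cdf(0.975)  # about 1.96
d = 0.2 * z_975
print(round(d, 3))  # 0.392
```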
Example 6.11: A certain machine makes electrical resistors having a mean resistance of 40 ohms
and a standard deviation of 2 ohms. Assuming that the resistance follows a normal
distribution and can be measured to any degree of accuracy, what percentage of
resistors will have a resistance exceeding 43 ohms?
Solution: A percentage is found by multiplying the relative frequency by 100%. Since the
relative frequency for an interval is equal to the probability of a value falling in the
interval, we must find the area to the right of x = 43 in Figure 6.18. This can be
done by transforming x = 43 to the corresponding z value, obtaining the area to
the left of z from Table A.3, and then subtracting this area from 1. We find
z = (43 − 40)/2 = 1.5.

Therefore,

P(X > 43) = P(Z > 1.5) = 1 − P(Z < 1.5) = 1 − 0.9332 = 0.0668.
Hence, 6.68% of the resistors will have a resistance exceeding 43 ohms.
Figure 6.18: Area for Example 6.11.

Figure 6.19: Area for Example 6.12.
Example 6.12: Find the percentage of resistances exceeding 43 ohms for Example 6.11 if resistance
is measured to the nearest ohm.
Solution: This problem differs from that in Example 6.11 in that we now assign a measure-
ment of 43 ohms to all resistors whose resistances are greater than 42.5 and less
than 43.5. We are actually approximating a discrete distribution by means of a
continuous normal distribution. The required area is the region shaded to the right
of 43.5 in Figure 6.19. We now find that
z = (43.5 − 40)/2 = 1.75.

Hence,

P(X > 43.5) = P(Z > 1.75) = 1 − P(Z < 1.75) = 1 − 0.9599 = 0.0401.
Therefore, 4.01% of the resistances exceed 43 ohms when measured to the nearest
ohm. The difference 6.68% − 4.01% = 2.67% between this answer and that of
Example 6.11 represents all those resistance values greater than 43 and less than
43.5 that are now being recorded as 43 ohms.
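The contrast between Examples 6.11 and 6.12 is just a shift of the cutoff from 43 to 43.5 to account for rounding to the nearest ohm:

```python
from statistics import NormalDist

resistance = NormalDist(mu=40, sigma=2)

# Example 6.11: resistance measured exactly; P(X > 43).
p_exact = 1 - resistance.cdf(43)

# Example 6.12: measured to the nearest ohm, so a recorded value above
# 43 means X > 43.5 (readings in (42.5, 43.5] are recorded as 43).
p_rounded = 1 - resistance.cdf(43.5)

print(round(p_exact, 4), round(p_rounded, 4))  # 0.0668 0.0401
```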
Example 6.13: The average grade for an exam is 74, and the standard deviation is 7. If 12% of
the class is given As, and the grades are curved to follow a normal distribution,
what is the lowest possible A and the highest possible B?
Solution: In this example, we begin with a known area of probability, find the z value, and
then determine x from the formula x = σz + μ. An area of 0.12, corresponding
to the fraction of students receiving As, is shaded in Figure 6.20. We require a z
value that leaves 0.12 of the area to the right and, hence, an area of 0.88 to the
left. From Table A.3, P(Z < 1.18) has the closest value to 0.88, so the desired z
value is 1.18. Hence,
x = (7)(1.18) + 74 = 82.26.
Therefore, the lowest A is 83 and the highest B is 82.
Figure 6.20: Area for Example 6.13.

Figure 6.21: Area for Example 6.14.
Example 6.14: Refer to Example 6.13 and find the sixth decile.
Solution: The sixth decile, written D6, is the x value that leaves 60% of the area to the left,
as shown in Figure 6.21. From Table A.3 we find P(Z < 0.25) ≈ 0.6, so the desired
z value is 0.25. Now x = (7)(0.25) + 74 = 75.75. Hence, D6 = 75.75. That is, 60%
of the grades are 75 or less.
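Both grade calculations are inverse-cdf problems. Using the exact inverse instead of the closest table entry shifts the answers slightly (82.22 versus 82.26, and 75.77 versus 75.75), but the conclusions are the same:

```python
from statistics import NormalDist

grades = NormalDist(mu=74, sigma=7)

# Example 6.13: cutoff with 12% of the area to the right (88% to the left).
lowest_a = grades.inv_cdf(0.88)
# Example 6.14: the sixth decile D6 leaves 60% of the area to the left.
d6 = grades.inv_cdf(0.60)

print(round(lowest_a, 2), round(d6, 2))  # 82.22 75.77
```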
Exercises
6.1 Given a continuous uniform distribution, show that
(a) μ = (A + B)/2 and
(b) σ² = (B − A)²/12.
6.2 Suppose X follows a continuous uniform distribu-
tion from 1 to 5. Determine the conditional probability
P(X > 2.5 | X ≤ 4).
6.3 The daily amount of coffee, in liters, dispensed
by a machine located in an airport lobby is a random
variable X having a continuous uniform distribution
with A = 7 and B = 10. Find the probability that
on a given day the amount of coffee dispensed by this
machine will be
(a) at most 8.8 liters;
(b) more than 7.4 liters but less than 9.5 liters;
(c) at least 8.5 liters.
6.4 A bus arrives every 10 minutes at a bus stop. It
is assumed that the waiting time for a particular indi-
vidual is a random variable with a continuous uniform
distribution.
(a) What is the probability that the individual waits
more than 7 minutes?
(b) What is the probability that the individual waits
between 2 and 7 minutes?
6.5 Given a standard normal distribution, find the
area under the curve that lies
(a) to the left of z = −1.39;
(b) to the right of z = 1.96;
(c) between z = −2.16 and z = −0.65;
(d) to the left of z = 1.43;
(e) to the right of z = −0.89;
(f) between z = −0.48 and z = 1.74.
6.6 Find the value of z if the area under a standard
normal curve
(a) to the right of z is 0.3622;
(b) to the left of z is 0.1131;
(c) between 0 and z, with z > 0, is 0.4838;
(d) between −z and z, with z > 0, is 0.9500.
6.7 Given a standard normal distribution, find the
value of k such that
(a) P(Z > k) = 0.2946;
(b) P(Z < k) = 0.0427;
(c) P(−0.93 < Z < k) = 0.7235.
6.8 Given a normal distribution with μ = 30 and
σ = 6, find
(a) the normal curve area to the right of x = 17;
(b) the normal curve area to the left of x = 22;
(c) the normal curve area between x = 32 and x = 41;
(d) the value of x that has 80% of the normal curve
area to the left;
(e) the two values of x that contain the middle 75% of
the normal curve area.
6.9 Given the normally distributed variable X with
mean 18 and standard deviation 2.5, find
(a) P(X < 15);
(b) the value of k such that P(X < k) = 0.2236;
(c) the value of k such that P(X > k) = 0.1814;
(d) P(17 < X < 21).
6.10 According to Chebyshev’s theorem, the proba-
bility that any random variable assumes a value within
3 standard deviations of the mean is at least 8/9. If it
is known that the probability distribution of a random
variable X is normal with mean μ and variance σ², what is the exact value of P(μ − 3σ < X < μ + 3σ)?
6.11 A soft-drink machine is regulated so that it dis-
charges an average of 200 milliliters per cup. If the
amount of drink is normally distributed with a stan-
dard deviation equal to 15 milliliters,
(a) what fraction of the cups will contain more than
224 milliliters?
(b) what is the probability that a cup contains between
191 and 209 milliliters?
(c) how many cups will probably overflow if 230-
milliliter cups are used for the next 1000 drinks?
(d) below what value do we get the smallest 25% of the
drinks?
6.12 The loaves of rye bread distributed to local
stores by a certain bakery have an average length of 30
centimeters and a standard deviation of 2 centimeters.
Assuming that the lengths are normally distributed,
what percentage of the loaves are
(a) longer than 31.7 centimeters?
(b) between 29.3 and 33.5 centimeters in length?
(c) shorter than 25.5 centimeters?
6.13 A research scientist reports that mice will live an
average of 40 months when their diets are sharply re-
stricted and then enriched with vitamins and proteins.
Assuming that the lifetimes of such mice are normally
distributed with a standard deviation of 6.3 months,
find the probability that a given mouse will live
(a) more than 32 months;
(b) less than 28 months;
(c) between 37 and 49 months.
6.14 The finished inside diameter of a piston ring is
normally distributed with a mean of 10 centimeters and
a standard deviation of 0.03 centimeter.
(a) What proportion of rings will have inside diameters
exceeding 10.075 centimeters?
(b) What is the probability that a piston ring will have
an inside diameter between 9.97 and 10.03 centime-
ters?
(c) Below what value of inside diameter will 15% of the
piston rings fall?
6.15 A lawyer commutes daily from his suburban
home to his midtown office. The average time for a
one-way trip is 24 minutes, with a standard deviation
of 3.8 minutes. Assume the distribution of trip times
to be normally distributed.
(a) What is the probability that a trip will take at least
1/2 hour?
(b) If the office opens at 9:00 A.M. and the lawyer leaves
his house at 8:45 A.M. daily, what percentage of the
time is he late for work?
(c) If he leaves the house at 8:35 A.M. and coffee is
served at the office from 8:50 A.M. until 9:00 A.M.,
what is the probability that he misses coffee?
(d) Find the length of time above which we find the
slowest 15% of the trips.
(e) Find the probability that 2 of the next 3 trips will
take at least 1/2 hour.
6.16 In the November 1990 issue of Chemical Engi-
neering Progress, a study discussed the percent purity
of oxygen from a certain supplier. Assume that the
mean was 99.61 with a standard deviation of 0.08. As-
sume that the distribution of percent purity was ap-
proximately normal.
(a) What percentage of the purity values would you
expect to be between 99.5 and 99.7?
(b) What purity value would you expect to exceed ex-
actly 5% of the population?
6.17 The average life of a certain type of small motor
is 10 years with a standard deviation of 2 years. The
manufacturer replaces free all motors that fail while
under guarantee. If she is willing to replace only 3% of
the motors that fail, how long a guarantee should be
offered? Assume that the lifetime of a motor follows a
normal distribution.
6.18 The heights of 1000 students are normally dis-
tributed with a mean of 174.5 centimeters and a stan-
dard deviation of 6.9 centimeters. Assuming that the
heights are recorded to the nearest half-centimeter,
how many of these students would you expect to have
heights
(a) less than 160.0 centimeters?
(b) between 171.5 and 182.0 centimeters inclusive?
(c) equal to 175.0 centimeters?
(d) greater than or equal to 188.0 centimeters?
6.19 A company pays its employees an average wage
of $15.90 an hour with a standard deviation of $1.50. If
the wages are approximately normally distributed and
paid to the nearest cent,
(a) what percentage of the workers receive wages be-
tween $13.75 and $16.22 an hour inclusive?
(b) the highest 5% of the employee hourly wages is
greater than what amount?
6.20 The weights of a large number of miniature poo-
dles are approximately normally distributed with a
mean of 8 kilograms and a standard deviation of 0.9
kilogram. If measurements are recorded to the nearest
tenth of a kilogram, find the fraction of these poodles
with weights
(a) over 9.5 kilograms;
(b) of at most 8.6 kilograms;
(c) between 7.3 and 9.1 kilograms inclusive.
6.21 The tensile strength of a certain metal compo-
nent is normally distributed with a mean of 10,000 kilo-
grams per square centimeter and a standard deviation
of 100 kilograms per square centimeter. Measurements
are recorded to the nearest 50 kilograms per square
centimeter.
(a) What proportion of these components exceed
10,150 kilograms per square centimeter in tensile
strength?
(b) If specifications require that all components have
tensile strength between 9800 and 10,200 kilograms
per square centimeter inclusive, what proportion of
pieces would we expect to scrap?
6.22 If a set of observations is normally distributed,
what percent of these differ from the mean by
(a) more than 1.3σ?
(b) less than 0.52σ?
6.23 The IQs of 600 applicants to a certain college
are approximately normally distributed with a mean
of 115 and a standard deviation of 12. If the college
requires an IQ of at least 95, how many of these stu-
dents will be rejected on this basis of IQ, regardless of
their other qualifications? Note that IQs are recorded
to the nearest integers.
6.5 Normal Approximation to the Binomial
Probabilities associated with binomial experiments are readily obtainable from the
formula b(x; n, p) of the binomial distribution or from Table A.1 when n is small.
In addition, binomial probabilities are readily available in many computer software
packages. However, it is instructive to learn the relationship between the binomial
and the normal distribution. In Section 5.5, we illustrated how the Poisson dis-
tribution can be used to approximate binomial probabilities when n is quite large
and p is very close to 0 or 1. Both the binomial and the Poisson distributions
188 Chapter 6 Some Continuous Probability Distributions
are discrete. The first application of a continuous probability distribution to ap-
proximate probabilities over a discrete sample space was demonstrated in Example
6.12, where the normal curve was used. The normal distribution is often a good
approximation to a discrete distribution when the latter takes on a symmetric bell
shape. From a theoretical point of view, some distributions converge to the normal
as their parameters approach certain limits. The normal distribution is a conve-
nient approximating distribution because the cumulative distribution function is
so easily tabled. The binomial distribution is nicely approximated by the normal
in practical problems when one works with the cumulative distribution function.
We now state a theorem that allows us to use areas under the normal curve to
approximate binomial properties when n is sufficiently large.
Theorem 6.3: If X is a binomial random variable with mean μ = np and variance σ² = npq, then the limiting form of the distribution of

Z = (X − np)/√(npq),

as n → ∞, is the standard normal distribution n(z; 0, 1).
It turns out that the normal distribution with μ = np and σ² = np(1 − p) not
only provides a very accurate approximation to the binomial distribution when
n is large and p is not extremely close to 0 or 1 but also provides a fairly good
approximation even when n is small and p is reasonably close to 1/2.
To illustrate the normal approximation to the binomial distribution, we first
draw the histogram for b(x; 15, 0.4) and then superimpose the particular normal
curve having the same mean and variance as the binomial variable X. Hence, we
draw a normal curve with
μ = np = (15)(0.4) = 6 and σ² = npq = (15)(0.4)(0.6) = 3.6.
The histogram of b(x; 15, 0.4) and the corresponding superimposed normal curve,
which is completely determined by its mean and variance, are illustrated in Figure
6.22.
Figure 6.22: Normal approximation of b(x; 15, 0.4).
The exact probability that the binomial random variable X assumes a given
value x is equal to the area of the bar whose base is centered at x. For example, the
exact probability that X assumes the value 4 is equal to the area of the rectangle
with base centered at x = 4. Using Table A.1, we find this area to be
P(X = 4) = b(4; 15, 0.4) = 0.1268,
which is approximately equal to the area of the shaded region under the normal
curve between the two ordinates x1 = 3.5 and x2 = 4.5 in Figure 6.23. Converting
to z values, we have
z1 = (3.5 − 6)/1.897 = −1.32 and z2 = (4.5 − 6)/1.897 = −0.79.
Figure 6.23: Normal approximation of b(x; 15, 0.4) and Σ_{x=7}^{9} b(x; 15, 0.4).
If X is a binomial random variable and Z a standard normal variable, then
P(X = 4) = b(4; 15, 0.4) ≈ P(−1.32 < Z < −0.79)
= P(Z < −0.79) − P(Z < −1.32) = 0.2148 − 0.0934 = 0.1214.
This agrees very closely with the exact value of 0.1268.
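This comparison is easy to reproduce numerically. The short Python sketch below is ours, not part of the text; the `phi` helper stands in for Table A.3 by computing the standard normal CDF from the error function:

```python
from math import comb, erf, sqrt

def phi(z):
    """Standard normal CDF, playing the role of Table A.3."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, p = 15, 0.4
mu, sigma = n * p, sqrt(n * p * (1 - p))   # 6 and 1.897...

# Exact binomial probability P(X = 4) = b(4; 15, 0.4)
exact = comb(n, 4) * p**4 * (1 - p)**(n - 4)

# Normal approximation: area between the ordinates 3.5 and 4.5
approx = phi((4.5 - mu) / sigma) - phi((3.5 - mu) / sigma)

print(round(exact, 4), round(approx, 4))
```

The approximation printed here (about 0.121) differs slightly from the 0.1214 in the text because the text rounds the z values to two decimals before consulting the table.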
The normal approximation is most useful in calculating binomial sums for large
values of n. Referring to Figure 6.23, we might be interested in the probability
that X assumes a value from 7 to 9 inclusive. The exact probability is given by
P(7 ≤ X ≤ 9) = Σ_{x=0}^{9} b(x; 15, 0.4) − Σ_{x=0}^{6} b(x; 15, 0.4) = 0.9662 − 0.6098 = 0.3564,
which is equal to the sum of the areas of the rectangles with bases centered at
x = 7, 8, and 9. For the normal approximation, we find the area of the shaded
region under the curve between the ordinates x1 = 6.5 and x2 = 9.5 in Figure 6.23.
The corresponding z values are
z1 = (6.5 − 6)/1.897 = 0.26 and z2 = (9.5 − 6)/1.897 = 1.85.
Now,
P(7 ≤ X ≤ 9) ≈ P(0.26 < Z < 1.85) = P(Z < 1.85) − P(Z < 0.26)
= 0.9678 − 0.6026 = 0.3652.
Once again, the normal curve approximation provides a value that agrees very
closely with the exact value of 0.3564. The degree of accuracy, which depends on
how well the curve fits the histogram, will increase as n increases. This is particu-
larly true when p is not very close to 1/2 and the histogram is no longer symmetric.
Figures 6.24 and 6.25 show the histograms for b(x; 6, 0.2) and b(x; 15, 0.2), respec-
tively. It is evident that a normal curve would fit the histogram considerably better
when n = 15 than when n = 6.
Figure 6.24: Histogram for b(x; 6, 0.2).

Figure 6.25: Histogram for b(x; 15, 0.2).
In our illustrations of the normal approximation to the binomial, it became
apparent that if we seek the area under the normal curve to the left of, say, x,
it is more accurate to use x + 0.5. This is a correction to accommodate the fact
that a discrete distribution is being approximated by a continuous distribution.
The correction +0.5 is called a continuity correction. The foregoing discussion
leads to the following formal normal approximation to the binomial.
Normal Approximation to the Binomial Distribution:
Let X be a binomial random variable with parameters n and p. For large n, X has approximately a normal distribution with μ = np and σ² = npq = np(1 − p), and

P(X ≤ x) = Σ_{k=0}^{x} b(k; n, p)
≈ area under normal curve to the left of x + 0.5
= P(Z ≤ (x + 0.5 − np)/√(npq)),
and the approximation will be good if np and n(1− p) are greater than or equal
to 5.
As we indicated earlier, the quality of the approximation is quite good for large
n. If p is close to 1/2, a moderate or small sample size will be sufficient for a
reasonable approximation. We offer Table 6.1 as an indication of the quality of the
approximation. Both the normal approximation and the true binomial cumulative
probabilities are given. Notice that at p = 0.05 and p = 0.10, the approximation
is fairly crude for n = 10. However, even for n = 10, note the improvement for
p = 0.50. On the other hand, when p is fixed at p = 0.05, note the improvement
of the approximation as we go from n = 20 to n = 100.
Table 6.1: Normal Approximation and True Cumulative Binomial Probabilities
p = 0.05, n = 10 p = 0.10, n = 10 p = 0.50, n = 10
r Binomial Normal Binomial Normal Binomial Normal
0 0.5987 0.5000 0.3487 0.2981 0.0010 0.0022
1 0.9139 0.9265 0.7361 0.7019 0.0107 0.0136
2 0.9885 0.9981 0.9298 0.9429 0.0547 0.0571
3 0.9990 1.0000 0.9872 0.9959 0.1719 0.1711
4 1.0000 1.0000 0.9984 0.9999 0.3770 0.3745
5 1.0000 1.0000 0.6230 0.6255
6 0.8281 0.8289
7 0.9453 0.9429
8 0.9893 0.9864
9 0.9990 0.9978
10 1.0000 0.9997
p = 0.05
n = 20 n = 50 n = 100
r Binomial Normal Binomial Normal Binomial Normal
0 0.3585 0.3015 0.0769 0.0968 0.0059 0.0197
1 0.7358 0.6985 0.2794 0.2578 0.0371 0.0537
2 0.9245 0.9382 0.5405 0.5000 0.1183 0.1251
3 0.9841 0.9948 0.7604 0.7422 0.2578 0.2451
4 0.9974 0.9998 0.8964 0.9032 0.4360 0.4090
5 0.9997 1.0000 0.9622 0.9744 0.6160 0.5910
6 1.0000 1.0000 0.9882 0.9953 0.7660 0.7549
7 0.9968 0.9994 0.8720 0.8749
8 0.9992 0.9999 0.9369 0.9463
9 0.9998 1.0000 0.9718 0.9803
10 1.0000 1.0000 0.9885 0.9941
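The entries of Table 6.1 can be regenerated with a few lines of Python (our own sketch, not from the text); shown here for the p = 0.50, n = 10 column:

```python
from math import comb, erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def binom_cdf(r, n, p):
    """True cumulative binomial probability P(X <= r)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

def normal_approx_cdf(r, n, p):
    """Continuity-corrected normal approximation to P(X <= r)."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return phi((r + 0.5 - mu) / sigma)

# A few rows of the p = 0.50, n = 10 column of Table 6.1
for r in (3, 4, 5):
    print(r, round(binom_cdf(r, 10, 0.5), 4), round(normal_approx_cdf(r, 10, 0.5), 4))
```

The computed normal values may differ from the table in the last digit or two, since the table rounds z to two decimals before lookup.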
Example 6.15: The probability that a patient recovers from a rare blood disease is 0.4. If 100
people are known to have contracted this disease, what is the probability that
fewer than 30 survive?
Solution: Let the binomial variable X represent the number of patients who survive. Since
n = 100, we should obtain fairly accurate results using the normal-curve approxi-
mation with
μ = np = (100)(0.4) = 40 and σ = √(npq) = √((100)(0.4)(0.6)) = 4.899.
To obtain the desired probability, we have to find the area to the left of x = 29.5.
The z value corresponding to 29.5 is
z = (29.5 − 40)/4.899 = −2.14,
and the probability of fewer than 30 of the 100 patients surviving is given by the
shaded region in Figure 6.26. Hence,
P(X < 30) ≈ P(Z < −2.14) = 0.0162.
Figure 6.26: Area for Example 6.15.

Figure 6.27: Area for Example 6.16.
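Example 6.15 can be checked in a couple of lines; the sketch below is ours, with `phi` standing in for the normal table:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, p = 100, 0.4
mu, sigma = n * p, sqrt(n * p * (1 - p))   # 40 and 4.899...

# P(X < 30) with continuity correction: area to the left of x = 29.5
prob = phi((29.5 - mu) / sigma)
print(round(prob, 4))
```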
Example 6.16: A multiple-choice quiz has 200 questions, each with 4 possible answers of which
only 1 is correct. What is the probability that sheer guesswork yields from 25 to
30 correct answers for the 80 of the 200 problems about which the student has no
knowledge?
Solution: The probability of guessing a correct answer for each of the 80 questions is p = 1/4.
If X represents the number of correct answers resulting from guesswork, then
P(25 ≤ X ≤ 30) = Σ_{x=25}^{30} b(x; 80, 1/4).
Using the normal curve approximation with
μ = np = (80)(1/4) = 20 and σ = √(npq) = √((80)(1/4)(3/4)) = 3.873,
we need the area between x1 = 24.5 and x2 = 30.5. The corresponding z values
are
z1 = (24.5 − 20)/3.873 = 1.16 and z2 = (30.5 − 20)/3.873 = 2.71.
The probability of correctly guessing from 25 to 30 questions is given by the shaded
region in Figure 6.27. From Table A.3 we find that
P(25 ≤ X ≤ 30) = Σ_{x=25}^{30} b(x; 80, 0.25) ≈ P(1.16 < Z < 2.71)
= P(Z < 2.71) − P(Z < 1.16) = 0.9966 − 0.8770 = 0.1196.
Exercises
6.24 A coin is tossed 400 times. Use the normal curve
approximation to find the probability of obtaining
(a) between 185 and 210 heads inclusive;
(b) exactly 205 heads;
(c) fewer than 176 or more than 227 heads.
6.25 A process for manufacturing an electronic com-
ponent yields items of which 1% are defective. A qual-
ity control plan is to select 100 items from the process,
and if none are defective, the process continues. Use
the normal approximation to the binomial to find
(a) the probability that the process continues given the
sampling plan described;
(b) the probability that the process continues even if
the process has gone bad (i.e., if the frequency
of defective components has shifted to 5.0% defec-
tive).
6.26 A process yields 10% defective items. If 100
items are randomly selected from the process, what
is the probability that the number of defectives
(a) exceeds 13?
(b) is less than 8?
6.27 The probability that a patient recovers from a
delicate heart operation is 0.9. Of the next 100 patients
having this operation, what is the probability that
(a) between 84 and 95 inclusive survive?
(b) fewer than 86 survive?
6.28 Researchers at George Washington University
and the National Institutes of Health claim that ap-
proximately 75% of people believe “tranquilizers work
very well to make a person more calm and relaxed.” Of
the next 80 people interviewed, what is the probability
that
(a) at least 50 are of this opinion?
(b) at most 56 are of this opinion?
6.29 If 20% of the residents in a U.S. city prefer a
white telephone over any other color available, what is
the probability that among the next 1000 telephones
installed in that city
(a) between 170 and 185 inclusive will be white?
(b) at least 210 but not more than 225 will be white?
6.30 A drug manufacturer claims that a certain drug
cures a blood disease, on the average, 80% of the time.
To check the claim, government testers use the drug on
a sample of 100 individuals and decide to accept the
claim if 75 or more are cured.
(a) What is the probability that the claim will be re-
jected when the cure probability is, in fact, 0.8?
(b) What is the probability that the claim will be ac-
cepted by the government when the cure probabil-
ity is as low as 0.7?
6.31 One-sixth of the male freshmen entering a large
state school are out-of-state students. If the students
are assigned at random to dormitories, 180 to a build-
ing, what is the probability that in a given dormitory
at least one-fifth of the students are from out of state?
6.32 A pharmaceutical company knows that approx-
imately 5% of its birth-control pills have an ingredient
that is below the minimum strength, thus rendering
the pill ineffective. What is the probability that fewer
than 10 in a sample of 200 pills will be ineffective?
6.33 Statistics released by the National Highway
Traffic Safety Administration and the National Safety
Council show that on an average weekend night, 1 out
of every 10 drivers on the road is drunk. If 400 drivers
are randomly checked next Saturday night, what is the
probability that the number of drunk drivers will be
(a) less than 32?
(b) more than 49?
(c) at least 35 but less than 47?
6.34 A pair of dice is rolled 180 times. What is the
probability that a total of 7 occurs
(a) at least 25 times?
(b) between 33 and 41 times inclusive?
(c) exactly 30 times?
6.35 A company produces component parts for an en-
gine. Parts specifications suggest that 95% of items
meet specifications. The parts are shipped to cus-
tomers in lots of 100.
(a) What is the probability that more than 2 items in
a given lot will be defective?
(b) What is the probability that more than 10 items in
a lot will be defective?
6.36 A common practice of airline companies is to
sell more tickets for a particular flight than there are
seats on the plane, because customers who buy tickets
do not always show up for the flight. Suppose that
the percentage of no-shows at flight time is 2%. For
a particular flight with 197 seats, a total of 200 tickets were sold. What is the probability that the airline
overbooked this flight?
6.37 The serum cholesterol level X in 14-year-old
boys has approximately a normal distribution with
mean 170 and standard deviation 30.
(a) Find the probability that the serum cholesterol
level of a randomly chosen 14-year-old boy exceeds
230.
(b) In a middle school there are 300 14-year-old boys.
Find the probability that at least 8 boys have a
serum cholesterol level that exceeds 230.
6.38 A telemarketing company has a special letter-
opening machine that opens and removes the contents
of an envelope. If the envelope is fed improperly into
the machine, the contents of the envelope may not be
removed or may be damaged. In this case, the machine
is said to have “failed.”
(a) If the machine has a probability of failure of 0.01,
what is the probability of more than 1 failure oc-
curring in a batch of 20 envelopes?
(b) If the probability of failure of the machine is 0.01
and a batch of 500 envelopes is to be opened, what
is the probability that more than 8 failures will
occur?
6.6 Gamma and Exponential Distributions
Although the normal distribution can be used to solve many problems in engineer-
ing and science, there are still numerous situations that require different types of
density functions. Two such density functions, the gamma and exponential
distributions, are discussed in this section.
It turns out that the exponential distribution is a special case of the gamma dis-
tribution. Both find a large number of applications. The exponential and gamma
distributions play an important role in both queuing theory and reliability prob-
lems. Time between arrivals at service facilities and time to failure of component
parts and electrical systems often are nicely modeled by the exponential distribu-
tion. The relationship between the gamma and the exponential allows the gamma
to be used in similar types of problems. More details and illustrations will be
supplied later in the section.
The gamma distribution derives its name from the well-known gamma func-
tion, studied in many areas of mathematics. Before we proceed to the gamma
distribution, let us review this function and some of its important properties.
Definition 6.2: The gamma function is defined by

Γ(α) = ∫_0^∞ x^(α−1) e^(−x) dx, for α > 0.
The following are a few simple properties of the gamma function.
(a) Γ(n) = (n − 1)(n − 2) · · · (1)Γ(1), for a positive integer n.
To see the proof, integrating by parts with u = x^(α−1) and dv = e^(−x) dx, we obtain

Γ(α) = [−e^(−x) x^(α−1)]_0^∞ + ∫_0^∞ e^(−x) (α − 1)x^(α−2) dx = (α − 1) ∫_0^∞ x^(α−2) e^(−x) dx,

for α > 1, which yields the recursion formula

Γ(α) = (α − 1)Γ(α − 1).
The result follows after repeated application of the recursion formula. Using this
result, we can easily show the following two properties.
(b) Γ(n) = (n − 1)! for a positive integer n.
(c) Γ(1) = 1.
Furthermore, we have the following property of Γ(α), which is left for the reader
to verify (see Exercise 6.39 on page 206).
(d) Γ(1/2) = √π.
The following is the definition of the gamma distribution.
Gamma Distribution:
The continuous random variable X has a gamma distribution, with parameters α and β, if its density function is given by

f(x; α, β) = (1/(β^α Γ(α))) x^(α−1) e^(−x/β) for x > 0, and 0 elsewhere,

where α > 0 and β > 0.
Graphs of several gamma distributions are shown in Figure 6.28 for certain
specified values of the parameters α and β. The special gamma distribution for
which α = 1 is called the exponential distribution.
Figure 6.28: Gamma distributions (α = 1, 2, 4; β = 1).
Exponential Distribution:
The continuous random variable X has an exponential distribution, with parameter β, if its density function is given by

f(x; β) = (1/β) e^(−x/β) for x > 0, and 0 elsewhere,

where β > 0.
The following theorem and corollary give the mean and variance of the gamma and
exponential distributions.
Theorem 6.4: The mean and variance of the gamma distribution are

μ = αβ and σ² = αβ².
The proof of this theorem is found in Appendix A.26.
Corollary 6.1: The mean and variance of the exponential distribution are

μ = β and σ² = β².
Relationship to the Poisson Process
We shall pursue applications of the exponential distribution and then return to the
gamma distribution. The most important applications of the exponential distribu-
tion are situations where the Poisson process applies (see Section 5.5). The reader
should recall that the Poisson process allows for the use of the discrete distribu-
tion called the Poisson distribution. Recall that the Poisson distribution is used to
compute the probability of specific numbers of “events” during a particular period
of time or span of space. In many applications, the time period or span of space
is the random variable. For example, an industrial engineer may be interested in
modeling the time T between arrivals at a congested intersection during rush hour
in a large city. An arrival represents the Poisson event.
The relationship between the exponential distribution (often called the negative
exponential) and the Poisson process is quite simple. In Chapter 5, the Poisson
distribution was developed as a single-parameter distribution with parameter λ,
where λ may be interpreted as the mean number of events per unit “time.” Con-
sider now the random variable described by the time required for the first event
to occur. Using the Poisson distribution, we find that the probability of no events
occurring in the span up to time t is given by
p(0; λt) = e^(−λt)(λt)^0/0! = e^(−λt).
We can now make use of the above and let X be the time to the first Poisson
event. The probability that the length of time until the first event will exceed x is
the same as the probability that no Poisson events will occur in x. The latter, of
course, is given by e^(−λx). As a result,

P(X > x) = e^(−λx).

Thus, the cumulative distribution function for X is given by

P(0 ≤ X ≤ x) = 1 − e^(−λx).
Now, in order that we may recognize the presence of the exponential distribution,
we differentiate the cumulative distribution function above to obtain the density
function
f(x) = λe^(−λx),
which is the density function of the exponential distribution with λ = 1/β.
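The differentiation step can be confirmed numerically: a central-difference derivative of the cumulative distribution function should match λe^(−λx). The rate λ = 5 below is an arbitrary choice for illustration:

```python
from math import exp

lam = 5.0                                  # illustrative event rate (assumed)

F = lambda x: 1.0 - exp(-lam * x)          # CDF derived from the Poisson process
f = lambda x: lam * exp(-lam * x)          # claimed exponential density, λ = 1/β

# Central-difference derivative of F should reproduce f
h = 1e-6
for x in (0.05, 0.2, 1.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - f(x)) < 1e-4
print("dF/dx = λe^(-λx) verified numerically")
```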
Applications of the Exponential and Gamma Distributions
In the foregoing, we provided the foundation for the application of the exponential
distribution in “time to arrival” or time to Poisson event problems. We will illus-
trate some applications here and then proceed to discuss the role of the gamma
distribution in these modeling applications. Notice that the mean of the exponen-
tial distribution is the parameter β, the reciprocal of the parameter in the Poisson
distribution. The reader should recall that it is often said that the Poisson distri-
bution has no memory, implying that occurrences in successive time periods are
independent. The important parameter β is the mean time between events. In
reliability theory, where equipment failure often conforms to this Poisson process,
β is called mean time between failures. Many equipment breakdowns do follow
the Poisson process, and thus the exponential distribution does apply. Other ap-
plications include survival times in biomedical experiments and computer response
time.
In the following example, we show a simple application of the exponential dis-
tribution to a problem in reliability. The binomial distribution also plays a role in
the solution.
Example 6.17: Suppose that a system contains a certain type of component whose time, in years,
to failure is given by T. The random variable T is modeled nicely by the exponential
distribution with mean time to failure β = 5. If 5 of these components are installed
in different systems, what is the probability that at least 2 are still functioning at
the end of 8 years?
Solution: The probability that a given component is still functioning after 8 years is given
by
P(T > 8) = (1/5) ∫_8^∞ e^(−t/5) dt = e^(−8/5) ≈ 0.2.
Let X represent the number of components functioning after 8 years. Then using
the binomial distribution, we have
P(X ≥ 2) = Σ_{x=2}^{5} b(x; 5, 0.2) = 1 − Σ_{x=0}^{1} b(x; 5, 0.2) = 1 − 0.7373 = 0.2627.
There are exercises and examples in Chapter 3 where the reader has already
encountered the exponential distribution. Others involving waiting time and reli-
ability include Example 6.24 and some of the exercises and review exercises at the
end of this chapter.
The Memoryless Property and Its Effect on the Exponential Distribution
The types of applications of the exponential distribution in reliability and compo-
nent or machine lifetime problems are influenced by the memoryless (or lack-of-
memory) property of the exponential distribution. For example, in the case of,
say, an electronic component where lifetime has an exponential distribution, the
probability that the component lasts, say, t hours, that is, P(X ≥ t), is the same
as the conditional probability
P(X ≥ t0 + t | X ≥ t0).
So if the component “makes it” to t0 hours, the probability of lasting an additional
t hours is the same as the probability of lasting t hours. There is no “punish-
ment” through wear that may have ensued for lasting the first t0 hours. Thus,
the exponential distribution is more appropriate when the memoryless property is
justified. But if the failure of the component is a result of gradual or slow wear (as
in mechanical wear), then the exponential does not apply and either the gamma
or the Weibull distribution (Section 6.10) may be more appropriate.
The importance of the gamma distribution lies in the fact that it defines a
family of which other distributions are special cases. But the gamma itself has
important applications in waiting time and reliability theory. Whereas the expo-
nential distribution describes the time until the occurrence of a Poisson event (or
the time between Poisson events), the time (or space) occurring until a specified
number of Poisson events occur is a random variable whose density function is
described by the gamma distribution. This specific number of events is the param-
eter α in the gamma density function. Thus, it becomes easy to understand that
when α = 1, the special case of the exponential distribution occurs. The gamma
density can be developed from its relationship to the Poisson process in much the
same manner as we developed the exponential density. The details are left to the
reader. The following is a numerical example of the use of the gamma distribution
in a waiting-time application.
Example 6.18: Suppose that telephone calls arriving at a particular switchboard follow a Poisson
process with an average of 5 calls coming per minute. What is the probability that
up to a minute will elapse by the time 2 calls have come in to the switchboard?
Solution: The Poisson process applies, with time until 2 Poisson events following a gamma
distribution with β = 1/5 and α = 2. Denote by X the time in minutes that
transpires before 2 calls come. The required probability is given by
P(X ≤ 1) = ∫_0^1 (1/β²) x e^(−x/β) dx = 25 ∫_0^1 x e^(−5x) dx = 1 − e^(−5)(1 + 5) = 0.96.
While the origin of the gamma distribution deals in time (or space) until the
occurrence of α Poisson events, there are many instances where a gamma distri-
bution works very well even though there is no clear Poisson structure. This is
particularly true for survival time problems in both engineering and biomedical
applications.
Example 6.19: In a biomedical study with rats, a dose-response investigation is used to determine
the effect of the dose of a toxicant on their survival time. The toxicant is one that
is frequently discharged into the atmosphere from jet fuel. For a certain dose of
the toxicant, the study determines that the survival time, in weeks, has a gamma
distribution with α = 5 and β = 10. What is the probability that a rat survives
no longer than 60 weeks?
Solution: Let the random variable X be the survival time (time to death). The required
probability is
P(X ≤ 60) = (1/β⁵) ∫_0^60 x^(α−1) e^(−x/β)/Γ(5) dx.
The integral above can be solved through the use of the incomplete gamma
function, which becomes the cumulative distribution function for the gamma dis-
tribution. This function is written as
F(x; α) = ∫_0^x y^(α−1) e^(−y)/Γ(α) dy.
If we let y = x/β, so x = βy, we have
P(X ≤ 60) = ∫_0^6 y⁴ e^(−y)/Γ(5) dy,
which is denoted as F(6; 5) in the table of the incomplete gamma function in
Appendix A.23. Note that this allows a quick computation of probabilities for the
gamma distribution. Indeed, for this problem, the probability that the rat survives no longer than 60 weeks is given by

P(X ≤ 60) = F(6; 5) = 0.715.
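For integer α, the incomplete gamma function has a simple closed form (the Erlang CDF), which lets us check F(6; 5) without Table A.23; the sketch below is ours:

```python
from math import exp, factorial

def gamma_cdf_int(x, alpha):
    """F(x; α) for a positive integer α (standardized gamma, β = 1):
    F(x; α) = 1 − e^(−x) Σ_{k=0}^{α−1} x^k / k!  (Erlang CDF)."""
    return 1.0 - exp(-x) * sum(x**k / factorial(k) for k in range(alpha))

# Example 6.19: P(X <= 60) for gamma(α = 5, β = 10) is F(60/10; 5) = F(6; 5)
print(round(gamma_cdf_int(6.0, 5), 3))    # → 0.715
```

The same function reproduces the value F(5; 2) = 0.96 used in Example 6.20 below.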
Example 6.20: It is known, from previous data, that the length of time in months between cus-
tomer complaints about a certain product is a gamma distribution with α = 2
and β = 4. Changes were made to tighten quality control requirements. Following
these changes, 20 months passed before the first complaint. Does it appear as if
the quality control tightening was effective?
Solution: Let X be the time to the first complaint, which, under conditions prior to the
changes, followed a gamma distribution with α = 2 and β = 4. The question
centers around how rare X ≥ 20 is, given that α and β remain at values 2 and 4,
respectively. In other words, under the prior conditions is a “time to complaint”
as large as 20 months reasonable? Thus, following the solution to Example 6.19,
P(X ≥ 20) = 1 − (1/β^α) ∫_0^20 x^(α−1) e^(−x/β)/Γ(α) dx.
Again, using y = x/β, we have
P(X ≥ 20) = 1 − ∫_0^5 y e^(−y)/Γ(2) dy = 1 − F(5; 2) = 1 − 0.96 = 0.04,

where F(5; 2) = 0.96 is found from Table A.23.
As a result, we could conclude that an observed time to complaint as large as 20 months is unlikely under the gamma distribution with α = 2 and β = 4. Thus, it is reasonable to conclude that the quality control work was effective.
Example 6.21: Consider Exercise 3.31 on page 94. Based on extensive testing, it is determined
that the time Y in years before a major repair is required for a certain washing
machine is characterized by the density function
f(y) = (1/4) e^(−y/4) for y ≥ 0, and 0 elsewhere.
Note that Y is an exponential random variable with μ = 4 years. The machine is
considered a bargain if it is unlikely to require a major repair before the sixth year.
What is the probability P(Y > 6)? What is the probability that a major repair is
required in the first year?
Solution: Consider the cumulative distribution function F(y) for the exponential distribution,
F(y) = (1/β) ∫_0^y e^(−t/β) dt = 1 − e^(−y/β).

Then

P(Y > 6) = 1 − F(6) = e^(−3/2) = 0.2231.
Thus, the probability that the washing machine will require major repair after year
six is 0.223. Of course, it will require repair before year six with probability 0.777.
Thus, one might conclude the machine is not really a bargain. The probability
that a major repair is necessary in the first year is
P(Y < 1) = 1 − e^(−1/4) = 1 − 0.779 = 0.221.
6.7 Chi-Squared Distribution
Another very important special case of the gamma distribution is obtained by
letting α = v/2 and β = 2, where v is a positive integer. The result is called the
chi-squared distribution. The distribution has a single parameter, v, called the
degrees of freedom.
Chi-Squared Distribution:
The continuous random variable X has a chi-squared distribution, with v degrees of freedom, if its density function is given by

f(x; v) = (1/(2^(v/2) Γ(v/2))) x^(v/2−1) e^(−x/2) for x > 0, and 0 elsewhere,

where v is a positive integer.
The chi-squared distribution plays a vital role in statistical inference. It has
considerable applications in both methodology and theory. While we do not discuss
applications in detail in this chapter, it is important to understand that Chapters
8, 9, and 16 contain important applications. The chi-squared distribution is an
important component of statistical hypothesis testing and estimation.
Topics dealing with sampling distributions, analysis of variance, and nonpara-
metric statistics involve extensive use of the chi-squared distribution.
Theorem 6.5: The mean and variance of the chi-squared distribution are

μ = v and σ² = 2v.
6.8 Beta Distribution
An extension to the uniform distribution is a beta distribution. Let us start by
defining a beta function.
Definition 6.3: A beta function is defined by

B(α, β) = ∫_0^1 x^(α−1) (1 − x)^(β−1) dx = Γ(α)Γ(β)/Γ(α + β), for α, β > 0,

where Γ(α) is the gamma function.
Beta Distribution:
The continuous random variable X has a beta distribution with parameters α > 0 and β > 0 if its density function is given by

f(x) = (1/B(α, β)) x^(α−1) (1 − x)^(β−1) for 0 < x < 1, and 0 elsewhere.
Note that the uniform distribution on (0, 1) is a beta distribution with parameters
α = 1 and β = 1.
Theorem 6.6: The mean and variance of a beta distribution with parameters α and β are

μ = α/(α + β) and σ² = αβ/[(α + β)²(α + β + 1)],

respectively.
For the uniform distribution on (0, 1), the mean and variance are

μ = 1/(1 + 1) = 1/2 and σ² = (1)(1)/[(1 + 1)²(1 + 1 + 1)] = 1/12,

respectively.
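The identity B(α, β) = Γ(α)Γ(β)/Γ(α + β) and the uniform special case can be verified numerically; the sketch below (ours) compares the gamma-function form against a direct midpoint-rule integration:

```python
from math import gamma

def beta_fn(a, b):
    """B(α, β) via the gamma-function identity."""
    return gamma(a) * gamma(b) / gamma(a + b)

def beta_integral(a, b, n=100000):
    """B(α, β) by midpoint-rule integration of x^(α-1)(1-x)^(β-1) on (0, 1)."""
    h = 1.0 / n
    return sum(((i + 0.5) * h)**(a - 1) * (1 - (i + 0.5) * h)**(b - 1)
               for i in range(n)) * h

for a, b in ((2.0, 3.0), (1.0, 1.0), (4.0, 2.0)):
    assert abs(beta_fn(a, b) - beta_integral(a, b)) < 1e-4

# Theorem 6.6 in the uniform case α = β = 1: mean 1/2, variance 1/12
a = b = 1.0
assert a / (a + b) == 0.5
assert a * b / ((a + b)**2 * (a + b + 1)) == 1.0 / 12.0
print("beta function identity and uniform special case verified")
```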
6.9 Lognormal Distribution
The lognormal distribution is used for a wide variety of applications. The dis-
tribution applies in cases where a natural log transformation results in a normal
distribution.
Lognormal Distribution:
The continuous random variable X has a lognormal distribution if the random variable Y = ln(X) has a normal distribution with mean μ and standard deviation σ. The resulting density function of X is

f(x; μ, σ) = (1/(√(2π)σx)) e^(−[ln(x)−μ]²/(2σ²)) for x ≥ 0, and 0 for x < 0.
Figure 6.29: Lognormal distributions (μ = 0, σ = 1 and μ = 1, σ = 1).
The graphs of the lognormal distributions are illustrated in Figure 6.29.
Theorem 6.7: The mean and variance of the lognormal distribution are

μ = e^(μ+σ²/2) and σ² = e^(2μ+σ²)(e^(σ²) − 1).
The cumulative distribution function is quite simple due to its relationship to the
normal distribution. The use of the distribution function is illustrated by the
following example.
Example 6.22: Concentrations of pollutants produced by chemical plants historically are known to
exhibit behavior that resembles a lognormal distribution. This is important when
one considers issues regarding compliance with government regulations. Suppose
it is assumed that the concentration of a certain pollutant, in parts per million,
has a lognormal distribution with parameters μ = 3.2 and σ = 1. What is the
probability that the concentration exceeds 8 parts per million?
Solution: Let the random variable X be pollutant concentration. Then
P(X > 8) = 1 − P(X ≤ 8).
Since ln(X) has a normal distribution with mean μ = 3.2 and standard deviation
σ = 1,
P(X ≤ 8) = Φ((ln(8) − 3.2)/1) = Φ(−1.12) = 0.1314.
Here, we use Φ to denote the cumulative distribution function of the standard
normal distribution. As a result, the probability that the pollutant concentration
exceeds 8 parts per million is 0.1314.
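The computation above can be reproduced with the standard normal cdf written in terms of the error function, Φ(z) = [1 + erf(z/√2)]/2 (a minimal sketch using only the Python standard library; small differences from the text arise because Table A.3 rounds z to two decimal places):

```python
from math import erf, log, sqrt

def std_normal_cdf(z):
    # Phi(z) expressed through the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 3.2, 1.0
p_le_8 = std_normal_cdf((log(8) - mu) / sigma)   # P(X <= 8)
print(round(p_le_8, 4))       # about 0.1312; Table A.3 rounds z to -1.12 and gives 0.1314
print(round(1 - p_le_8, 4))   # P(X > 8), about 0.8688
```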
Example 6.23: The life, in thousands of miles, of a certain type of electronic control for locomotives
has an approximately lognormal distribution with μ = 5.149 and σ = 0.737. Find
the 5th percentile of the life of such an electronic control.
Solution: From Table A.3, we know that P(Z < −1.645) = 0.05. Denote by X the life of such an electronic control. Since ln(X) has a normal distribution with mean μ = 5.149 and σ = 0.737, the 5th percentile of X can be calculated as

    ln(x) = 5.149 + (0.737)(−1.645) = 3.937.
Hence, x = 51.265. This means that only 5% of the controls will have lifetimes less
than 51,265 miles.
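The same percentile calculation in code (a sketch; −1.645 is the standard normal 5th percentile from Table A.3):

```python
from math import exp

mu, sigma = 5.149, 0.737
z05 = -1.645                    # 5th percentile of the standard normal
ln_x = mu + sigma * z05         # 5th percentile on the log scale
x = exp(ln_x)
print(round(ln_x, 3))   # 3.937
print(round(x, 2))      # about 51.25 (the text rounds ln(x) to 3.937 first and reports 51.265)
```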
6.10 Weibull Distribution (Optional)
Modern technology has enabled engineers to design many complicated systems
whose operation and safety depend on the reliability of the various components
making up the systems. For example, a fuse may burn out, a steel column may
buckle, or a heat-sensing device may fail. Identical components subjected to iden-
tical environmental conditions will fail at different and unpredictable times. We
have seen the role that the gamma and exponential distributions play in these
types of problems. Another distribution that has been used extensively in recent
years to deal with such problems is the Weibull distribution, introduced by the
Swedish physicist Waloddi Weibull in 1939.
Weibull Distribution: The continuous random variable X has a Weibull distribution, with parameters α and β, if its density function is given by

    f(x; α, β) = αβ x^{β−1} e^{−αx^β},  x > 0,
                 0,                      elsewhere,

where α > 0 and β > 0.
The graphs of the Weibull distribution for α = 1 and various values of the parameter β are illustrated in Figure 6.30. We see that the curves change considerably in shape for different values of the parameter β. If we let β = 1, the Weibull distribution reduces to the exponential distribution. For values of β > 1, the curves become somewhat bell shaped and resemble the normal curve but display some skewness.
The mean and variance of the Weibull distribution are stated in the following
theorem. The reader is asked to provide the proof in Exercise 6.52 on page 206.
Theorem 6.8: The mean and variance of the Weibull distribution are

    μ = α^{−1/β} Γ(1 + 1/β)  and  σ^2 = α^{−2/β} {Γ(1 + 2/β) − [Γ(1 + 1/β)]^2}.
Like the gamma and exponential distributions, the Weibull distribution is also
applied to reliability and life-testing problems such as the time to failure or
Figure 6.30: Weibull distributions (α = 1; curves shown for β = 1, 2, and 3.5).
life length of a component, measured from some specified time until it fails.
Let us represent this time to failure by the continuous random variable T, with
probability density function f(t), where f(t) is the Weibull distribution. The
Weibull distribution has inherent flexibility in that it does not require the lack
of memory property of the exponential distribution. The cumulative distribution
function (cdf) for the Weibull can be written in closed form and certainly is useful
in computing probabilities.
cdf for Weibull Distribution: The cumulative distribution function for the Weibull distribution is given by

    F(x) = 1 − e^{−αx^β},  for x ≥ 0,

for α > 0 and β > 0.
Example 6.24: The length of life X, in hours, of an item in a machine shop has a Weibull distri-
bution with α = 0.01 and β = 2. What is the probability that it fails before eight
hours of usage?
Solution: P(X < 8) = F(8) = 1 − e^{−(0.01)(8^2)} = 1 − 0.527 = 0.473.
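With the closed-form cdf, this is a one-line computation (a sketch):

```python
from math import exp

def weibull_cdf(x, alpha, beta):
    # F(x) = 1 - exp(-alpha * x^beta), x >= 0
    return 1.0 - exp(-alpha * x ** beta)

# Example 6.24: alpha = 0.01, beta = 2, failure before 8 hours
p = weibull_cdf(8, 0.01, 2)
print(round(p, 3))   # 0.473
```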
The Failure Rate for the Weibull Distribution
When the Weibull distribution applies, it is often helpful to determine the fail-
ure rate (sometimes called the hazard rate) in order to get a sense of wear or
deterioration of the component. Let us first define the reliability of a component
or product as the probability that it will function properly for at least a specified
time under specified experimental conditions. Therefore, if R(t) is defined to be
the reliability of the given component at time t, we may write

    R(t) = P(T > t) = ∫_t^∞ f(t) dt = 1 − F(t),
where F(t) is the cumulative distribution function of T. The conditional probability that a component will fail in the interval from T = t to T = t + Δt, given that it survived to time t, is

    [F(t + Δt) − F(t)]/R(t).
Dividing this ratio by Δt and taking the limit as Δt → 0, we get the failure rate, denoted by Z(t). Hence,

    Z(t) = lim_{Δt→0} {[F(t + Δt) − F(t)]/Δt} · [1/R(t)] = F′(t)/R(t) = f(t)/R(t) = f(t)/[1 − F(t)],
which expresses the failure rate in terms of the distribution of the time to failure.
Since Z(t) = f(t)/[1 − F(t)], the failure rate is given as follows:
Failure Rate for Weibull Distribution: The failure rate at time t for the Weibull distribution is given by

    Z(t) = αβ t^{β−1},  t > 0.
Interpretation of the Failure Rate
The quantity Z(t) is aptly named as a failure rate since it does quantify the rate
of change over time of the conditional probability that the component lasts an
additional Δt given that it has lasted to time t. The rate of decrease (or increase)
with time is important. The following are crucial points.
(a) If β = 1, the failure rate = α, a constant. This, as indicated earlier, is the special case of the exponential distribution, in which lack of memory prevails.
(b) If β > 1, Z(t) is an increasing function of time t, which indicates that the component wears over time.
(c) If β < 1, Z(t) is a decreasing function of time t, and hence the component strengthens or hardens over time.
For example, the item in the machine shop in Example 6.24 has β = 2, and hence it wears over time. In fact, the failure rate function is given by Z(t) = 0.02t. On the other hand, suppose the parameters were β = 3/4 and α = 2. In that case, Z(t) = 1.5/t^{1/4}, and hence the component gets stronger over time.
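The two regimes can be seen by tabulating Z(t) directly (a sketch contrasting the wearing component of Example 6.24, β = 2, with the strengthening one, β = 3/4):

```python
def weibull_failure_rate(t, alpha, beta):
    # Z(t) = alpha * beta * t^(beta - 1), t > 0
    return alpha * beta * t ** (beta - 1)

for t in (1.0, 4.0, 16.0):
    wearing = weibull_failure_rate(t, 0.01, 2)      # beta > 1: increases with t
    hardening = weibull_failure_rate(t, 2, 0.75)    # beta < 1: decreases with t
    print(t, wearing, hardening)
```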
Exercises
6.39 Use the gamma function with y = √(2x) to show that Γ(1/2) = √π.
6.40 In a certain city, the daily consumption of water
(in millions of liters) follows approximately a gamma
distribution with α = 2 and β = 3. If the daily capac-
ity of that city is 9 million liters of water, what is the
probability that on any given day the water supply is
inadequate?
6.41 If a random variable X has the gamma distribution with α = 2 and β = 1, find P(1.8 < X < 2.4).
6.42 Suppose that the time, in hours, required to
repair a heat pump is a random variable X having
a gamma distribution with parameters α = 2 and
β = 1/2. What is the probability that on the next
service call
(a) at most 1 hour will be required to repair the heat
pump?
(b) at least 2 hours will be required to repair the heat
pump?
6.43 (a) Find the mean and variance of the daily wa-
ter consumption in Exercise 6.40.
(b) According to Chebyshev’s theorem, there is a prob-
ability of at least 3/4 that the water consumption
on any given day will fall within what interval?
6.44 In a certain city, the daily consumption of electric power, in millions of kilowatt-hours, is a random variable X having a gamma distribution with mean μ = 6 and variance σ^2 = 12.
(a) Find the values of α and β.
(b) Find the probability that on any given day the daily
power consumption will exceed 12 million kilowatt-
hours.
6.45 The length of time for one individual to be
served at a cafeteria is a random variable having an ex-
ponential distribution with a mean of 4 minutes. What
is the probability that a person is served in less than 3
minutes on at least 4 of the next 6 days?
6.46 The life, in years, of a certain type of electrical
switch has an exponential distribution with an average
life β = 2. If 100 of these switches are installed in dif-
ferent systems, what is the probability that at most 30
fail during the first year?
6.47 Suppose that the service life, in years, of a hear-
ing aid battery is a random variable having a Weibull
distribution with α = 1/2 and β = 2.
(a) How long can such a battery be expected to last?
(b) What is the probability that such a battery will be
operating after 2 years?
6.48 Derive the mean and variance of the beta distri-
bution.
6.49 Suppose the random variable X follows a beta
distribution with α = 1 and β = 3.
(a) Determine the mean and median of X.
(b) Determine the variance of X.
(c) Find the probability that X > 1/3.
6.50 If the proportion of a brand of television set re-
quiring service during the first year of operation is a
random variable having a beta distribution with α = 3
and β = 2, what is the probability that at least 80% of
the new models of this brand sold this year will require
service during their first year of operation?
6.51 The lives of a certain automobile seal have the Weibull distribution with failure rate Z(t) = 1/√t. Find the probability that such a seal is still intact after 4 years.
6.52 Derive the mean and variance of the Weibull dis-
tribution.
6.53 In a biomedical research study, it was deter-
mined that the survival time, in weeks, of an animal
subjected to a certain exposure of gamma radiation has
a gamma distribution with α = 5 and β = 10.
(a) What is the mean survival time of a randomly se-
lected animal of the type used in the experiment?
(b) What is the standard deviation of survival time?
(c) What is the probability that an animal survives
more than 30 weeks?
6.54 The lifetime, in weeks, of a certain type of transistor is known to follow a gamma distribution with mean 10 weeks and standard deviation √50 weeks.
(a) What is the probability that a transistor of this
type will last at most 50 weeks?
(b) What is the probability that a transistor of this
type will not survive the first 10 weeks?
6.55 Computer response time is an important appli-
cation of the gamma and exponential distributions.
Suppose that a study of a certain computer system
reveals that the response time, in seconds, has an ex-
ponential distribution with a mean of 3 seconds.
(a) What is the probability that response time exceeds
5 seconds?
(b) What is the probability that response time exceeds
10 seconds?
6.56 Rate data often follow a lognormal distribution.
Average power usage (dB per hour) for a particular
company is studied and is known to have a lognormal
distribution with parameters μ = 4 and σ = 2. What
is the probability that the company uses more than 270
dB during any particular hour?
6.57 For Exercise 6.56, what is the mean power usage
(average dB per hour)? What is the variance?
6.58 The number of automobiles that arrive at a cer-
tain intersection per minute has a Poisson distribution
with a mean of 5. Interest centers around the time that
elapses before 10 automobiles appear at the intersec-
tion.
(a) What is the probability that more than 10 auto-
mobiles appear at the intersection during any given
minute of time?
(b) What is the probability that more than 2 minutes
elapse before 10 cars arrive?
6.59 Consider the information in Exercise 6.58.
(a) What is the probability that more than 1 minute
elapses between arrivals?
(b) What is the mean number of minutes that elapse
between arrivals?
6.60 Show that the failure-rate function is given by

    Z(t) = αβ t^{β−1},  t > 0,

if and only if the time to failure distribution is the Weibull distribution

    f(t) = αβ t^{β−1} e^{−αt^β},  t > 0.
Review Exercises
6.61 According to a study published by a group of so-
ciologists at the University of Massachusetts, approx-
imately 49% of the Valium users in the state of Mas-
sachusetts are white-collar workers. What is the prob-
ability that between 482 and 510, inclusive, of the next
1000 randomly selected Valium users from this state
are white-collar workers?
6.62 The exponential distribution is frequently ap-
plied to the waiting times between successes in a Pois-
son process. If the number of calls received per hour
by a telephone answering service is a Poisson random
variable with parameter λ = 6, we know that the time,
in hours, between successive calls has an exponential
distribution with parameter β =1/6. What is the prob-
ability of waiting more than 15 minutes between any
two successive calls?
6.63 When α is a positive integer n, the gamma distribution is also known as the Erlang distribution. Setting α = n in the gamma distribution on page 195, the Erlang distribution is

    f(x) = x^{n−1} e^{−x/β} / [β^n (n − 1)!],  x > 0,
           0,                                  elsewhere.
It can be shown that if the times between successive
events are independent, each having an exponential
distribution with parameter β, then the total elapsed
waiting time X until all n events occur has the Erlang
distribution. Referring to Review Exercise 6.62, what
is the probability that the next 3 calls will be received
within the next 30 minutes?
6.64 A manufacturer of a certain type of large ma-
chine wishes to buy rivets from one of two manufac-
turers. It is important that the breaking strength of
each rivet exceed 10,000 psi. Two manufacturers (A
and B) offer this type of rivet and both have rivets
whose breaking strength is normally distributed. The
mean breaking strengths for manufacturers A and B
are 14,000 psi and 13,000 psi, respectively. The stan-
dard deviations are 2000 psi and 1000 psi, respectively.
Which manufacturer will produce, on the average, the fewest defective rivets?
6.65 According to a recent census, almost 65% of all
households in the United States were composed of only
one or two persons. Assuming that this percentage is
still valid today, what is the probability that between
590 and 625, inclusive, of the next 1000 randomly se-
lected households in America consist of either one or
two persons?
6.66 A certain type of device has an advertised fail-
ure rate of 0.01 per hour. The failure rate is constant
and the exponential distribution applies.
(a) What is the mean time to failure?
(b) What is the probability that 200 hours will pass
before a failure is observed?
6.67 In a chemical processing plant, it is important
that the yield of a certain type of batch product stay
above 80%. If it stays below 80% for an extended pe-
riod of time, the company loses money. Occasional
defective batches are of little concern. But if several
batches per day are defective, the plant shuts down
and adjustments are made. It is known that the yield
is normally distributed with standard deviation 4%.
(a) What is the probability of a “false alarm” (yield
below 80%) when the mean yield is 85%?
(b) What is the probability that a batch will have a
yield that exceeds 80% when in fact the mean yield
is 79%?
6.68 For an electrical component with a failure rate
of once every 5 hours, it is important to consider the
time that it takes for 2 components to fail.
(a) Assuming that the gamma distribution applies,
what is the mean time that it takes for 2 compo-
nents to fail?
(b) What is the probability that 12 hours will elapse
before 2 components fail?
6.69 The elongation of a steel bar under a particular
load has been established to be normally distributed
with a mean of 0.05 inch and σ = 0.01 inch. Find the
probability that the elongation is
(a) above 0.1 inch;
(b) below 0.04 inch;
(c) between 0.025 and 0.065 inch.
6.70 A controlled satellite is known to have an error
(distance from target) that is normally distributed with
mean zero and standard deviation 4 feet. The manu-
facturer of the satellite defines a success as a firing in
which the satellite comes within 10 feet of the target.
Compute the probability that the satellite fails.
6.71 A technician plans to test a certain type of resin
developed in the laboratory to determine the nature
of the time required before bonding takes place. It
is known that the mean time to bonding is 3 hours
and the standard deviation is 0.5 hour. It will be con-
sidered an undesirable product if the bonding time is
either less than 1 hour or more than 4 hours. Com-
ment on the utility of the resin. How often would its
performance be considered undesirable? Assume that
time to bonding is normally distributed.
6.72 Consider the information in Review Exercise
6.66. What is the probability that less than 200 hours
will elapse before 2 failures occur?
6.73 For Review Exercise 6.72, what are the mean
and variance of the time that elapses before 2 failures
occur?
6.74 The average rate of water usage (thousands of
gallons per hour) by a certain community is known
to involve the lognormal distribution with parameters
μ = 5 and σ = 2. It is important for planning purposes
to get a sense of periods of high usage. What is the
probability that, for any given hour, 50,000 gallons of
water are used?
6.75 For Review Exercise 6.74, what is the mean of
the average water usage per hour in thousands of gal-
lons?
6.76 In Exercise 6.54 on page 206, the lifetime of a transistor is assumed to have a gamma distribution with mean 10 weeks and standard deviation √50 weeks.
Suppose that the gamma distribution assumption is in-
correct. Assume that the distribution is normal.
(a) What is the probability that a transistor will last
at most 50 weeks?
(b) What is the probability that a transistor will not
survive for the first 10 weeks?
(c) Comment on the difference between your results
here and those found in Exercise 6.54 on page 206.
6.77 The beta distribution has considerable applica-
tion in reliability problems in which the basic random
variable is a proportion, as in the practical scenario il-
lustrated in Exercise 6.50 on page 206. In that regard,
consider Review Exercise 3.73 on page 108. Impurities
in batches of product of a chemical process reflect a
serious problem. It is known that the proportion of
impurities Y in a batch has the density function

    f(y) = 10(1 − y)^9,  0 ≤ y ≤ 1,
           0,            elsewhere.

(a) Verify that the above is a valid density function.
(b) What is the probability that a batch is considered not acceptable (i.e., Y > 0.6)?
(c) What are the parameters α and β of the beta distribution illustrated here?
(d) The mean of the beta distribution is α/(α + β). What is the mean proportion of impurities in the batch?
(e) The variance of a beta-distributed random variable is

    σ^2 = αβ/[(α + β)^2 (α + β + 1)].

What is the variance of Y in this problem?
6.78 Consider now Review Exercise 3.74 on page 108. The density function of the time Z in minutes between calls to an electrical supply store is given by

    f(z) = (1/10) e^{−z/10},  0 < z < ∞,
           0,                 elsewhere.
(a) What is the mean time between calls?
(b) What is the variance in the time between calls?
(c) What is the probability that the time between calls
exceeds the mean?
6.79 Consider Review Exercise 6.78. Given the as-
sumption of the exponential distribution, what is the
mean number of calls per hour? What is the variance
in the number of calls per hour?
6.80 In a human factor experimental project, it has
been determined that the reaction time of a pilot to a
visual stimulus is normally distributed with a mean of
1/2 second and standard deviation of 2/5 second.
(a) What is the probability that a reaction from the
pilot takes more than 0.3 second?
(b) What reaction time is that which is exceeded 95%
of the time?
6.81 The length of time between breakdowns of an es-
sential piece of equipment is important in the decision
of the use of auxiliary equipment. An engineer thinks
that the best model for time between breakdowns of a
generator is the exponential distribution with a mean
of 15 days.
(a) If the generator has just broken down, what is the
probability that it will break down in the next 21
days?
(b) What is the probability that the generator will op-
erate for 30 days without a breakdown?
6.82 The length of life, in hours, of a drill bit in a
mechanical operation has a Weibull distribution with
α = 2 and β = 50. Find the probability that the bit
will fail before 10 hours of usage.
6.83 Derive the cdf for the Weibull distribution. [Hint: In the definition of a cdf, make the transformation z = y^β.]
6.84 Explain why the nature of the scenario in Re-
view Exercise 6.82 would likely not lend itself to the
exponential distribution.
6.85 From the relationship between the chi-squared
random variable and the gamma random variable,
prove that the mean of the chi-squared random variable
is v and the variance is 2v.
6.86 The length of time, in seconds, that a computer user takes to read his or her e-mail is distributed as a lognormal random variable with μ = 1.8 and σ^2 = 4.0.
(a) What is the probability that a user reads e-mail for
more than 20 seconds? More than a minute?
(b) What is the probability that a user reads e-mail for
a length of time that is equal to the mean of the
underlying lognormal distribution?
6.87 Group Project: Have groups of students ob-
serve the number of people who enter a specific coffee
shop or fast food restaurant over the course of an hour,
beginning at the same time every day, for two weeks.
The hour should be a time of peak traffic at the shop
or restaurant. The data collected will be the number
of customers who enter the shop in each half hour of
time. Thus, two data points will be collected each day.
Let us assume that the random variable X, the num-
ber of people entering each half hour, follows a Poisson
distribution. The students should calculate the sam-
ple mean and variance of X using the 28 data points
collected.
(a) What evidence indicates that the Poisson distribu-
tion assumption may or may not be correct?
(b) Given that X is Poisson, what is the distribution of
T, the time between arrivals into the shop during
a half hour period? Give a numerical estimate of
the parameter of that distribution.
(c) Give an estimate of the probability that the time
between two arrivals is less than 15 minutes.
(d) What is the estimated probability that the time
between two arrivals is more than 10 minutes?
(e) What is the estimated probability that 20 minutes
after the start of data collection not one customer
has appeared?
6.11 Potential Misconceptions and Hazards;
Relationship to Material in Other Chapters
Many of the hazards in the use of material in this chapter are quite similar to
those of Chapter 5. One of the biggest misuses of statistics is the assumption of
an underlying normal distribution in carrying out a type of statistical inference
when indeed it is not normal. The reader will be exposed to tests of hypotheses in
Chapters 10 through 15 in which the normality assumption is made. In addition,
however, the reader will be reminded that there are tests of goodness of fit as
well as graphical routines discussed in Chapters 8 and 10 that allow for checks on
data to determine if the normality assumption is reasonable.
Similar warnings should be conveyed regarding assumptions that are often made
concerning other distributions, apart from the normal. This chapter has presented
examples in which one is required to calculate the probability of failure of a certain item or the probability of observing a complaint during a certain time period.
Assumptions are made concerning a certain distribution type as well as values of
parameters of the distributions. Note that parameter values (for example, the
value of β for the exponential distribution) were given in the example problems.
However, in real-life problems, parameter values must be estimated from real-life experience or data. Note the emphasis placed on estimation in the projects that
appear in Chapters 1, 5, and 6. Note also the reference in Chapter 5 to parameter
estimation, which will be discussed extensively beginning in Chapter 9.
Chapter 7
Functions of Random Variables
(Optional)
7.1 Introduction
This chapter contains a broad spectrum of material. Chapters 5 and 6 deal with
specific types of distributions, both discrete and continuous. These are distribu-
tions that find use in many subject matter applications, including reliability, quality
control, and acceptance sampling. In the present chapter, we begin with a more
general topic, that of distributions of functions of random variables. General tech-
niques are introduced and illustrated by examples. This discussion is followed by
coverage of a related concept, moment-generating functions, which can be helpful
in learning about distributions of linear functions of random variables.
In standard statistical methods, the result of statistical hypothesis testing, es-
timation, or even statistical graphics does not involve a single random variable
but, rather, functions of one or more random variables. As a result, statistical
inference requires the distributions of these functions. For example, the use of
averages of random variables is common. In addition, sums and more general
linear combinations are important. We are often interested in the distribution of
sums of squares of random variables, particularly in the use of analysis of variance
techniques discussed in Chapters 11–14.
7.2 Transformations of Variables
Frequently in statistics, one encounters the need to derive the probability distribu-
tion of a function of one or more random variables. For example, suppose that X is
a discrete random variable with probability distribution f(x), and suppose further
that Y = u(X) defines a one-to-one transformation between the values of X and
Y . We wish to find the probability distribution of Y . It is important to note that
the one-to-one transformation implies that each value x is related to one, and only
one, value y = u(x) and that each value y is related to one, and only one, value
x = w(y), where w(y) is obtained by solving y = u(x) for x in terms of y.
From our discussion of discrete probability distributions in Chapter 3, it is clear
that the random variable Y assumes the value y when X assumes the value w(y).
Consequently, the probability distribution of Y is given by
g(y) = P(Y = y) = P[X = w(y)] = f[w(y)].
Theorem 7.1: Suppose that X is a discrete random variable with probability distribution f(x).
Let Y = u(X) define a one-to-one transformation between the values of X and
Y so that the equation y = u(x) can be uniquely solved for x in terms of y, say
x = w(y). Then the probability distribution of Y is
g(y) = f[w(y)].
Example 7.1: Let X be a geometric random variable with probability distribution

    f(x) = (3/4)(1/4)^{x−1},  x = 1, 2, 3, . . . .

Find the probability distribution of the random variable Y = X^2.
Solution: Since the values of X are all positive, the transformation defines a one-to-one correspondence between the x and y values, y = x^2 and x = √y. Hence

    g(y) = f(√y) = (3/4)(1/4)^{√y − 1},  y = 1, 4, 9, . . . ,
           0,                            elsewhere.
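Since the transformation merely relabels the support points, g(y) must still sum to 1 over y = 1, 4, 9, . . . ; a quick check (a sketch, truncating the geometric series where the terms are negligible):

```python
def g(sqrt_y):
    # g(y) for y = sqrt_y^2, i.e., f evaluated at x = sqrt(y)
    return (3 / 4) * (1 / 4) ** (sqrt_y - 1)

# Sum g(y) over y = 1, 4, 9, ..., i.e., over x = 1, 2, 3, ...
total = sum(g(x) for x in range(1, 60))
print(total)   # 1.0, up to the truncated tail (a geometric series with ratio 1/4)
```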
Similarly, for a two-dimensional transformation, we have the result in Theorem 7.2.
Theorem 7.2: Suppose that X1 and X2 are discrete random variables with joint probability
distribution f(x1, x2). Let Y1 = u1(X1, X2) and Y2 = u2(X1, X2) define a one-to-
one transformation between the points (x1, x2) and (y1, y2) so that the equations
y1 = u1(x1, x2) and y2 = u2(x1, x2)
may be uniquely solved for x1 and x2 in terms of y1 and y2, say x1 = w1(y1, y2)
and x2 = w2(y1, y2). Then the joint probability distribution of Y1 and Y2 is
g(y1, y2) = f[w1(y1, y2), w2(y1, y2)].
Theorem 7.2 is extremely useful for finding the distribution of some random
variable Y1 = u1(X1, X2), where X1 and X2 are discrete random variables with
joint probability distribution f(x1, x2). We simply define a second function, say
Y2 = u2(X1, X2), maintaining a one-to-one correspondence between the points
(x1, x2) and (y1, y2), and obtain the joint probability distribution g(y1, y2). The
distribution of Y1 is just the marginal distribution of g(y1, y2), found by summing
over the y2 values. Denoting the distribution of Y1 by h(y1), we can then write

    h(y1) = Σ_{y2} g(y1, y2).
Example 7.2: Let X1 and X2 be two independent random variables having Poisson distributions with parameters μ1 and μ2, respectively. Find the distribution of the random variable Y1 = X1 + X2.
Solution: Since X1 and X2 are independent, we can write

    f(x1, x2) = f(x1)f(x2) = (e^{−μ1} μ1^{x1}/x1!)(e^{−μ2} μ2^{x2}/x2!) = e^{−(μ1+μ2)} μ1^{x1} μ2^{x2}/(x1! x2!),

where x1 = 0, 1, 2, . . . and x2 = 0, 1, 2, . . . . Let us now define a second random variable, say Y2 = X2. The inverse functions are given by x1 = y1 − y2 and x2 = y2. Using Theorem 7.2, we find the joint probability distribution of Y1 and Y2 to be

    g(y1, y2) = e^{−(μ1+μ2)} μ1^{y1−y2} μ2^{y2}/[(y1 − y2)! y2!],

where y1 = 0, 1, 2, . . . and y2 = 0, 1, 2, . . . , y1. Note that since x1 ≥ 0, the transformation x1 = y1 − x2 implies that y2 and hence x2 must always be less than or equal to y1. Consequently, the marginal probability distribution of Y1 is

    h(y1) = Σ_{y2=0}^{y1} g(y1, y2) = e^{−(μ1+μ2)} Σ_{y2=0}^{y1} μ1^{y1−y2} μ2^{y2}/[(y1 − y2)! y2!]
          = [e^{−(μ1+μ2)}/y1!] Σ_{y2=0}^{y1} [y1!/(y2! (y1 − y2)!)] μ1^{y1−y2} μ2^{y2}
          = [e^{−(μ1+μ2)}/y1!] Σ_{y2=0}^{y1} (y1 choose y2) μ1^{y1−y2} μ2^{y2}.

Recognizing this sum as the binomial expansion of (μ1 + μ2)^{y1}, we obtain

    h(y1) = e^{−(μ1+μ2)} (μ1 + μ2)^{y1}/y1!,  y1 = 0, 1, 2, . . . ,

from which we conclude that the sum of two independent random variables having Poisson distributions, with parameters μ1 and μ2, has a Poisson distribution with parameter μ1 + μ2.
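The convolution result of Example 7.2 is easy to confirm numerically: summing g(y1, y2) over y2 should reproduce the Poisson(μ1 + μ2) pmf term by term (a sketch; μ1 = 1.3 and μ2 = 2.1 are arbitrary values):

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    # Poisson probability mass function
    return exp(-mu) * mu ** k / factorial(k)

def sum_pmf(y1, mu1, mu2):
    # h(y1) = sum over y2 of g(y1, y2), as derived in Example 7.2
    return sum(
        exp(-(mu1 + mu2)) * mu1 ** (y1 - y2) * mu2 ** y2
        / (factorial(y1 - y2) * factorial(y2))
        for y2 in range(y1 + 1)
    )

mu1, mu2 = 1.3, 2.1
for y in range(8):
    print(y, sum_pmf(y, mu1, mu2), poisson_pmf(y, mu1 + mu2))  # columns agree
```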
To find the probability distribution of the random variable Y = u(X) when
X is a continuous random variable and the transformation is one-to-one, we shall
need Theorem 7.3. The proof of the theorem is left to the reader.
Theorem 7.3: Suppose that X is a continuous random variable with probability distribution
f(x). Let Y = u(X) define a one-to-one correspondence between the values of X
and Y so that the equation y = u(x) can be uniquely solved for x in terms of y,
say x = w(y). Then the probability distribution of Y is
g(y) = f[w(y)]|J|,
where J = w′(y) and is called the Jacobian of the transformation.
Example 7.3: Let X be a continuous random variable with probability distribution

    f(x) = x/12,  1 < x < 5,
           0,     elsewhere.

Find the probability distribution of the random variable Y = 2X − 3.
Solution: The inverse solution of y = 2x − 3 yields x = (y + 3)/2, from which we obtain J = w′(y) = dx/dy = 1/2. Therefore, using Theorem 7.3, we find the density function of Y to be

    g(y) = [(y + 3)/2]/12 · (1/2) = (y + 3)/48,  −1 < y < 7,
           0,                                    elsewhere.
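A sanity check on Example 7.3: g should integrate to 1 over (−1, 7), and the cdf of Y computed from g should match P(X ≤ (t + 3)/2) computed from f. The sketch below uses closed-form antiderivatives we derived by integrating the two densities, an extra step not shown in the text:

```python
def F_X(x):
    # cdf of f(x) = x/12 on (1, 5): integrating gives (x^2 - 1)/24
    return (x * x - 1) / 24

def G_Y(y):
    # cdf of g(y) = (y + 3)/48 on (-1, 7): integrating gives ((y + 3)^2 - 4)/96
    return ((y + 3) ** 2 - 4) / 96

print(G_Y(7))   # 1.0: g is a valid density
for t in (-1, 0, 2, 5, 7):
    print(G_Y(t), F_X((t + 3) / 2))   # the two cdfs agree at every t
```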
To find the joint probability distribution of the random variables Y1 = u1(X1, X2)
and Y2 = u2(X1, X2) when X1 and X2 are continuous and the transformation is
one-to-one, we need an additional theorem, analogous to Theorem 7.2, which we
state without proof.
Theorem 7.4: Suppose that X1 and X2 are continuous random variables with joint probability distribution f(x1, x2). Let Y1 = u1(X1, X2) and Y2 = u2(X1, X2) define a one-to-one transformation between the points (x1, x2) and (y1, y2) so that the equations y1 = u1(x1, x2) and y2 = u2(x1, x2) may be uniquely solved for x1 and x2 in terms of y1 and y2, say x1 = w1(y1, y2) and x2 = w2(y1, y2). Then the joint probability distribution of Y1 and Y2 is

    g(y1, y2) = f[w1(y1, y2), w2(y1, y2)] |J|,

where the Jacobian is the 2 × 2 determinant

    J = | ∂x1/∂y1  ∂x1/∂y2 |
        | ∂x2/∂y1  ∂x2/∂y2 |

and ∂x1/∂y1 is simply the derivative of x1 = w1(y1, y2) with respect to y1 with y2 held constant, referred to in calculus as the partial derivative of x1 with respect to y1. The other partial derivatives are defined in a similar manner.
Example 7.4: Let X1 and X2 be two continuous random variables with joint probability distribution

    f(x1, x2) = 4 x1 x2,  0 < x1 < 1, 0 < x2 < 1,
                0,        elsewhere.

Find the joint probability distribution of Y1 = X1^2 and Y2 = X1X2.
Solution: The inverse solutions of y1 = x1^2 and y2 = x1x2 are x1 = √y1 and x2 = y2/√y1, from which we obtain

    J = | 1/(2√y1)         0     |
        | −y2/(2y1^{3/2})  1/√y1 | = 1/(2y1).

To determine the set B of points in the y1y2 plane into which the set A of points in the x1x2 plane is mapped, we write

    x1 = √y1  and  x2 = y2/√y1.

Then setting x1 = 0, x2 = 0, x1 = 1, and x2 = 1, the boundaries of set A are transformed to y1 = 0, y2 = 0, y1 = 1, and y2 = √y1, or y2^2 = y1. The two regions are illustrated in Figure 7.1. Clearly, the transformation is one-to-one, mapping the set A = {(x1, x2) | 0 < x1 < 1, 0 < x2 < 1} into the set B = {(y1, y2) | y2^2 < y1 < 1, 0 < y2 < 1}. From Theorem 7.4 the joint probability distribution of Y1 and Y2 is

    g(y1, y2) = 4(√y1)(y2/√y1) · 1/(2y1) = 2y2/y1,  y2^2 < y1 < 1, 0 < y2 < 1,
                0,                                   elsewhere.
Figure 7.1: Mapping set A into set B. (Left panel: the unit square A in the x_1x_2 plane, bounded by x_1 = 0, x_1 = 1, x_2 = 0, and x_2 = 1. Right panel: its image B in the y_1y_2 plane, bounded by y_1 = 0, y_1 = 1, y_2 = 0, and the curve y_2^2 = y_1.)
Problems frequently arise when we wish to find the probability distribution
of the random variable Y = u(X) when X is a continuous random variable and
the transformation is not one-to-one. That is, to each value x there corresponds
exactly one value y, but to each y value there corresponds more than one x value.
For example, suppose that f(x) is positive over the interval −1 < x < 2 and
zero elsewhere. Consider the transformation y = x^2. In this case, x = \pm\sqrt{y} for
0 < y < 1 and x = \sqrt{y} for 1 < y < 4. For the interval 1 < y < 4, the probability
distribution of Y is found as before, using Theorem 7.3. That is,
g(y) = f[w(y)]\,|J| = \frac{f(\sqrt{y})}{2\sqrt{y}}, \quad 1 < y < 4.
However, when 0 < y < 1, we may partition the interval −1 < x < 1 to obtain the
two inverse functions
x = -\sqrt{y},\ -1 < x < 0, \quad \text{and} \quad x = \sqrt{y},\ 0 < x < 1.
216 Chapter 7 Functions of Random Variables (Optional)
Then to every y value there corresponds a single x value for each partition. From
Figure 7.2 we see that
P(a < Y < b) = P(-\sqrt{b} < X < -\sqrt{a}) + P(\sqrt{a} < X < \sqrt{b})
= \int_{-\sqrt{b}}^{-\sqrt{a}} f(x)\,dx + \int_{\sqrt{a}}^{\sqrt{b}} f(x)\,dx.
Figure 7.2: Decreasing and increasing function. (The curve y = x^2 over −1 < x < 1 decreases for x < 0 and increases for x > 0; the interval a < y < b on the vertical axis corresponds to the two intervals -\sqrt{b} < x < -\sqrt{a} and \sqrt{a} < x < \sqrt{b} on the horizontal axis.)
Changing the variable of integration from x to y, we obtain
P(a < Y < b) = \int_{b}^{a} f(-\sqrt{y})\,J_1\,dy + \int_{a}^{b} f(\sqrt{y})\,J_2\,dy
= -\int_{a}^{b} f(-\sqrt{y})\,J_1\,dy + \int_{a}^{b} f(\sqrt{y})\,J_2\,dy,
where
J_1 = \frac{d(-\sqrt{y})}{dy} = \frac{-1}{2\sqrt{y}} = -|J_1|
and
J_2 = \frac{d(\sqrt{y})}{dy} = \frac{1}{2\sqrt{y}} = |J_2|.
Hence, we can write
P(a < Y < b) = \int_{a}^{b} \left[ f(-\sqrt{y})\,|J_1| + f(\sqrt{y})\,|J_2| \right] dy,
and then
g(y) = f(-\sqrt{y})\,|J_1| + f(\sqrt{y})\,|J_2| = \frac{f(-\sqrt{y}) + f(\sqrt{y})}{2\sqrt{y}}, \quad 0 < y < 1.
The probability distribution of Y for 0 < y < 4 may now be written
g(y) = \begin{cases} \dfrac{f(-\sqrt{y}) + f(\sqrt{y})}{2\sqrt{y}}, & 0 < y < 1, \\[6pt] \dfrac{f(\sqrt{y})}{2\sqrt{y}}, & 1 < y < 4, \\[6pt] 0, & \text{elsewhere.} \end{cases}
This procedure for finding g(y) when 0  y  1 is generalized in Theorem 7.5
for k inverse functions. For transformations not one-to-one of functions of several
variables, the reader is referred to Introduction to Mathematical Statistics by Hogg,
McKean, and Craig (2005; see the Bibliography).
Theorem 7.5: Suppose that X is a continuous random variable with probability distribution
f(x). Let Y = u(X) define a transformation between the values of X and Y that
is not one-to-one. If the interval over which X is defined can be partitioned into
k mutually disjoint sets such that each of the inverse functions
x_1 = w_1(y),\ x_2 = w_2(y),\ \dots,\ x_k = w_k(y)
of y = u(x) defines a one-to-one correspondence, then the probability distribution
of Y is
g(y) = \sum_{i=1}^{k} f[w_i(y)]\,|J_i|,
where J_i = w_i'(y), i = 1, 2, \dots, k.
Example 7.5: Show that Y = (X − μ)^2/σ^2 has a chi-squared distribution with 1 degree of freedom
when X has a normal distribution with mean μ and variance σ^2.
Solution: Let Z = (X − μ)/σ, where the random variable Z has the standard normal distribution
f(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}, \quad -\infty < z < \infty.
We shall now find the distribution of the random variable Y = Z^2. The inverse
solutions of y = z^2 are z = \pm\sqrt{y}. If we designate z_1 = -\sqrt{y} and z_2 = \sqrt{y}, then
J_1 = -1/(2\sqrt{y}) and J_2 = 1/(2\sqrt{y}). Hence, by Theorem 7.5, we have
g(y) = \frac{1}{\sqrt{2\pi}}\, e^{-y/2} \left| \frac{-1}{2\sqrt{y}} \right| + \frac{1}{\sqrt{2\pi}}\, e^{-y/2} \left| \frac{1}{2\sqrt{y}} \right| = \frac{1}{\sqrt{2\pi}}\, y^{1/2-1} e^{-y/2}, \quad y > 0.
Since g(y) is a density function, it follows that
1 = \frac{1}{\sqrt{2\pi}} \int_0^{\infty} y^{1/2-1} e^{-y/2}\,dy = \frac{\Gamma(1/2)}{\sqrt{\pi}} \int_0^{\infty} \frac{y^{1/2-1} e^{-y/2}}{\sqrt{2}\,\Gamma(1/2)}\,dy = \frac{\Gamma(1/2)}{\sqrt{\pi}},
the integral being the area under a gamma probability curve with parameters
α = 1/2 and β = 2. Hence, \sqrt{\pi} = \Gamma(1/2) and the density of Y is given by
g(y) = \begin{cases} \dfrac{1}{\sqrt{2}\,\Gamma(1/2)}\, y^{1/2-1} e^{-y/2}, & y > 0, \\ 0, & \text{elsewhere,} \end{cases}
which is seen to be a chi-squared distribution with 1 degree of freedom.
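A simulation sketch (not part of the text) of Example 7.5: squaring standard normal draws should produce values whose sample mean and variance are close to 1 and 2, the mean and variance of a chi-squared variable with 1 degree of freedom.

```python
import random

# Check of Example 7.5: if Z ~ N(0, 1), then Y = Z^2 should be
# chi-squared with 1 degree of freedom (mean 1, variance 2).
random.seed(7)
n = 100_000
ys = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]
mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / (n - 1)
print(round(mean_y, 3), round(var_y, 3))
```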
7.3 Moments and Moment-Generating Functions
In this section, we concentrate on applications of moment-generating functions.
The obvious purpose of the moment-generating function is to determine moments
of random variables. However, its most important contribution is to establish
distributions of functions of random variables.
If g(X) = X^r for r = 0, 1, 2, 3, \dots, Definition 7.1 yields an expected value called
the rth moment about the origin of the random variable X, which we denote
by \mu_r'.
Definition 7.1: The rth moment about the origin of the random variable X is given by
\mu_r' = E(X^r) = \begin{cases} \sum_x x^r f(x), & \text{if } X \text{ is discrete,} \\ \int_{-\infty}^{\infty} x^r f(x)\,dx, & \text{if } X \text{ is continuous.} \end{cases}
Since the first and second moments about the origin are given by \mu_1' = E(X) and
\mu_2' = E(X^2), we can write the mean and variance of a random variable as
\mu = \mu_1' \quad \text{and} \quad \sigma^2 = \mu_2' - \mu^2.
Although the moments of a random variable can be determined directly from
Definition 7.1, an alternative procedure exists. This procedure requires us to utilize
a moment-generating function.
Definition 7.2: The moment-generating function of the random variable X is given by E(e^{tX})
and is denoted by M_X(t). Hence,
M_X(t) = E(e^{tX}) = \begin{cases} \sum_x e^{tx} f(x), & \text{if } X \text{ is discrete,} \\ \int_{-\infty}^{\infty} e^{tx} f(x)\,dx, & \text{if } X \text{ is continuous.} \end{cases}
Moment-generating functions will exist only if the sum or integral of Definition
7.2 converges. If a moment-generating function of a random variable X does exist,
it can be used to generate all the moments of that variable. The method is described
in Theorem 7.6 without proof.
Theorem 7.6: Let X be a random variable with moment-generating function M_X(t). Then
\left. \frac{d^r M_X(t)}{dt^r} \right|_{t=0} = \mu_r'.
Example 7.6: Find the moment-generating function of the binomial random variable X and then
use it to verify that μ = np and σ^2 = npq.
Solution: From Definition 7.2 we have
M_X(t) = \sum_{x=0}^{n} e^{tx} \binom{n}{x} p^x q^{n-x} = \sum_{x=0}^{n} \binom{n}{x} (pe^t)^x q^{n-x}.
Recognizing this last sum as the binomial expansion of (pe^t + q)^n, we obtain
M_X(t) = (pe^t + q)^n.
Now
\frac{dM_X(t)}{dt} = n(pe^t + q)^{n-1} pe^t
and
\frac{d^2 M_X(t)}{dt^2} = np[e^t (n-1)(pe^t + q)^{n-2} pe^t + (pe^t + q)^{n-1} e^t].
Setting t = 0, we get
\mu_1' = np \quad \text{and} \quad \mu_2' = np[(n-1)p + 1].
Therefore,
\mu = \mu_1' = np \quad \text{and} \quad \sigma^2 = \mu_2' - \mu^2 = np(1 - p) = npq,
which agrees with the results obtained in Chapter 5.
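Theorem 7.6 can be checked numerically on this moment-generating function (an illustrative sketch; the values n = 10 and p = 0.3 are arbitrary choices): finite-difference derivatives of M(t) at t = 0 should reproduce \mu_1' = np = 3 and \mu_2' = np[(n-1)p + 1] = 11.1.

```python
import math

# Approximate the first two derivatives of the binomial MGF
# M(t) = (p*e^t + q)^n at t = 0 by central finite differences.
n, p = 10, 0.3
q = 1 - p
M = lambda t: (p * math.exp(t) + q) ** n

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)                 # ~ mu'_1 = np = 3
m2 = (M(h) - 2 * M(0.0) + M(-h)) / (h * h)    # ~ mu'_2 = np[(n-1)p + 1] = 11.1
print(round(m1, 4), round(m2, 4))
```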
Example 7.7: Show that the moment-generating function of the random variable X having a
normal probability distribution with mean μ and variance σ^2 is given by
M_X(t) = \exp\!\left( \mu t + \frac{1}{2}\sigma^2 t^2 \right).
Solution: From Definition 7.2 the moment-generating function of the normal random variable
X is
M_X(t) = \int_{-\infty}^{\infty} e^{tx} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[ -\frac{1}{2} \left( \frac{x-\mu}{\sigma} \right)^2 \right] dx
= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[ -\frac{x^2 - 2(\mu + t\sigma^2)x + \mu^2}{2\sigma^2} \right] dx.
Completing the square in the exponent, we can write
x^2 - 2(\mu + t\sigma^2)x + \mu^2 = [x - (\mu + t\sigma^2)]^2 - 2\mu t\sigma^2 - t^2\sigma^4
and then
M_X(t) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left\{ -\frac{[x - (\mu + t\sigma^2)]^2 - 2\mu t\sigma^2 - t^2\sigma^4}{2\sigma^2} \right\} dx
= \exp\!\left( \frac{2\mu t + \sigma^2 t^2}{2} \right) \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left\{ -\frac{[x - (\mu + t\sigma^2)]^2}{2\sigma^2} \right\} dx.
Let w = [x - (\mu + t\sigma^2)]/\sigma; then dx = \sigma\,dw and
M_X(t) = \exp\!\left( \mu t + \frac{1}{2}\sigma^2 t^2 \right) \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-w^2/2}\,dw = \exp\!\left( \mu t + \frac{1}{2}\sigma^2 t^2 \right),
since the last integral represents the area under a standard normal density curve
and hence equals 1.
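The closed form of Example 7.7 can be checked by direct numerical integration (a sketch; the particular values of μ, σ, and t below are arbitrary choices):

```python
import math

# Numerically integrate e^{tx} f(x) for a N(mu, sigma^2) density and
# compare with the closed form exp(mu*t + sigma^2 * t^2 / 2).
mu, sigma, t = 1.5, 2.0, 0.3

def integrand(x):
    z = (x - mu) / sigma
    return math.exp(t * x) * math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Composite midpoint rule over a wide interval; for this small t the
# integrand is negligible outside mu +/- 12*sigma.
a, b, m = mu - 12 * sigma, mu + 12 * sigma, 100_000
h = (b - a) / m
numeric = h * sum(integrand(a + (i + 0.5) * h) for i in range(m))
closed_form = math.exp(mu * t + 0.5 * sigma ** 2 * t ** 2)
print(round(numeric, 6), round(closed_form, 6))
```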
Although the method of transforming variables provides an effective way of
finding the distribution of a function of several variables, there is an alternative
and often preferred procedure when the function in question is a linear combination
of independent random variables. This procedure utilizes the properties of moment-
generating functions discussed in the following four theorems. In keeping with the
mathematical scope of this book, we state Theorem 7.7 without proof.
Theorem 7.7: (Uniqueness Theorem) Let X and Y be two random variables with moment-
generating functions MX(t) and MY (t), respectively. If MX(t) = MY (t) for all
values of t, then X and Y have the same probability distribution.
Theorem 7.8: M_{X+a}(t) = e^{at} M_X(t).
Proof: M_{X+a}(t) = E[e^{t(X+a)}] = e^{at} E(e^{tX}) = e^{at} M_X(t).
Theorem 7.9: M_{aX}(t) = M_X(at).
Proof: M_{aX}(t) = E[e^{t(aX)}] = E[e^{(at)X}] = M_X(at).
Theorem 7.10: If X_1, X_2, \dots, X_n are independent random variables with moment-generating functions M_{X_1}(t), M_{X_2}(t), \dots, M_{X_n}(t), respectively, and Y = X_1 + X_2 + \cdots + X_n, then
M_Y(t) = M_{X_1}(t)\, M_{X_2}(t) \cdots M_{X_n}(t).
The proof of Theorem 7.10 is left for the reader.
Theorems 7.7 through 7.10 are vital for understanding moment-generating functions.
An example follows to illustrate. There are many situations in which we need
to know the distribution of the sum of random variables. We may use Theorems
7.7 and 7.10 and the result of Exercise 7.19 on page 224 to find the distribution
of a sum of two independent Poisson random variables with moment-generating
functions given by
M_{X_1}(t) = e^{\mu_1(e^t - 1)} \quad \text{and} \quad M_{X_2}(t) = e^{\mu_2(e^t - 1)},
respectively. According to Theorem 7.10, the moment-generating function of the
random variable Y_1 = X_1 + X_2 is
M_{Y_1}(t) = M_{X_1}(t)\, M_{X_2}(t) = e^{\mu_1(e^t - 1)}\, e^{\mu_2(e^t - 1)} = e^{(\mu_1 + \mu_2)(e^t - 1)},
which we immediately identify as the moment-generating function of a random
variable having a Poisson distribution with the parameter μ_1 + μ_2. Hence, according
to Theorem 7.7, we again conclude that the sum of two independent random
variables having Poisson distributions, with parameters μ_1 and μ_2, has a Poisson
distribution with parameter μ_1 + μ_2.
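A simulation sketch of this reproductive property (not from the text; the means 2.0 and 3.5 below are arbitrary choices): the sum of independent Poisson(μ1) and Poisson(μ2) draws should have both sample mean and sample variance near μ1 + μ2.

```python
import math, random

# Poisson samples via Knuth's multiplication method (fine for small means).
def poisson(mu, rng):
    L, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

rng = random.Random(3)
mu1, mu2, n = 2.0, 3.5, 100_000
sums = [poisson(mu1, rng) + poisson(mu2, rng) for _ in range(n)]

# For a Poisson(mu1 + mu2) variable, mean and variance both equal 5.5.
mean_s = sum(sums) / n
var_s = sum((s - mean_s) ** 2 for s in sums) / (n - 1)
print(round(mean_s, 3), round(var_s, 3))
```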
Linear Combinations of Random Variables
In applied statistics one frequently needs to know the probability distribution of
a linear combination of independent normal random variables. Let us obtain the
distribution of the random variable Y = a_1X_1 + a_2X_2 when X_1 is a normal variable
with mean μ_1 and variance σ_1^2 and X_2 is also a normal variable but independent
of X_1 with mean μ_2 and variance σ_2^2. First, by Theorem 7.10, we find
M_Y(t) = M_{a_1X_1}(t)\, M_{a_2X_2}(t),
and then, using Theorem 7.9, we find
M_Y(t) = M_{X_1}(a_1t)\, M_{X_2}(a_2t).
Substituting a_1t for t and then a_2t for t in the moment-generating function of the
normal distribution derived in Example 7.7, we have
M_Y(t) = \exp(a_1\mu_1 t + a_1^2\sigma_1^2 t^2/2 + a_2\mu_2 t + a_2^2\sigma_2^2 t^2/2)
= \exp[(a_1\mu_1 + a_2\mu_2)t + (a_1^2\sigma_1^2 + a_2^2\sigma_2^2)t^2/2],
which we recognize as the moment-generating function of a distribution that is
normal with mean a_1\mu_1 + a_2\mu_2 and variance a_1^2\sigma_1^2 + a_2^2\sigma_2^2.
Generalizing to the case of n independent normal variables, we state the fol-
lowing result.
Theorem 7.11: If X_1, X_2, \dots, X_n are independent random variables having normal distributions
with means μ_1, μ_2, \dots, μ_n and variances σ_1^2, σ_2^2, \dots, σ_n^2, respectively, then the random
variable
Y = a_1X_1 + a_2X_2 + \cdots + a_nX_n
has a normal distribution with mean
\mu_Y = a_1\mu_1 + a_2\mu_2 + \cdots + a_n\mu_n
and variance
\sigma_Y^2 = a_1^2\sigma_1^2 + a_2^2\sigma_2^2 + \cdots + a_n^2\sigma_n^2.
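Theorem 7.11 is easy to probe by simulation (an illustrative sketch with arbitrarily chosen coefficients and parameters):

```python
import random

# Check Theorem 7.11 for n = 2: with X1 ~ N(1, 2^2), X2 ~ N(-3, 1^2)
# independent and Y = 2*X1 + 5*X2, the theorem gives
# mean 2*1 + 5*(-3) = -13 and variance 4*4 + 25*1 = 41.
random.seed(11)
n = 100_000
ys = [2 * random.gauss(1, 2) + 5 * random.gauss(-3, 1) for _ in range(n)]
mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / (n - 1)
print(round(mean_y, 2), round(var_y, 2))
```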
It is now evident that the Poisson distribution and the normal distribution
possess a reproductive property in that the sum of independent random variables
having either of these distributions is a random variable that also has the same type
of distribution. The chi-squared distribution also has this reproductive property.
Theorem 7.12: If X_1, X_2, \dots, X_n are mutually independent random variables that have, respectively,
chi-squared distributions with v_1, v_2, \dots, v_n degrees of freedom, then the
random variable
Y = X_1 + X_2 + \cdots + X_n
has a chi-squared distribution with v = v_1 + v_2 + \cdots + v_n degrees of freedom.
Proof: By Theorem 7.10 and Exercise 7.21,
M_Y(t) = M_{X_1}(t)\, M_{X_2}(t) \cdots M_{X_n}(t) \quad \text{and} \quad M_{X_i}(t) = (1 - 2t)^{-v_i/2}, \quad i = 1, 2, \dots, n.
Therefore,
M_Y(t) = (1 - 2t)^{-v_1/2} (1 - 2t)^{-v_2/2} \cdots (1 - 2t)^{-v_n/2} = (1 - 2t)^{-(v_1 + v_2 + \cdots + v_n)/2},
which we recognize as the moment-generating function of a chi-squared distribution
with v = v_1 + v_2 + \cdots + v_n degrees of freedom.
Corollary 7.1: If X_1, X_2, \dots, X_n are independent random variables having identical normal distributions
with mean μ and variance σ^2, then the random variable
Y = \sum_{i=1}^{n} \left( \frac{X_i - \mu}{\sigma} \right)^2
has a chi-squared distribution with v = n degrees of freedom.
This corollary is an immediate consequence of Example 7.5. It establishes a re-
lationship between the very important chi-squared distribution and the normal
distribution. It also should provide the reader with a clear idea of what we mean
by the parameter that we call degrees of freedom. In future chapters, the notion
of degrees of freedom will play an increasingly important role.
Corollary 7.2: If X_1, X_2, \dots, X_n are independent random variables and X_i follows a normal distribution
with mean μ_i and variance σ_i^2 for i = 1, 2, \dots, n, then the random
variable
Y = \sum_{i=1}^{n} \left( \frac{X_i - \mu_i}{\sigma_i} \right)^2
has a chi-squared distribution with v = n degrees of freedom.
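Corollary 7.2 can likewise be checked by simulation (a sketch; the four (μ_i, σ_i) pairs below are arbitrary choices):

```python
import random

# Sum of n = 4 squared standardized normals should be chi-squared with
# 4 degrees of freedom (mean 4, variance 8).
random.seed(5)
params = [(0, 1), (2, 3), (-1, 0.5), (10, 2)]   # (mu_i, sigma_i), arbitrary
reps = 100_000
ys = []
for _ in range(reps):
    y = sum(((random.gauss(mu, s) - mu) / s) ** 2 for mu, s in params)
    ys.append(y)
mean_y = sum(ys) / reps
var_y = sum((y - mean_y) ** 2 for y in ys) / (reps - 1)
print(round(mean_y, 3), round(var_y, 3))
```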
Exercises
7.1 Let X be a random variable with probability distribution
f(x) = \begin{cases} \dfrac{1}{3}, & x = 1, 2, 3, \\ 0, & \text{elsewhere.} \end{cases}
Find the probability distribution of the random variable Y = 2X − 1.

7.2 Let X be a binomial random variable with probability distribution
f(x) = \begin{cases} \dbinom{3}{x} \left( \dfrac{2}{5} \right)^x \left( \dfrac{3}{5} \right)^{3-x}, & x = 0, 1, 2, 3, \\ 0, & \text{elsewhere.} \end{cases}
Find the probability distribution of the random variable Y = X^2.

7.3 Let X1 and X2 be discrete random variables with the joint multinomial distribution
f(x_1, x_2) = \binom{2}{x_1, x_2,\, 2 - x_1 - x_2} \left( \frac{1}{4} \right)^{x_1} \left( \frac{1}{3} \right)^{x_2} \left( \frac{5}{12} \right)^{2 - x_1 - x_2}
for x_1 = 0, 1, 2; x_2 = 0, 1, 2; x_1 + x_2 ≤ 2; and zero elsewhere. Find the joint probability distribution of Y_1 = X_1 + X_2 and Y_2 = X_1 − X_2.

7.4 Let X1 and X2 be discrete random variables with joint probability distribution
f(x_1, x_2) = \begin{cases} \dfrac{x_1 x_2}{18}, & x_1 = 1, 2;\ x_2 = 1, 2, 3, \\ 0, & \text{elsewhere.} \end{cases}
Find the probability distribution of the random variable Y = X_1 X_2.
7.5 Let X have the probability distribution
f(x) = \begin{cases} 1, & 0 < x < 1, \\ 0, & \text{elsewhere.} \end{cases}
Show that the random variable Y = −2 ln X has a chi-squared distribution with 2 degrees of freedom.

7.6 Given the random variable X with probability distribution
f(x) = \begin{cases} 2x, & 0 < x < 1, \\ 0, & \text{elsewhere,} \end{cases}
find the probability distribution of Y = 8X^3.

7.7 The speed of a molecule in a uniform gas at equilibrium is a random variable V whose probability distribution is given by
f(v) = \begin{cases} kv^2 e^{-bv^2}, & v > 0, \\ 0, & \text{elsewhere,} \end{cases}
where k is an appropriate constant and b depends on the absolute temperature and mass of the molecule. Find the probability distribution of the kinetic energy of the molecule W, where W = mV^2/2.

7.8 A dealer's profit, in units of $5000, on a new automobile is given by Y = X^2, where X is a random variable having the density function
f(x) = \begin{cases} 2(1 - x), & 0 < x < 1, \\ 0, & \text{elsewhere.} \end{cases}
(a) Find the probability density function of the random variable Y.
(b) Using the density function of Y, find the probability that the profit on the next new automobile sold by this dealership will be less than $500.

7.9 The hospital period, in days, for patients following treatment for a certain type of kidney disorder is a random variable Y = X + 4, where X has the density function
f(x) = \begin{cases} \dfrac{32}{(x+4)^3}, & x > 0, \\ 0, & \text{elsewhere.} \end{cases}
(a) Find the probability density function of the random variable Y.
(b) Using the density function of Y, find the probability that the hospital period for a patient following this treatment will exceed 8 days.

7.10 The random variables X and Y, representing the weights of creams and toffees, respectively, in 1-kilogram boxes of chocolates containing a mixture of creams, toffees, and cordials, have the joint density function
f(x, y) = \begin{cases} 24xy, & 0 \le x \le 1,\ 0 \le y \le 1,\ x + y \le 1, \\ 0, & \text{elsewhere.} \end{cases}
(a) Find the probability density function of the random variable Z = X + Y.
(b) Using the density function of Z, find the probability that, in a given box, the sum of the weights of creams and toffees accounts for at least 1/2 but less than 3/4 of the total weight.

7.11 The amount of kerosene, in thousands of liters, in a tank at the beginning of any day is a random amount Y from which a random amount X is sold during that day. Assume that the joint density function of these variables is given by
f(x, y) = \begin{cases} 2, & 0 < x < y,\ 0 < y < 1, \\ 0, & \text{elsewhere.} \end{cases}
Find the probability density function for the amount of kerosene left in the tank at the end of the day.

7.12 Let X1 and X2 be independent random variables each having the probability distribution
f(x) = \begin{cases} e^{-x}, & x > 0, \\ 0, & \text{elsewhere.} \end{cases}
Show that the random variables Y1 and Y2 are independent when Y1 = X1 + X2 and Y2 = X1/(X1 + X2).

7.13 A current of I amperes flowing through a resistance of R ohms varies according to the probability distribution
f(i) = \begin{cases} 6i(1 - i), & 0 < i < 1, \\ 0, & \text{elsewhere.} \end{cases}
If the resistance varies independently of the current according to the probability distribution
g(r) = \begin{cases} 2r, & 0 < r < 1, \\ 0, & \text{elsewhere,} \end{cases}
find the probability distribution for the power W = I^2 R watts.

7.14 Let X be a random variable with probability distribution
f(x) = \begin{cases} \dfrac{1+x}{2}, & -1 < x < 1, \\ 0, & \text{elsewhere.} \end{cases}
Find the probability distribution of the random variable Y = X^2.
7.15 Let X have the probability distribution
f(x) = \begin{cases} \dfrac{2(x+1)}{9}, & -1 < x < 2, \\ 0, & \text{elsewhere.} \end{cases}
Find the probability distribution of the random variable Y = X^2.

7.16 Show that the rth moment about the origin of the gamma distribution is
\mu_r' = \frac{\beta^r\, \Gamma(\alpha + r)}{\Gamma(\alpha)}.
[Hint: Substitute y = x/β in the integral defining \mu_r' and then use the gamma function to evaluate the integral.]

7.17 A random variable X has the discrete uniform distribution
f(x; k) = \begin{cases} \dfrac{1}{k}, & x = 1, 2, \dots, k, \\ 0, & \text{elsewhere.} \end{cases}
Show that the moment-generating function of X is
M_X(t) = \frac{e^t (1 - e^{kt})}{k(1 - e^t)}.

7.18 A random variable X has the geometric distribution g(x; p) = pq^{x-1} for x = 1, 2, 3, \dots. Show that the moment-generating function of X is
M_X(t) = \frac{pe^t}{1 - qe^t}, \quad t < -\ln q,
and then use M_X(t) to find the mean and variance of the geometric distribution.

7.19 A random variable X has the Poisson distribution p(x; μ) = e^{-\mu}\mu^x/x! for x = 0, 1, 2, \dots. Show that the moment-generating function of X is
M_X(t) = e^{\mu(e^t - 1)}.
Using M_X(t), find the mean and variance of the Poisson distribution.

7.20 The moment-generating function of a certain Poisson random variable X is given by
M_X(t) = e^{4(e^t - 1)}.
Find P(μ − 2σ < X < μ + 2σ).

7.21 Show that the moment-generating function of the random variable X having a chi-squared distribution with v degrees of freedom is
M_X(t) = (1 - 2t)^{-v/2}.
7.22 Using the moment-generating function of Exer-
cise 7.21, show that the mean and variance of the chi-
squared distribution with v degrees of freedom are, re-
spectively, v and 2v.
7.23 If both X and Y , distributed independently, fol-
low exponential distributions with mean parameter 1,
find the distributions of
(a) U = X + Y ;
(b) V = X/(X + Y ).
7.24 By expanding e^{tx} in a Maclaurin series and integrating term by term, show that
M_X(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx = 1 + \mu t + \mu_2' \frac{t^2}{2!} + \cdots + \mu_r' \frac{t^r}{r!} + \cdots.
Chapter 8
Fundamental Sampling
Distributions and Data Descriptions
8.1 Random Sampling
The outcome of a statistical experiment may be recorded either as a numerical
value or as a descriptive representation. When a pair of dice is tossed and the total
is the outcome of interest, we record a numerical value. However, if the students
of a certain school are given blood tests and the type of blood is of interest, then a
descriptive representation might be more useful. A person’s blood can be classified
in 8 ways: AB, A, B, or O, each with a plus or minus sign, depending on the
presence or absence of the Rh antigen.
In this chapter, we focus on sampling from distributions or populations and
study such important quantities as the sample mean and sample variance, which
will be of vital importance in future chapters. In addition, we attempt to give the
reader an introduction to the role that the sample mean and variance will play
in statistical inference in later chapters. The use of modern high-speed computers
allows the scientist or engineer to greatly enhance his or her use of formal statistical
inference with graphical techniques. Much of the time, formal inference appears
quite dry and perhaps even abstract to the practitioner or to the manager who
wishes to let statistical analysis be a guide to decision-making.
Populations and Samples
We begin this section by discussing the notions of populations and samples. Both
are mentioned in a broad fashion in Chapter 1. However, much more needs to be
presented about them here, particularly in the context of the concept of random
variables. The totality of observations with which we are concerned, whether their
number be finite or infinite, constitutes what we call a population. There was a
time when the word population referred to observations obtained from statistical
studies about people. Today, statisticians use the term to refer to observations
relevant to anything of interest, whether it be groups of people, animals, or all
possible outcomes from some complicated biological or engineering system.
Definition 8.1: A population consists of the totality of the observations with which we are
concerned.
The number of observations in the population is defined to be the size of the
population. If there are 600 students in the school whom we classified according
to blood type, we say that we have a population of size 600. The numbers on
the cards in a deck, the heights of residents in a certain city, and the lengths of
fish in a particular lake are examples of populations with finite size. In each case,
the total number of observations is a finite number. The observations obtained by
measuring the atmospheric pressure every day, from the past on into the future,
or all measurements of the depth of a lake, from any conceivable position, are
examples of populations whose sizes are infinite. Some finite populations are so
large that in theory we assume them to be infinite. This is true in the case of the
population of lifetimes of a certain type of storage battery being manufactured for
mass distribution throughout the country.
Each observation in a population is a value of a random variable X having some
probability distribution f(x). If one is inspecting items coming off an assembly line
for defects, then each observation in the population might be a value 0 or 1 of the
Bernoulli random variable X with probability distribution
b(x; 1, p) = p^x q^{1-x}, \quad x = 0, 1,
where 0 indicates a nondefective item and 1 indicates a defective item. Of course,
it is assumed that p, the probability of any item being defective, remains constant
from trial to trial. In the blood-type experiment, the random variable X represents
the type of blood and is assumed to take on values from 1 to 8. Each student is
given one of the values of the discrete random variable. The lives of the storage
batteries are values assumed by a continuous random variable having perhaps a
normal distribution. When we refer hereafter to a “binomial population,” a “nor-
mal population,” or, in general, the “population f(x),” we shall mean a population
whose observations are values of a random variable having a binomial distribution,
a normal distribution, or the probability distribution f(x). Hence, the mean and
variance of a random variable or probability distribution are also referred to as the
mean and variance of the corresponding population.
In the field of statistical inference, statisticians are interested in arriving at con-
clusions concerning a population when it is impossible or impractical to observe the
entire set of observations that make up the population. For example, in attempting
to determine the average length of life of a certain brand of light bulb, it would
be impossible to test all such bulbs if we are to have any left to sell. Exorbitant
costs can also be a prohibitive factor in studying an entire population. Therefore,
we must depend on a subset of observations from the population to help us make
inferences concerning that same population. This brings us to consider the notion
of sampling.
Definition 8.2: A sample is a subset of a population.
If our inferences from the sample to the population are to be valid, we must
obtain samples that are representative of the population. All too often we are
tempted to choose a sample by selecting the most convenient members of the
population. Such a procedure may lead to erroneous inferences concerning the
population. Any sampling procedure that produces inferences that consistently
overestimate or consistently underestimate some characteristic of the population is
said to be biased. To eliminate any possibility of bias in the sampling procedure,
it is desirable to choose a random sample in the sense that the observations are
made independently and at random.
In selecting a random sample of size n from a population f(x), let us define the
random variable Xi, i = 1, 2, . . . , n, to represent the ith measurement or sample
value that we observe. The random variables X1, X2, . . . , Xn will then constitute
a random sample from the population f(x) with numerical values x1, x2, . . . , xn if
the measurements are obtained by repeating the experiment n independent times
under essentially the same conditions. Because of the identical conditions under
which the elements of the sample are selected, it is reasonable to assume that the n
random variables X1, X2, . . . , Xn are independent and that each has the same prob-
ability distribution f(x). That is, the probability distributions of X1, X2, . . . , Xn
are, respectively, f(x1), f(x2), . . . , f(xn), and their joint probability distribution
is f(x1, x2, . . . , xn) = f(x1)f(x2) · · · f(xn). The concept of a random sample is
described formally by the following definition.
Definition 8.3: Let X1, X2, . . . , Xn be n independent random variables, each having the same
probability distribution f(x). Define X1, X2, . . . , Xn to be a random sample of
size n from the population f(x) and write its joint probability distribution as
f(x1, x2, . . . , xn) = f(x1)f(x2) · · · f(xn).
If one makes a random selection of n = 8 storage batteries from a manufacturing
process that has maintained the same specification throughout and records the
length of life for each battery, with the first measurement x1 being a value of X1,
the second measurement x2 a value of X2, and so forth, then x1, x2, . . . , x8 are
the values of the random sample X1, X2, . . . , X8. If we assume the population of
battery lives to be normal, the possible values of any Xi, i = 1, 2, . . . , 8, will be
precisely the same as those in the original population, and hence Xi has the same
identical normal distribution as X.
8.2 Some Important Statistics
Our main purpose in selecting random samples is to elicit information about the
unknown population parameters. Suppose, for example, that we wish to arrive at
a conclusion concerning the proportion of coffee-drinkers in the United States who
prefer a certain brand of coffee. It would be impossible to question every coffee-
drinking American in order to compute the value of the parameter p representing
the population proportion. Instead, a large random sample is selected and the
proportion p̂ of people in this sample favoring the brand of coffee in question is
calculated. The value p̂ is now used to make an inference concerning the true
proportion p.
Now, p̂ is a function of the observed values in the random sample; since many
random samples are possible from the same population, we would expect p̂ to vary
somewhat from sample to sample. That is, p̂ is a value of a random variable that
we represent by P. Such a random variable is called a statistic.
Definition 8.4: Any function of the random variables constituting a random sample is called a
statistic.
Location Measures of a Sample: The Sample Mean, Median, and Mode
In Chapter 4 we introduced the two parameters μ and σ^2, which measure the center
of location and the variability of a probability distribution. These are constant
population parameters and are in no way affected or influenced by the observations
of a random sample. We shall, however, define some important statistics that
describe corresponding measures of a random sample. The most commonly used
statistics for measuring the center of a set of data, arranged in order of magnitude,
are the mean, median, and mode. Although the first two of these statistics were
defined in Chapter 1, we repeat the definitions here. Let X1, X2, . . . , Xn represent
n random variables.
(a) Sample mean:
\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i.
Note that the statistic \bar{X} assumes the value \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i when X_1 assumes the
value x_1, X_2 assumes the value x_2, and so forth. The term sample mean is applied
to both the statistic \bar{X} and its computed value \bar{x}.
(b) Sample median:
\tilde{x} = \begin{cases} x_{(n+1)/2}, & \text{if } n \text{ is odd,} \\ \frac{1}{2}\left( x_{n/2} + x_{n/2+1} \right), & \text{if } n \text{ is even.} \end{cases}
The sample median is also a location measure that shows the middle value of the
sample. Examples for both the sample mean and the sample median can be found
in Section 1.3. The sample mode is defined as follows.
(c) The sample mode is the value of the sample that occurs most often.
Example 8.1: Suppose a data set consists of the following observations:
0.32 0.53 0.28 0.37 0.47 0.43 0.36 0.42 0.38 0.43.
The sample mode is 0.43, since this value occurs more than any other value.
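These three location measures can be computed with Python's standard-library statistics module, shown here for the data of Example 8.1 (the code is illustrative, not part of the text):

```python
import statistics

# Data from Example 8.1.
data = [0.32, 0.53, 0.28, 0.37, 0.47, 0.43, 0.36, 0.42, 0.38, 0.43]
print(statistics.mean(data))     # sample mean: 0.399
print(statistics.median(data))   # sample median (n even: average of middle two): 0.40
print(statistics.mode(data))     # sample mode: 0.43
```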
As we suggested in Chapter 1, a measure of location or central tendency in a
sample does not by itself give a clear indication of the nature of the sample. Thus,
a measure of variability in the sample must also be considered.
Variability Measures of a Sample: The Sample Variance, Standard Deviation,
and Range
The variability in a sample displays how the observations spread out from the
average. The reader is referred to Chapter 1 for more discussion. It is possible to
have two sets of observations with the same mean or median that differ considerably
in the variability of their measurements about the average.
Consider the following measurements, in liters, for two samples of orange juice
bottled by companies A and B:
Sample A 0.97 1.00 0.94 1.03 1.06
Sample B 1.06 1.01 0.88 0.91 1.14
Both samples have the same mean, 1.00 liter. It is obvious that company A
bottles orange juice with a more uniform content than company B. We say that
the variability, or the dispersion, of the observations from the average is less
for sample A than for sample B. Therefore, in buying orange juice, we would feel
more confident that the bottle we select will be close to the advertised average if
we buy from company A.
In Chapter 1 we introduced several measures of sample variability, including
the sample variance, sample standard deviation, and sample range. In
this chapter, we will focus mainly on the sample variance. Again, let X1, . . . , Xn
represent n random variables.
(a) Sample variance:
S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2. \qquad (8.2.1)
The computed value of S^2 for a given sample is denoted by s^2. Note that
S^2 is essentially defined to be the average of the squares of the deviations of the
observations from their mean. The reason for using n − 1 as a divisor rather than
the more obvious choice n will become apparent in Chapter 9.
Example 8.2: A comparison of coffee prices at 4 randomly selected grocery stores in San Diego
showed increases from the previous month of 12, 15, 17, and 20 cents for a 1-pound
bag. Find the variance of this random sample of price increases.
Solution: Calculating the sample mean, we get
\bar{x} = \frac{12 + 15 + 17 + 20}{4} = 16 \text{ cents.}
Therefore,
s^2 = \frac{1}{3} \sum_{i=1}^{4} (x_i - 16)^2 = \frac{(12-16)^2 + (15-16)^2 + (17-16)^2 + (20-16)^2}{3}
= \frac{(-4)^2 + (-1)^2 + (1)^2 + (4)^2}{3} = \frac{34}{3}.
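The computation in Example 8.2 can be reproduced with the standard library; statistics.variance uses the same n − 1 divisor as the definition of S^2 above.

```python
import statistics

# Price increases (in cents) from Example 8.2.
increases = [12, 15, 17, 20]
s2 = statistics.variance(increases)   # uses the n - 1 divisor
print(s2)                             # equals 34/3
```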
Whereas the expression for the sample variance best illustrates that S^2 is a
measure of variability, an alternative expression does have some merit and thus
the reader should be aware of it. The following theorem contains this expression.
230 Chapter 8 Fundamental Sampling Distributions and Data Descriptions
Theorem 8.1: If S² is the variance of a random sample of size n, we may write

S² = [1/(n(n − 1))] [ n Σᵢ₌₁ⁿ Xᵢ² − (Σᵢ₌₁ⁿ Xᵢ)² ].
Proof: By definition,

S² = [1/(n − 1)] Σᵢ₌₁ⁿ (Xᵢ − X̄)² = [1/(n − 1)] Σᵢ₌₁ⁿ (Xᵢ² − 2X̄Xᵢ + X̄²)
   = [1/(n − 1)] [ Σᵢ₌₁ⁿ Xᵢ² − 2X̄ Σᵢ₌₁ⁿ Xᵢ + nX̄² ].

Substituting X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ and multiplying numerator and denominator by n, we obtain

S² = [1/(n(n − 1))] [ n Σᵢ₌₁ⁿ Xᵢ² − (Σᵢ₌₁ⁿ Xᵢ)² ].
As in Chapter 1, the sample standard deviation and the sample range are
defined below.
(b) Sample standard deviation:

S = √(S²),

where S² is the sample variance.
Let Xmax denote the largest of the Xi values and Xmin the smallest.
(c) Sample range:
R = Xmax − Xmin.
Example 8.3: Find the variance of the data 3, 4, 5, 6, 6, and 7, representing the number of trout
caught by a random sample of 6 fishermen on June 19, 1996, at Lake Muskoka.
Solution: We find that

Σᵢ₌₁⁶ xᵢ² = 171,   Σᵢ₌₁⁶ xᵢ = 31,   and n = 6.

Hence,

s² = [1/((6)(5))] [(6)(171) − (31)²] = 13/6.

Thus, the sample standard deviation s = √(13/6) = 1.47 and the sample range is
7 − 3 = 4.
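As a numerical sanity check (not in the original text), the following Python sketch computes the trout-catch variance both ways, by the definitional formula (8.2.1) and by the shortcut formula of Theorem 8.1, and confirms that the two agree:

```python
from statistics import mean

x = [3, 4, 5, 6, 6, 7]   # trout counts from Example 8.3
n = len(x)

# Definitional form (8.2.1)
x_bar = mean(x)
s2_def = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)

# Shortcut form from Theorem 8.1
s2_short = (n * sum(xi * xi for xi in x) - sum(x) ** 2) / (n * (n - 1))

print(s2_def, s2_short)   # both equal 13/6, about 2.1667
```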
Exercises
8.1 Define suitable populations from which the fol-
lowing samples are selected:
(a) Persons in 200 homes in the city of Richmond are
called on the phone and asked to name the candi-
date they favor for election to the school board.
(b) A coin is tossed 100 times and 34 tails are recorded.
(c) Two hundred pairs of a new type of tennis shoe
were tested on the professional tour and, on aver-
age, lasted 4 months.
(d) On five different occasions it took a lawyer 21, 26,
24, 22, and 21 minutes to drive from her suburban
home to her midtown office.
8.2 The lengths of time, in minutes, that 10 patients
waited in a doctor’s office before receiving treatment
were recorded as follows: 5, 11, 9, 5, 10, 15, 6, 10, 5,
and 10. Treating the data as a random sample, find
(a) the mean;
(b) the median;
(c) the mode.
8.3 The reaction times for a random sample of 9 sub-
jects to a stimulant were recorded as 2.5, 3.6, 3.1, 4.3,
2.9, 2.3, 2.6, 4.1, and 3.4 seconds. Calculate
(a) the mean;
(b) the median.
8.4 The number of tickets issued for traffic violations
by 8 state troopers during the Memorial Day weekend
are 5, 4, 7, 7, 6, 3, 8, and 6.
(a) If these values represent the number of tickets is-
sued by a random sample of 8 state troopers from
Montgomery County in Virginia, define a suitable
population.
(b) If the values represent the number of tickets issued
by a random sample of 8 state troopers from South
Carolina, define a suitable population.
8.5 The numbers of incorrect answers on a true-false
competency test for a random sample of 15 students
were recorded as follows: 2, 1, 3, 0, 1, 3, 6, 0, 3, 3, 5,
2, 1, 4, and 2. Find
(a) the mean;
(b) the median;
(c) the mode.
8.6 Find the mean, median, and mode for the sample
whose observations, 15, 7, 8, 95, 19, 12, 8, 22, and 14,
represent the number of sick days claimed on 9 fed-
eral income tax returns. Which value appears to be
the best measure of the center of these data? State
reasons for your preference.
8.7 A random sample of employees from a local man-
ufacturing plant pledged the following donations, in
dollars, to the United Fund: 100, 40, 75, 15, 20, 100,
75, 50, 30, 10, 55, 75, 25, 50, 90, 80, 15, 25, 45, and
100. Calculate
(a) the mean;
(b) the mode.
8.8 According to ecology writer Jacqueline Killeen,
phosphates contained in household detergents pass
right through our sewer systems, causing lakes to turn
into swamps that eventually dry up into deserts. The
following data show the amount of phosphates per load
of laundry, in grams, for a random sample of various
types of detergents used according to the prescribed
directions:
Laundry Detergent   Phosphates per Load (grams)
A & P Blue Sail 48
Dash 47
Concentrated All 42
Cold Water All 42
Breeze 41
Oxydol 34
Ajax 31
Sears 30
Fab 29
Cold Power 29
Bold 29
Rinso 26
For the given phosphate data, find
(a) the mean;
(b) the median;
(c) the mode.
8.9 For the data in Exercise 8.2, find
(a) the range;
(b) the standard deviation.
8.10 For the sample of reaction times in Exercise 8.3,
calculate
(a) the range;
(b) the variance, using the formula of form (8.2.1).
8.11 For the data of Exercise 8.5, calculate the vari-
ance using the formula
(a) of form (8.2.1);
(b) in Theorem 8.1.
8.12 The tar contents of 8 brands of cigarettes se-
lected at random from the latest list released by the
Federal Trade Commission are as follows: 7.3, 8.6, 10.4,
16.1, 12.2, 15.1, 14.5, and 9.3 milligrams. Calculate
(a) the mean;
(b) the variance.
8.13 The grade-point averages of 20 college seniors
selected at random from a graduating class are as fol-
lows:
3.2 1.9 2.7 2.4 2.8
2.9 3.8 3.0 2.5 3.3
1.8 2.5 3.7 2.8 2.0
3.2 2.3 2.1 2.5 1.9
Calculate the standard deviation.
8.14 (a) Show that the sample variance is unchanged
if a constant c is added to or subtracted from each
value in the sample.
(b) Show that the sample variance becomes c² times
its original value if each observation in the sample
is multiplied by c.
8.15 Verify that the variance of the sample 4, 9, 3,
6, 4, and 7 is 5.1, and using this fact, along with the
results of Exercise 8.14, find
(a) the variance of the sample 12, 27, 9, 18, 12, and 21;
(b) the variance of the sample 9, 14, 8, 11, 9, and 12.
8.16 In the 2004-05 football season, University of
Southern California had the following score differences
for the 13 games it played.
11 49 32 3 6 38 38 30 8 40 31 5 36
Find
(a) the mean score difference;
(b) the median score difference.
8.3 Sampling Distributions
The field of statistical inference is basically concerned with generalizations and
predictions. For example, we might claim, based on the opinions of several people
interviewed on the street, that in a forthcoming election 60% of the eligible voters
in the city of Detroit favor a certain candidate. In this case, we are dealing with
a random sample of opinions from a very large finite population. As a second il-
lustration we might state that the average cost to build a residence in Charleston,
South Carolina, is between $330,000 and $335,000, based on the estimates of 3
contractors selected at random from the 30 now building in this city. The popu-
lation being sampled here is again finite but very small. Finally, let us consider a
soft-drink machine designed to dispense, on average, 240 milliliters per drink. A
company official who computes the mean of 40 drinks obtains x̄ = 236 milliliters
and, on the basis of this value, decides that the machine is still dispensing drinks
with an average content of μ = 240 milliliters. The 40 drinks represent a sam-
ple from the infinite population of possible drinks that will be dispensed by this
machine.
Inference about the Population from Sample Information
In each of the examples above, we computed a statistic from a sample selected from
the population, and from this statistic we made various statements concerning the
values of population parameters that may or may not be true. The company official
made the decision that the soft-drink machine dispenses drinks with an average
content of 240 milliliters, even though the sample mean was 236 milliliters, because
he knows from sampling theory that, if μ = 240 milliliters, such a sample value
could easily occur. In fact, if he ran similar tests, say every hour, he would expect
the values of the statistic x̄ to fluctuate above and below μ = 240 milliliters. Only
when the value of x̄ is substantially different from 240 milliliters will the company
official initiate action to adjust the machine.
Since a statistic is a random variable that depends only on the observed sample,
it must have a probability distribution.
Definition 8.5: The probability distribution of a statistic is called a sampling distribution.
The sampling distribution of a statistic depends on the distribution of the pop-
ulation, the size of the samples, and the method of choosing the samples. In the
remainder of this chapter we study several of the important sampling distribu-
tions of frequently used statistics. Applications of these sampling distributions to
problems of statistical inference are considered throughout most of the remaining
chapters. The probability distribution of X̄ is called the sampling distribution
of the mean.
What Is the Sampling Distribution of X̄?
We should view the sampling distributions of X̄ and S² as the mechanisms from
which we will be able to make inferences on the parameters μ and σ². The
sampling distribution of X̄ with sample size n is the distribution that results when
an experiment is conducted over and over (always with sample size n) and
the many values of X̄ result. This sampling distribution, then, describes the
variability of sample averages around the population mean μ. In the case of the
soft-drink machine, knowledge of the sampling distribution of X̄ arms the analyst
with the knowledge of a “typical” discrepancy between an observed x̄ value and
true μ. The same principle applies in the case of the distribution of S². The
sampling distribution produces information about the variability of s² values
around σ² in repeated experiments.
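The “over and over” description can be made concrete with a small simulation. The sketch below is ours, not the book's; it repeats the soft-drink experiment many times with samples of size n = 40, taking the machine's standard deviation to be the 15 milliliters quoted in Exercise 8.21, and examines how the resulting x̄ and s² values scatter around μ = 240 and σ² = 225:

```python
import random
from statistics import mean, variance, stdev

random.seed(1)

MU, SIGMA, N = 240.0, 15.0, 40     # machine parameters (sigma from Exercise 8.21)

xbars, s2s = [], []
for _ in range(5000):              # "over and over", always with sample size N
    drinks = [random.gauss(MU, SIGMA) for _ in range(N)]
    xbars.append(mean(drinks))
    s2s.append(variance(drinks))

print(round(mean(xbars), 1))       # close to mu = 240
print(round(stdev(xbars), 2))      # close to sigma / sqrt(N), about 2.37
print(round(mean(s2s), 1))         # close to sigma^2 = 225
```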
8.4 Sampling Distribution of Means and the Central Limit
Theorem
The first important sampling distribution to be considered is that of the mean
X̄. Suppose that a random sample of n observations is taken from a normal
population with mean μ and variance σ². Each observation Xᵢ, i = 1, 2, . . . , n, of
the random sample will then have the same normal distribution as the population
being sampled. Hence, by the reproductive property of the normal distribution
established in Theorem 7.11, we conclude that

X̄ = (1/n)(X₁ + X₂ + · · · + Xₙ)

has a normal distribution with mean

μX̄ = (1/n)(μ + μ + · · · + μ) = μ   (n terms)

and variance

σ²X̄ = (1/n²)(σ² + σ² + · · · + σ²) = σ²/n   (n terms).

If we are sampling from a population with unknown distribution, either finite
or infinite, the sampling distribution of X̄ will still be approximately normal with
mean μ and variance σ²/n, provided that the sample size is large. This amazing
result is an immediate consequence of the following theorem, called the Central
Limit Theorem.
The Central Limit Theorem
Theorem 8.2: Central Limit Theorem: If X̄ is the mean of a random sample of size n taken
from a population with mean μ and finite variance σ², then the limiting form of
the distribution of

Z = (X̄ − μ)/(σ/√n),

as n → ∞, is the standard normal distribution n(z; 0, 1).
The normal approximation for X̄ will generally be good if n ≥ 30, provided
the population distribution is not terribly skewed. If n < 30, the approximation is
good only if the population is not too different from a normal distribution and, as
stated above, if the population is known to be normal, the sampling distribution
of X̄ will follow a normal distribution exactly, no matter how small the size of the
samples.
The sample size n = 30 is a guideline to use for the Central Limit Theorem.
However, as the statement of the theorem implies, the presumption of normality
on the distribution of X̄ becomes more accurate as n grows larger. In fact, Figure
8.1 illustrates how the theorem works. It shows how the distribution of X̄ becomes
closer to normal as n grows larger, beginning with the clearly nonsymmetric dis-
tribution of an individual observation (n = 1). It also illustrates that the mean of
X̄ remains μ for any sample size and the variance of X̄ gets smaller as n increases.
Figure 8.1: Illustration of the Central Limit Theorem (distribution of X̄ for n = 1,
moderate n, and large n). [The figure shows three curves centered at μ: the clearly
nonsymmetric population (n = 1), a curve for small to moderate n, and a near-normal
curve for large n.]
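A simulation of the kind pictured in Figure 8.1 is easy to run. The sketch below is not from the text; it starts from a clearly skewed population, an exponential distribution with mean and standard deviation both equal to 1, and shows the mean of X̄ staying at μ while its spread shrinks like σ/√n:

```python
import random
from statistics import mean, stdev

random.seed(7)

def xbar(n):
    """Mean of n draws from a skewed population (exponential, mu = sigma = 1)."""
    return mean(random.expovariate(1.0) for _ in range(n))

for n in (1, 5, 30):
    draws = [xbar(n) for _ in range(4000)]
    # mean stays near mu = 1 for every n; spread is near 1/sqrt(n)
    print(n, round(mean(draws), 2), round(stdev(draws), 2))
```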
Example 8.4: An electrical firm manufactures light bulbs that have a length of life that is ap-
proximately normally distributed, with mean equal to 800 hours and a standard
deviation of 40 hours. Find the probability that a random sample of 16 bulbs will
have an average life of less than 775 hours.
Solution: The sampling distribution of X̄ will be approximately normal, with μX̄ = 800 and
σX̄ = 40/√16 = 10. The desired probability is given by the area of the shaded
region in Figure 8.2.

Figure 8.2: Area for Example 8.4. [Normal curve with σX̄ = 10 centered at 800;
shaded area to the left of x̄ = 775.]

Corresponding to x̄ = 775, we find that

z = (775 − 800)/10 = −2.5,

and therefore

P(X̄ < 775) = P(Z < −2.5) = 0.0062.
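The table lookup in Example 8.4 can be reproduced with Python's `statistics.NormalDist` (a check we have added; it is not part of the text):

```python
from statistics import NormalDist
from math import sqrt

mu, sigma, n = 800, 40, 16
xbar_dist = NormalDist(mu, sigma / sqrt(n))   # sampling distribution of X-bar

p = xbar_dist.cdf(775)                        # P(X-bar < 775)
print(round(p, 4))                            # 0.0062
```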
Inferences on the Population Mean
One very important application of the Central Limit Theorem is the determination
of reasonable values of the population mean μ. Topics such as hypothesis testing,
estimation, quality control, and many others make use of the Central Limit Theo-
rem. The following example illustrates the use of the Central Limit Theorem with
regard to its relationship with μ, the mean of the population, although the formal
application to the foregoing topics is relegated to future chapters.
In the following case study, an illustration is given which draws an inference
that makes use of the sampling distribution of X̄. In this simple illustration, μ
and σ are both known. The Central Limit Theorem and the general notion of
sampling distributions are often used to produce evidence about some important
aspect of a distribution such as a parameter of the distribution. In the case of the
Central Limit Theorem, the parameter of interest is the mean μ. The inference
made concerning μ may take one of many forms. Often there is a desire on the part
of the analyst that the data (in the form of x̄) support (or not) some predetermined
conjecture concerning the value of μ. The use of what we know about the sampling
distribution can contribute to answering this type of question. In the following case
study, the concept of hypothesis testing leads to a formal objective that we will
highlight in future chapters.
Case Study 8.1: Automobile Parts: An important manufacturing process produces cylindrical
component parts for the automotive industry. It is important that the process produce
parts having a mean diameter of 5.0 millimeters. The engineer involved conjec-
tures that the population mean is 5.0 millimeters. An experiment is conducted in
which 100 parts produced by the process are selected randomly and the diameter
measured on each. It is known that the population standard deviation is σ = 0.1
millimeter. The experiment indicates a sample average diameter of x̄ = 5.027 mil-
limeters. Does this sample information appear to support or refute the engineer’s
conjecture?
Solution: This example reflects the kind of problem often posed and solved with hypothesis
testing machinery introduced in future chapters. We will not use the formality
associated with hypothesis testing here, but we will illustrate the principles and
logic used.
Whether the data support or refute the conjecture depends on the probability
that data similar to those obtained in this experiment (x̄ = 5.027) can readily
occur when in fact μ = 5.0 (Figure 8.3). In other words, how likely is it that
one can obtain x̄ ≥ 5.027 with n = 100 if the population mean is μ = 5.0? If
this probability suggests that x̄ = 5.027 is not unreasonable, the conjecture is not
refuted. If the probability is quite low, one can certainly argue that the data do not
support the conjecture that μ = 5.0. The probability that we choose to compute
is given by P(|X̄ − 5| ≥ 0.027).
Figure 8.3: Area for Case Study 8.1. [Normal curve centered at 5.0 with shaded
tails beyond 4.973 and 5.027.]
In other words, if the mean μ is 5, what is the chance that X̄ will deviate by
as much as 0.027 millimeter?
P(|X̄ − 5| ≥ 0.027) = P(X̄ − 5 ≥ 0.027) + P(X̄ − 5 ≤ −0.027)
                   = 2P( (X̄ − 5)/(0.1/√100) ≥ 2.7 ).

Here we are simply standardizing X̄ according to the Central Limit Theorem. If
the conjecture μ = 5.0 is true, (X̄ − 5)/(0.1/√100) should follow N(0, 1). Thus,

2P( (X̄ − 5)/(0.1/√100) ≥ 2.7 ) = 2P(Z ≥ 2.7) = 2(0.0035) = 0.007.
Therefore, one would observe by chance an x̄ deviating 0.027 millimeter or more
from the mean in only about 7 in 1000 experiments. As a result, this experiment with
x̄ = 5.027 certainly does not give supporting evidence to the conjecture that μ =
5.0. In fact, it strongly refutes the conjecture!
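The two-tailed probability in Case Study 8.1 can be checked the same way (our sketch, not the book's):

```python
from statistics import NormalDist
from math import sqrt

mu0, sigma, n = 5.0, 0.1, 100
x_bar = 5.027

z = (x_bar - mu0) / (sigma / sqrt(n))   # standardized deviation of x-bar from mu0
p = 2 * (1 - NormalDist().cdf(z))       # P(|X-bar - 5| >= 0.027)
print(round(z, 1), round(p, 3))         # 2.7 0.007
```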
Example 8.5: Traveling between two campuses of a university in a city via shuttle bus takes,
on average, 28 minutes with a standard deviation of 5 minutes. In a given week,
a bus transported passengers 40 times. What is the probability that the average
transport time was more than 30 minutes? Assume the mean time is measured to
the nearest minute.
Solution: In this case, μ = 28 and σ = 5. We need to calculate the probability P(X̄ > 30)
with n = 40. Since the time is measured on a continuous scale to the nearest
minute, an x̄ greater than 30 is equivalent to x̄ ≥ 30.5. Hence,

P(X̄ > 30) = P( (X̄ − 28)/(5/√40) ≥ (30.5 − 28)/(5/√40) ) = P(Z ≥ 3.16) = 0.0008.

There is only a slight chance that the average time of one bus trip will exceed 30
minutes. An illustrative graph is shown in Figure 8.4.
Figure 8.4: Area for Example 8.5. [Normal curve centered at 28.0 with shaded
area to the right of 30.5.]
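The continuity-corrected calculation in Example 8.5 translates directly into code (again our addition, not the text's):

```python
from statistics import NormalDist
from math import sqrt

mu, sigma, n = 28, 5, 40
se = sigma / sqrt(n)

# Times are recorded to the nearest minute, so "x-bar > 30" means x-bar >= 30.5
z = (30.5 - mu) / se
p = 1 - NormalDist().cdf(z)
print(round(z, 2), round(p, 4))   # 3.16 0.0008
```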
Sampling Distribution of the Difference between Two Means
The illustration in Case Study 8.1 deals with notions of statistical inference on a
single mean μ. The engineer was interested in supporting a conjecture regarding
a single population mean. A far more important application involves two popula-
tions. A scientist or engineer may be interested in a comparative experiment in
which two manufacturing methods, 1 and 2, are to be compared. The basis for
that comparison is μ1 − μ2, the difference in the population means.
Suppose that we have two populations, the first with mean μ1 and variance
σ1², and the second with mean μ2 and variance σ2². Let the statistic X̄1 represent
the mean of a random sample of size n1 selected from the first population, and
the statistic X̄2 represent the mean of a random sample of size n2 selected from
the second population, independent of the sample from the first population. What
can we say about the sampling distribution of the difference X̄1 − X̄2 for repeated
samples of size n1 and n2? According to Theorem 8.2, the variables X̄1 and X̄2
are both approximately normally distributed with means μ1 and μ2 and variances
σ1²/n1 and σ2²/n2, respectively. This approximation improves as n1 and n2 increase.
By choosing independent samples from the two populations we ensure that the
variables X̄1 and X̄2 will be independent, and then using Theorem 7.11, with
a1 = 1 and a2 = −1, we can conclude that X̄1 − X̄2 is approximately normally
distributed with mean

μX̄1−X̄2 = μX̄1 − μX̄2 = μ1 − μ2

and variance

σ²X̄1−X̄2 = σ²X̄1 + σ²X̄2 = σ1²/n1 + σ2²/n2.
The Central Limit Theorem can be easily extended to the two-sample, two-population
case.
Theorem 8.3: If independent samples of size n1 and n2 are drawn at random from two popu-
lations, discrete or continuous, with means μ1 and μ2 and variances σ1² and σ2²,
respectively, then the sampling distribution of the differences of means, X̄1 − X̄2,
is approximately normally distributed with mean and variance given by

μX̄1−X̄2 = μ1 − μ2   and   σ²X̄1−X̄2 = σ1²/n1 + σ2²/n2.

Hence,

Z = [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2)

is approximately a standard normal variable.
If both n1 and n2 are greater than or equal to 30, the normal approximation
for the distribution of X̄1 − X̄2 is very good when the underlying distributions
are not too far away from normal. However, even when n1 and n2 are less than
30, the normal approximation is reasonably good except when the populations are
decidedly nonnormal. Of course, if both populations are normal, then X̄1 −X̄2 has
a normal distribution no matter what the sizes of n1 and n2 are.
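Theorem 8.3 can be illustrated by simulation. The populations and sample sizes below are hypothetical choices of ours, used only for illustration; the point is that the simulated differences X̄1 − X̄2 center on μ1 − μ2 with standard deviation close to √(σ1²/n1 + σ2²/n2):

```python
import random
from statistics import mean, stdev

random.seed(3)

# Hypothetical populations, chosen only for illustration
mu1, sigma1, n1 = 5.0, 2.0, 40
mu2, sigma2, n2 = 3.0, 1.0, 50

diffs = []
for _ in range(5000):
    x1 = mean(random.gauss(mu1, sigma1) for _ in range(n1))
    x2 = mean(random.gauss(mu2, sigma2) for _ in range(n2))
    diffs.append(x1 - x2)

print(round(mean(diffs), 2))    # near mu1 - mu2 = 2.0
print(round(stdev(diffs), 3))   # near sqrt(sigma1^2/n1 + sigma2^2/n2), about 0.346
```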
The utility of the sampling distribution of the difference between two sample
averages is very similar to that described in Case Study 8.1 on page 235 for the case
of a single mean. Case Study 8.2 that follows focuses on the use of the difference
between two sample means to support (or not) the conjecture that two population
means are the same.
Case Study 8.2: Paint Drying Time: Two independent experiments are run in which two different
types of paint are compared. Eighteen specimens are painted using type A, and
the drying time, in hours, is recorded for each. The same is done with type B.
The population standard deviations are both known to be 1.0.
Assuming that the mean drying time is equal for the two types of paint, find
P(X̄A −X̄B  1.0), where X̄A and X̄B are average drying times for samples of size
nA = nB = 18.
Solution: From the sampling distribution of X̄A − X̄B, we know that the distribution is
approximately normal with mean

μX̄A−X̄B = μA − μB = 0

and variance

σ²X̄A−X̄B = σA²/nA + σB²/nB = 1/18 + 1/18 = 1/9.

Figure 8.5: Area for Case Study 8.2. [Normal curve centered at μA − μB = 0
with σX̄A−X̄B = √(1/9); shaded area to the right of 1.0.]
The desired probability is given by the shaded region in Figure 8.5. Corre-
sponding to the value X̄A − X̄B = 1.0, we have

z = [1 − (μA − μB)] / √(1/9) = (1 − 0)/√(1/9) = 3.0;

so

P(Z > 3.0) = 1 − P(Z < 3.0) = 1 − 0.9987 = 0.0013.
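Both this probability and the 15-minute (0.25-hour) version discussed after the case study can be checked numerically (our sketch, not the book's):

```python
from statistics import NormalDist
from math import sqrt

se = sqrt(1.0 / 18 + 1.0 / 18)   # sqrt(1/9), with sigma_A = sigma_B = 1

for diff in (1.0, 0.25):         # a 1-hour gap, then a 15-minute gap
    z = diff / se
    p = 1 - NormalDist().cdf(z)
    print(diff, round(z, 2), round(p, 4))
# 1.0 hour  -> z = 3.0,  p = 0.0013 (rare if the means are equal)
# 0.25 hour -> z = 0.75, p = 0.2266 (quite common)
```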
What Do We Learn from Case Study 8.2?
The machinery in the calculation is based on the presumption that μA = μB.
Suppose, however, that the experiment is actually conducted for the purpose of
drawing an inference regarding the equality of μA and μB, the two population
mean drying times. If the two averages differ by as much as 1 hour (or more),
this clearly is evidence that would lead one to conclude that the population mean
drying time is not equal for the two types of paint. On the other hand, suppose
that the difference in the two sample averages is as small as, say, 15 minutes. If
μA = μB,

P[(X̄A − X̄B) > 0.25 hour] = P( (X̄A − X̄B − 0)/√(1/9) > 3/4 )
                          = P(Z > 3/4) = 1 − P(Z < 0.75) = 1 − 0.7734 = 0.2266.
Since this probability is not low, one would conclude that a difference in sample
means of 15 minutes can happen by chance (i.e., it happens frequently even though
μA = μB). As a result, that type of difference in average drying times certainly is
not a clear signal that μA ≠ μB.
As we indicated earlier, a more detailed formalism regarding this and other
types of statistical inference (e.g., hypothesis testing) will be supplied in future
chapters. The Central Limit Theorem and sampling distributions discussed in the
next three sections will also play a vital role.
Example 8.6: The television picture tubes of manufacturer A have a mean lifetime of 6.5 years
and a standard deviation of 0.9 year, while those of manufacturer B have a mean
lifetime of 6.0 years and a standard deviation of 0.8 year. What is the probability
that a random sample of 36 tubes from manufacturer A will have a mean lifetime
that is at least 1 year more than the mean lifetime of a sample of 49 tubes from
manufacturer B?
Solution: We are given the following information:
Population 1 Population 2
μ1 = 6.5 μ2 = 6.0
σ1 = 0.9 σ2 = 0.8
n1 = 36 n2 = 49
If we use Theorem 8.3, the sampling distribution of X̄1 − X̄2 will be approxi-
mately normal and will have mean and standard deviation

μX̄1−X̄2 = 6.5 − 6.0 = 0.5   and   σX̄1−X̄2 = √(0.81/36 + 0.64/49) = 0.189.
The probability that the mean lifetime for 36 tubes from manufacturer A will
be at least 1 year longer than the mean lifetime for 49 tubes from manufacturer B
is given by the area of the shaded region in Figure 8.6. Corresponding to the value
x̄1 − x̄2 = 1.0, we find that

z = (1.0 − 0.5)/0.189 = 2.65,

and hence

P(X̄1 − X̄2 ≥ 1.0) = P(Z > 2.65) = 1 − P(Z < 2.65) = 1 − 0.9960 = 0.0040.
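A quick numerical check of Example 8.6 (our addition, not the text's):

```python
from statistics import NormalDist
from math import sqrt

mu_diff = 6.5 - 6.0
se = sqrt(0.9 ** 2 / 36 + 0.8 ** 2 / 49)

p = 1 - NormalDist(mu_diff, se).cdf(1.0)   # P(X1-bar - X2-bar >= 1.0)
print(round(se, 3), round(p, 4))           # 0.189 and about 0.0040, as in the text
```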
Figure 8.6: Area for Example 8.6. [Normal curve centered at 0.5 with
σX̄1−X̄2 = 0.189; shaded area to the right of x̄1 − x̄2 = 1.0.]
More on Sampling Distribution of Means—Normal Approximation to
the Binomial Distribution
Section 6.5 presented the normal approximation to the binomial distribution at
length. Conditions were given on the parameters n and p for which the distribution
of a binomial random variable can be approximated by the normal distribution.
Examples and exercises reflected the importance of the concept of the “normal
approximation.” It turns out that the Central Limit Theorem sheds even more
light on how and why this approximation works. We certainly know that a binomial
random variable is the number X of successes in n independent trials, where the
outcome of each trial is binary. We also illustrated in Chapter 1 that the proportion
computed in such an experiment is an average of a set of 0s and 1s. Indeed, while
the proportion X/n is an average, X is the sum of this set of 0s and 1s, and both
X and X/n are approximately normal if n is sufficiently large. Of course, from
what we learned in Chapter 6, we know that there are conditions on n and p that
affect the quality of the approximation, namely np ≥ 5 and nq ≥ 5.
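This connection is easy to check numerically. The sketch below (not in the text) compares an exact binomial probability with its continuity-corrected normal approximation for an n and p satisfying np ≥ 5 and nq ≥ 5:

```python
from statistics import NormalDist
from math import sqrt, comb

n, p = 100, 0.3          # np = 30 and nq = 70, both well above 5

# Exact binomial probability P(X <= 25)
exact = sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(26))

# Normal approximation: X is approximately N(np, np(1 - p)),
# with a continuity correction of 0.5
approx = NormalDist(n * p, sqrt(n * p * (1 - p))).cdf(25.5)

print(round(exact, 4), round(approx, 4))   # the two agree closely
```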
Exercises
8.17 If all possible samples of size 16 are drawn from
a normal population with mean equal to 50 and stan-
dard deviation equal to 5, what is the probability that a
sample mean X̄ will fall in the interval from μX̄ −1.9σX̄
to μX̄ −0.4σX̄ ? Assume that the sample means can be
measured to any degree of accuracy.
8.18 If the standard deviation of the mean for the
sampling distribution of random samples of size 36
from a large or infinite population is 2, how large must
the sample size become if the standard deviation is to
be reduced to 1.2?
8.19 A certain type of thread is manufactured with a
mean tensile strength of 78.3 kilograms and a standard
deviation of 5.6 kilograms. How is the variance of the
sample mean changed when the sample size is
(a) increased from 64 to 196?
(b) decreased from 784 to 49?
8.20 Given the discrete uniform population

f(x) = 1/3 for x = 2, 4, 6, and 0 elsewhere,
find the probability that a random sample of size 54,
selected with replacement, will yield a sample mean
greater than 4.1 but less than 4.4. Assume the means
are measured to the nearest tenth.
8.21 A soft-drink machine is regulated so that the
amount of drink dispensed averages 240 milliliters with
a standard deviation of 15 milliliters. Periodically, the
machine is checked by taking a sample of 40 drinks
and computing the average content. If the mean of the
40 drinks is a value within the interval μX̄ ± 2σX̄ , the
machine is thought to be operating satisfactorily; oth-
erwise, adjustments are made. In Section 8.3, the com-
pany official found the mean of 40 drinks to be x̄ = 236
milliliters and concluded that the machine needed no
adjustment. Was this a reasonable decision?
8.22 The heights of 1000 students are approximately
normally distributed with a mean of 174.5 centimeters
and a standard deviation of 6.9 centimeters. Suppose
200 random samples of size 25 are drawn from this pop-
ulation and the means recorded to the nearest tenth of
a centimeter. Determine
(a) the mean and standard deviation of the sampling
distribution of X̄;
(b) the number of sample means that fall between 172.5
and 175.8 centimeters inclusive;
(c) the number of sample means falling below 172.0
centimeters.
8.23 The random variable X, representing the num-
ber of cherries in a cherry puff, has the following prob-
ability distribution:
x 4 5 6 7
P(X = x) 0.2 0.4 0.3 0.1
(a) Find the mean μ and the variance σ² of X.
(b) Find the mean μX̄ and the variance σ²X̄ of the mean
X̄ for random samples of 36 cherry puffs.
(c) Find the probability that the average number of
cherries in 36 cherry puffs will be less than 5.5.
8.24 If a certain machine makes electrical resistors
having a mean resistance of 40 ohms and a standard
deviation of 2 ohms, what is the probability that a
random sample of 36 of these resistors will have a com-
bined resistance of more than 1458 ohms?
8.25 The average life of a bread-making machine is 7
years, with a standard deviation of 1 year. Assuming
that the lives of these machines follow approximately
a normal distribution, find
(a) the probability that the mean life of a random sam-
ple of 9 such machines falls between 6.4 and 7.2
years;
(b) the value of x to the right of which 15% of the
means computed from random samples of size 9
would fall.
8.26 The amount of time that a drive-through bank
teller spends on a customer is a random variable with
a mean μ = 3.2 minutes and a standard deviation
σ = 1.6 minutes. If a random sample of 64 customers
is observed, find the probability that their mean time
at the teller’s window is
(a) at most 2.7 minutes;
(b) more than 3.5 minutes;
(c) at least 3.2 minutes but less than 3.4 minutes.
8.27 In a chemical process, the amount of a certain
type of impurity in the output is difficult to control
and is thus a random variable. Speculation is that the
population mean amount of the impurity is 0.20 gram
per gram of output. It is known that the standard
deviation is 0.1 gram per gram. An experiment is con-
ducted to gain more insight regarding the speculation
that μ = 0.2. The process is run on a lab scale 50
times and the sample average x̄ turns out to be 0.23
gram per gram. Comment on the speculation that the
mean amount of impurity is 0.20 gram per gram. Make
use of the Central Limit Theorem in your work.
8.28 A random sample of size 25 is taken from a nor-
mal population having a mean of 80 and a standard
deviation of 5. A second random sample of size 36
is taken from a different normal population having a
mean of 75 and a standard deviation of 3. Find the
probability that the sample mean computed from the
25 measurements will exceed the sample mean com-
puted from the 36 measurements by at least 3.4 but
less than 5.9. Assume the difference of the means to
be measured to the nearest tenth.
8.29 The distribution of heights of a certain breed of
terrier has a mean of 72 centimeters and a standard de-
viation of 10 centimeters, whereas the distribution of
heights of a certain breed of poodle has a mean of 28
centimeters with a standard deviation of 5 centimeters.
Assuming that the sample means can be measured to
any degree of accuracy, find the probability that the
sample mean for a random sample of heights of 64 ter-
riers exceeds the sample mean for a random sample of
heights of 100 poodles by at most 44.2 centimeters.
8.30 The mean score for freshmen on an aptitude test
at a certain college is 540, with a standard deviation of
50. Assume the means to be measured to any degree
of accuracy. What is the probability that two groups
selected at random, consisting of 32 and 50 students,
respectively, will differ in their mean scores by
(a) more than 20 points?
(b) an amount between 5 and 10 points?
8.31 Consider Case Study 8.2 on page 238. Suppose
18 specimens were used for each type of paint in an
experiment and x̄A −x̄B, the actual difference in mean
drying time, turned out to be 1.0.
(a) Does this seem to be a reasonable result if the
two population mean drying times truly are equal?
Make use of the result in the solution to Case Study
8.2.
(b) If someone did the experiment 10,000 times un-
der the condition that μA = μB, in how many of
those 10,000 experiments would there be a differ-
ence x̄A − x̄B that was as large as (or larger than)
1.0?
8.32 Two different box-filling machines are used to fill
cereal boxes on an assembly line. The critical measure-
ment influenced by these machines is the weight of the
product in the boxes. Engineers are quite certain that
the variance of the weight of product is σ² = 1 ounce.
Experiments are conducted using both machines with
sample sizes of 36 each. The sample averages for ma-
chines A and B are x̄A = 4.5 ounces and x̄B = 4.7
ounces. Engineers are surprised that the two sample
averages for the filling machines are so different.
(a) Use the Central Limit Theorem to determine
P(X̄B − X̄A ≥ 0.2)
under the condition that μA = μB.
(b) Do the aforementioned experiments seem to, in any
way, strongly support a conjecture that the popu-
lation means for the two machines are different?
Explain using your answer in (a).
8.33 The chemical benzene is highly toxic to hu-
mans. However, it is used in the manufacture of many
medicines, dyes, leather, and coverings. Government
regulations dictate that for any production process in-
volving benzene, the water in the output of the process
must not exceed 7950 parts per million (ppm) of ben-
zene. For a particular process of concern, the water
sample was collected by a manufacturer 25 times ran-
domly and the sample average x̄ was 7960 ppm. It is
known from historical data that the standard deviation
σ is 100 ppm.
(a) What is the probability that the sample average in
this experiment would exceed the government limit
if the population mean is equal to the limit? Use
the Central Limit Theorem.
(b) Is an observed x̄ = 7960 in this experiment firm
evidence that the population mean for the process
exceeds the government limit? Answer your ques-
tion by computing
P(X̄ ≥ 7960 | μ = 7950).
Assume that the distribution of benzene concentra-
tion is normal.
8.34 Two alloys A and B are being used to manufac-
ture a certain steel product. An experiment needs to
be designed to compare the two in terms of maximum
load capacity in tons (the maximum weight that can
be tolerated without breaking). It is known that the
two standard deviations in load capacity are equal at
5 tons each. An experiment is conducted in which 30
specimens of each alloy (A and B) are tested and the
results recorded as follows:
x̄A = 49.5, x̄B = 45.5; x̄A − x̄B = 4.
The manufacturers of alloy A are convinced that this
evidence shows conclusively that μA > μB and strongly
supports the claim that their alloy is superior. Man-
ufacturers of alloy B claim that the experiment could
easily have given x̄A − x̄B = 4 even if the two popula-
tion means are equal. In other words, “the results are
inconclusive!”
(a) Make an argument that manufacturers of alloy B
are wrong. Do it by computing
P(X̄A − X̄B ≥ 4 | μA = μB).
(b) Do you think these data strongly support alloy A?
8.35 Consider the situation described in Example 8.4
on page 234. Do these results prompt you to question
the premise that μ = 800 hours? Give a probabilis-
tic result that indicates how rare an event X̄ ≤ 775 is
when μ = 800. On the other hand, how rare would it
be if μ truly were, say, 760 hours?
8.36 Let X1, X2, . . . , Xn be a random sample from a
distribution that can take on only positive values. Use
the Central Limit Theorem to produce an argument
that if n is sufficiently large, then Y = X1X2 · · · Xn
has approximately a lognormal distribution.
8.5 Sampling Distribution of S²
In the preceding section we learned about the sampling distribution of X̄. The
Central Limit Theorem allowed us to make use of the fact that
$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$

tends toward N(0, 1) as the sample size grows large. Sampling distributions of
important statistics allow us to learn information about parameters. Usually, the
parameters are the counterpart to the statistics in question. For example, if an
engineer is interested in the population mean resistance of a certain type of resistor,
the sampling distribution of X̄ will be exploited once the sample information is
gathered. On the other hand, if the variability in resistance is to be studied,
clearly the sampling distribution of S² will be used in learning about the
parametric counterpart, the population variance σ².
If a random sample of size n is drawn from a normal population with mean
μ and variance σ², and the sample variance is computed, we obtain a value of
the statistic S². We shall proceed to consider the distribution of the statistic
(n − 1)S²/σ².
By the addition and subtraction of the sample mean X̄, it is easy to see that

$$\sum_{i=1}^{n}(X_i - \mu)^2 = \sum_{i=1}^{n}[(X_i - \bar{X}) + (\bar{X} - \mu)]^2$$

$$= \sum_{i=1}^{n}(X_i - \bar{X})^2 + \sum_{i=1}^{n}(\bar{X} - \mu)^2 + 2(\bar{X} - \mu)\sum_{i=1}^{n}(X_i - \bar{X})$$

$$= \sum_{i=1}^{n}(X_i - \bar{X})^2 + n(\bar{X} - \mu)^2,$$

since the cross term vanishes: $\sum_{i=1}^{n}(X_i - \bar{X}) = 0$.

Dividing each term of the equality by σ² and substituting (n − 1)S² for
$\sum_{i=1}^{n}(X_i - \bar{X})^2$, we obtain

$$\frac{1}{\sigma^2}\sum_{i=1}^{n}(X_i - \mu)^2 = \frac{(n-1)S^2}{\sigma^2} + \frac{(\bar{X} - \mu)^2}{\sigma^2/n}.$$
Now, according to Corollary 7.1 on page 222, we know that

$$\sum_{i=1}^{n}\frac{(X_i - \mu)^2}{\sigma^2}$$

is a chi-squared random variable with n degrees of freedom. We have a chi-squared
random variable with n degrees of freedom partitioned into two components. Note
that in Section 6.7 we showed that a chi-squared distribution is a special case of
a gamma distribution. The second term on the right-hand side is Z², which is
a chi-squared random variable with 1 degree of freedom, and it turns out that
(n − 1)S²/σ² is a chi-squared random variable with n − 1 degrees of freedom. We
formalize this in the following theorem.
Theorem 8.4: If S² is the variance of a random sample of size n taken from a normal population
having the variance σ², then the statistic

$$\chi^2 = \frac{(n-1)S^2}{\sigma^2} = \sum_{i=1}^{n}\frac{(X_i - \bar{X})^2}{\sigma^2}$$

has a chi-squared distribution with v = n − 1 degrees of freedom.
The values of the random variable χ² are calculated from each sample by the
formula

$$\chi^2 = \frac{(n-1)s^2}{\sigma^2}.$$
The probability that a random sample produces a χ² value greater than some
specified value is equal to the area under the curve to the right of this value. It is
customary to let χ²α represent the χ² value above which we find an area of α. This
is illustrated by the shaded region in Figure 8.7.
Figure 8.7: The chi-squared distribution, showing the area α to the right of χ²α.
Table A.5 gives values of χ²α for various values of α and v. The areas, α, are
the column headings; the degrees of freedom, v, are given in the left column; and
the table entries are the χ² values. Hence, the χ² value with 7 degrees of freedom,
leaving an area of 0.05 to the right, is χ²0.05 = 14.067. Owing to lack of symmetry,
we must also use the tables to find χ²0.95 = 2.167 for v = 7.
Exactly 95% of a chi-squared distribution lies between χ²0.975 and χ²0.025. A χ²
value falling to the right of χ²0.025 is not likely to occur unless our assumed value of
σ² is too small. Similarly, a χ² value falling to the left of χ²0.975 is unlikely unless
our assumed value of σ² is too large. In other words, it is possible to have a χ²
value to the left of χ²0.975 or to the right of χ²0.025 when σ² is correct, but if this
should occur, it is more probable that the assumed value of σ² is in error.
Example 8.7: A manufacturer of car batteries guarantees that the batteries will last, on average,
3 years with a standard deviation of 1 year. If five of these batteries have lifetimes
of 1.9, 2.4, 3.0, 3.5, and 4.2 years, should the manufacturer still be convinced that
the batteries have a standard deviation of 1 year? Assume that the battery lifetime
follows a normal distribution.
Solution: We first find the sample variance using Theorem 8.1,

$$s^2 = \frac{(5)(48.26) - (15)^2}{(5)(4)} = 0.815.$$

Then

$$\chi^2 = \frac{(4)(0.815)}{1} = 3.26$$

is a value from a chi-squared distribution with 4 degrees of freedom. Since 95%
of the χ² values with 4 degrees of freedom fall between 0.484 and 11.143, the
computed value with σ² = 1 is reasonable, and therefore the manufacturer has no
reason to suspect that the standard deviation is other than 1 year.
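Example 8.7 can be reproduced with a few lines of Python. This is a supplementary sketch using scipy.stats, with the sample variance computed from its definition rather than the shortcut formula of Theorem 8.1:

```python
from scipy.stats import chi2

lifetimes = [1.9, 2.4, 3.0, 3.5, 4.2]        # battery lifetimes in years
n = len(lifetimes)
xbar = sum(lifetimes) / n
s2 = sum((x - xbar) ** 2 for x in lifetimes) / (n - 1)

chi_sq = (n - 1) * s2 / 1.0                  # sigma^2 = 1 under the guarantee
lo, hi = chi2.ppf(0.025, n - 1), chi2.ppf(0.975, n - 1)
print(round(s2, 3), round(chi_sq, 2))        # 0.815 3.26
print(round(lo, 3), round(hi, 3))            # 0.484 11.143
```

Since 3.26 falls inside the central 95% interval (0.484, 11.143), the conclusion of the example follows.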
Degrees of Freedom as a Measure of Sample Information
Recall from Corollary 7.1 in Section 7.3 that

$$\sum_{i=1}^{n}\frac{(X_i - \mu)^2}{\sigma^2}$$

has a χ²-distribution with n degrees of freedom. Note also Theorem 8.4, which
indicates that the random variable

$$\frac{(n-1)S^2}{\sigma^2} = \sum_{i=1}^{n}\frac{(X_i - \bar{X})^2}{\sigma^2}$$

has a χ²-distribution with n − 1 degrees of freedom. The reader may also recall that
the term degrees of freedom, used in this identical context, is discussed in Chapter
1.

As we indicated earlier, the proof of Theorem 8.4 will not be given. However,
the reader can view Theorem 8.4 as indicating that when μ is not known and one
considers the distribution of

$$\sum_{i=1}^{n}\frac{(X_i - \bar{X})^2}{\sigma^2},$$

there is 1 less degree of freedom, or a degree of freedom is lost in the estimation
of μ (i.e., when μ is replaced by x̄). In other words, there are n degrees of free-
dom, or independent pieces of information, in the random sample from the normal
distribution. When the data (the values in the sample) are used to compute the
mean, there is 1 less degree of freedom in the information used to estimate σ².
8.6 t-Distribution
In Section 8.4, we discussed the utility of the Central Limit Theorem. Its applica-
tions revolve around inferences on a population mean or the difference between two
population means. Use of the Central Limit Theorem and the normal distribution
is certainly helpful in this context. However, it was assumed that the population
standard deviation is known. This assumption may not be unreasonable in situ-
ations where the engineer is quite familiar with the system or process. However,
in many experimental scenarios, knowledge of σ is certainly no more reasonable
than knowledge of the population mean μ. Often, in fact, an estimate of σ must
be supplied by the same sample information that produced the sample average x̄.
As a result, a natural statistic to consider to deal with inferences on μ is

$$T = \frac{\bar{X} - \mu}{S/\sqrt{n}},$$
since S is the sample analog to σ. If the sample size is small, the values of S²
fluctuate considerably from sample to sample (see Exercise 8.43 on page 259) and
the distribution of T deviates appreciably from that of a standard normal
distribution.
If the sample size is large enough, say n ≥ 30, the distribution of T does not
differ considerably from the standard normal. However, for n < 30, it is useful to
deal with the exact distribution of T. In developing the sampling distribution of T,
we shall assume that our random sample was selected from a normal population.
We can then write

$$T = \frac{(\bar{X} - \mu)/(\sigma/\sqrt{n})}{\sqrt{S^2/\sigma^2}} = \frac{Z}{\sqrt{V/(n-1)}},$$

where

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$

has the standard normal distribution and

$$V = \frac{(n-1)S^2}{\sigma^2}$$

has a chi-squared distribution with v = n − 1 degrees of freedom. In sampling from
normal populations, we can show that X̄ and S² are independent, and consequently
so are Z and V. The following theorem gives the definition of a random variable
T as a function of Z (standard normal) and χ². For completeness, the density
function of the t-distribution is given.
Theorem 8.5: Let Z be a standard normal random variable and V a chi-squared random variable
with v degrees of freedom. If Z and V are independent, then the distribution of
the random variable T, where

$$T = \frac{Z}{\sqrt{V/v}},$$

is given by the density function

$$h(t) = \frac{\Gamma[(v+1)/2]}{\Gamma(v/2)\sqrt{\pi v}}\left(1 + \frac{t^2}{v}\right)^{-(v+1)/2}, \qquad -\infty < t < \infty.$$

This is known as the t-distribution with v degrees of freedom.
From the foregoing and the theorem above we have the following corollary.
248 Chapter 8 Fundamental Sampling Distributions and Data Descriptions
Corollary 8.1: Let X1, X2, . . . , Xn be independent random variables that are all normal with
mean μ and standard deviation σ. Let

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n}X_i \quad\text{and}\quad S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2.$$

Then the random variable T = (X̄ − μ)/(S/√n) has a t-distribution with v = n − 1
degrees of freedom.
The probability distribution of T was first published in 1908 in a paper written
by W. S. Gosset. At the time, Gosset was employed by an Irish brewery that
prohibited publication of research by members of its staff. To circumvent this re-
striction, he published his work secretly under the name “Student.” Consequently,
the distribution of T is usually called the Student t-distribution or simply the t-
distribution. In deriving the equation of this distribution, Gosset assumed that the
samples were selected from a normal population. Although this would seem to be a
very restrictive assumption, it can be shown that nonnormal populations possess-
ing nearly bell-shaped distributions will still provide values of T that approximate
the t-distribution very closely.
What Does the t-Distribution Look Like?
The distribution of T is similar to the distribution of Z in that they both are
symmetric about a mean of zero. Both distributions are bell shaped, but the t-
distribution is more variable, owing to the fact that the T-values depend on the
fluctuations of two quantities, X̄ and S², whereas the Z-values depend only on the
changes in X̄ from sample to sample. The distribution of T differs from that of Z
in that the variance of T depends on the sample size n and is always greater than
1. Only when the sample size n → ∞ will the two distributions become the same.
In Figure 8.8, we show the relationship between a standard normal distribution
(v = ∞) and t-distributions with 2 and 5 degrees of freedom. The percentage
points of the t-distribution are given in Table A.4.
Figure 8.8: The t-distribution curves for v = 2, 5, and ∞.

Figure 8.9: Symmetry property (about 0) of the t-distribution.
It is customary to let tα represent the t-value above which we find an area equal
to α. Hence, the t-value with 10 degrees of freedom leaving an area of 0.025 to
the right is t = 2.228. Since the t-distribution is symmetric about a mean of zero,
we have t1−α = −tα; that is, the t-value leaving an area of 1 − α to the right and
therefore an area of α to the left is equal to the negative t-value that leaves an area
of α in the right tail of the distribution (see Figure 8.9). That is, t0.95 = −t0.05,
t0.99 = −t0.01, and so forth.
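The entries of Table A.4 and the symmetry relation t1−α = −tα can be confirmed with scipy.stats; this is a supplementary sketch, not part of the text:

```python
from scipy.stats import t

t_025 = t.ppf(1 - 0.025, 10)      # t-value with area 0.025 to the right, v = 10
print(round(t_025, 3))            # 2.228

# Symmetry about zero: t_0.95 = -t_0.05
print(abs(t.ppf(0.05, 10) + t.ppf(0.95, 10)) < 1e-6)  # True
```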
Example 8.8: The t-value with v = 14 degrees of freedom that leaves an area of 0.025 to the left,
and therefore an area of 0.975 to the right, is
t0.975 = −t0.025 = −2.145.
Example 8.9: Find P(−t0.025 < T < t0.05).
Solution: Since t0.05 leaves an area of 0.05 to the right, and −t0.025 leaves an area of 0.025
to the left, we find a total area of

1 − 0.05 − 0.025 = 0.925

between −t0.025 and t0.05. Hence

P(−t0.025 < T < t0.05) = 0.925.
Example 8.10: Find k such that P(k < T < −1.761) = 0.045 for a random sample of size 15
selected from a normal distribution and T = (X̄ − μ)/(S/√n).
Figure 8.10: The t-values for Example 8.10.
Solution: From Table A.4 we note that 1.761 corresponds to t0.05 when v = 14. Therefore,
−t0.05 = −1.761. Since k in the original probability statement is to the left of
−t0.05 = −1.761, let k = −tα. Then, from Figure 8.10, we have
0.045 = 0.05 − α, or α = 0.005.
Hence, from Table A.4 with v = 14,
k = −t0.005 = −2.977 and P(−2.977 < T < −1.761) = 0.045.
Exactly 95% of the values of a t-distribution with v = n − 1 degrees of freedom
lie between −t0.025 and t0.025. Of course, there are other t-values that contain 95%
of the distribution, such as −t0.02 and t0.03, but these values do not appear in Table
A.4, and furthermore, the shortest possible interval is obtained by choosing t-values
that leave exactly the same area in the two tails of our distribution. A t-value that
falls below −t0.025 or above t0.025 would tend to make us believe either that a very
rare event has taken place or that our assumption about μ is in error. Should this
happen, we shall make the decision that our assumed value of μ is in error.
In fact, a t-value falling below −t0.01 or above t0.01 would provide even stronger
evidence that our assumed value of μ is quite unlikely. General procedures for
testing claims concerning the value of the parameter μ will be treated in Chapter
10. A preliminary look into the foundation of these procedures is illustrated by the
following example.
Example 8.11: A chemical engineer claims that the population mean yield of a certain batch
process is 500 grams per milliliter of raw material. To check this claim he samples
25 batches each month. If the computed t-value falls between −t0.05 and t0.05, he
is satisfied with this claim. What conclusion should he draw from a sample that
has a mean x̄ = 518 grams per milliliter and a sample standard deviation s = 40
grams? Assume the distribution of yields to be approximately normal.
Solution: From Table A.4 we find that t0.05 = 1.711 for 24 degrees of freedom. Therefore, the
engineer can be satisfied with his claim if a sample of 25 batches yields a t-value
between −1.711 and 1.711. If μ = 500, then

$$t = \frac{518 - 500}{40/\sqrt{25}} = 2.25,$$

a value well above 1.711. The probability of obtaining a t-value, with v = 24, equal
to or greater than 2.25 is approximately 0.02. If μ > 500, the value of t computed
from the sample is more reasonable. Hence, the engineer is likely to conclude that
the process produces a better product than he thought.
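The computation in Example 8.11, including the tail probability quoted as approximately 0.02, can be checked with scipy.stats; this is a supplementary sketch, not part of the text:

```python
from scipy.stats import t

n, xbar, s, mu0 = 25, 518.0, 40.0, 500.0
t_stat = (xbar - mu0) / (s / n ** 0.5)   # = 2.25
p = t.sf(t_stat, n - 1)                  # P(T >= 2.25) with v = 24

print(round(t_stat, 2), round(p, 3))     # p is roughly 0.02
```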
What Is the t-Distribution Used For?
The t-distribution is used extensively in problems that deal with inference about
the population mean (as illustrated in Example 8.11) or in problems that involve
comparative samples (i.e., in cases where one is trying to determine if means from
two samples are significantly different). The use of the distribution will be extended
in Chapters 9, 10, 11, and 12. The reader should note that use of the t-distribution
for the statistic

$$T = \frac{\bar{X} - \mu}{S/\sqrt{n}}$$
requires that X1, X2, . . . , Xn be normal. The use of the t-distribution and the
sample size consideration do not relate to the Central Limit Theorem. The use
of the standard normal distribution rather than T for n ≥ 30 merely implies that
S is a sufficiently good estimator of σ in this case. In chapters that follow the
t-distribution finds extensive usage.
8.7 F-Distribution
We have motivated the t-distribution in part by its application to problems in which
there is comparative sampling (i.e., a comparison between two sample means).
For example, some of our examples in future chapters will take a more formal
approach: a chemical engineer collects data on two catalysts, a biologist collects
data on two growth media, or a chemist gathers data on two methods of coating
material to inhibit corrosion. While it is of interest to let sample information shed light
on two population means, it is often the case that a comparison of variability is
equally important, if not more so. The F-distribution finds enormous application
in comparing sample variances. Applications of the F-distribution are found in
problems involving two or more samples.
The statistic F is defined to be the ratio of two independent chi-squared random
variables, each divided by its number of degrees of freedom. Hence, we can write
$$F = \frac{U/v_1}{V/v_2},$$
where U and V are independent random variables having chi-squared distributions
with v1 and v2 degrees of freedom, respectively. We shall now state the sampling
distribution of F.
Theorem 8.6: Let U and V be two independent random variables having chi-squared distributions
with v1 and v2 degrees of freedom, respectively. Then the distribution of the
random variable F = (U/v1)/(V/v2) is given by the density function

$$h(f) = \begin{cases} \dfrac{\Gamma[(v_1+v_2)/2]\,(v_1/v_2)^{v_1/2}}{\Gamma(v_1/2)\,\Gamma(v_2/2)}\,\dfrac{f^{v_1/2-1}}{(1+v_1 f/v_2)^{(v_1+v_2)/2}}, & f > 0,\\ 0, & f \leq 0. \end{cases}$$

This is known as the F-distribution with v1 and v2 degrees of freedom (d.f.).
We will make considerable use of the random variable F in future chapters. How-
ever, the density function will not be used and is given only for completeness. The
curve of the F-distribution depends not only on the two parameters v1 and v2 but
also on the order in which we state them. Once these two values are given, we can
identify the curve. Typical F-distributions are shown in Figure 8.11.
Let fα be the f-value above which we find an area equal to α. This is illustrated
by the shaded region in Figure 8.12. Table A.6 gives values of fα only for α = 0.05
and α = 0.01 for various combinations of the degrees of freedom v1 and v2. Hence,
the f-value with 6 and 10 degrees of freedom, leaving an area of 0.05 to the right,
is f0.05 = 3.22. By means of the following theorem, Table A.6 can also be used to
find values of f0.95 and f0.99. The proof is left for the reader.
Figure 8.11: Typical F-distributions for d.f. = (6, 10) and (10, 30).

Figure 8.12: Illustration of the fα for the F-distribution.
Theorem 8.7: Writing fα(v1, v2) for fα with v1 and v2 degrees of freedom, we obtain

$$f_{1-\alpha}(v_1, v_2) = \frac{1}{f_{\alpha}(v_2, v_1)}.$$

Thus, the f-value with 6 and 10 degrees of freedom, leaving an area of 0.95 to the
right, is

$$f_{0.95}(6, 10) = \frac{1}{f_{0.05}(10, 6)} = \frac{1}{4.06} = 0.246.$$
The F-Distribution with Two Sample Variances

Suppose that random samples of size n1 and n2 are selected from two normal
populations with variances σ₁² and σ₂², respectively. From Theorem 8.4, we know
that

$$\chi_1^2 = \frac{(n_1-1)S_1^2}{\sigma_1^2} \quad\text{and}\quad \chi_2^2 = \frac{(n_2-1)S_2^2}{\sigma_2^2}$$

are random variables having chi-squared distributions with v1 = n1 − 1 and v2 =
n2 − 1 degrees of freedom. Furthermore, since the samples are selected at random,
we are dealing with independent random variables. Then, using Theorem 8.6 with
χ₁² = U and χ₂² = V, we obtain the following result.
Theorem 8.8: If S₁² and S₂² are the variances of independent random samples of size n1 and n2
taken from normal populations with variances σ₁² and σ₂², respectively, then

$$F = \frac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} = \frac{\sigma_2^2 S_1^2}{\sigma_1^2 S_2^2}$$

has an F-distribution with v1 = n1 − 1 and v2 = n2 − 1 degrees of freedom.
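Theorem 8.8 can be illustrated by simulation (not part of the text). With σ1 = σ2, the ratio S₁²/S₂² for two independent samples of size 10 should behave like an F random variable with (9, 9) degrees of freedom, whose 95th percentile is about 3.18:

```python
import random
from statistics import variance

random.seed(3)
n, reps = 10, 10000
ratios = sorted(
    variance([random.gauss(0, 1) for _ in range(n)])
    / variance([random.gauss(0, 1) for _ in range(n)])
    for _ in range(reps)
)
q95 = ratios[int(0.95 * reps) - 1]   # empirical 95th percentile
print(round(q95, 2))                 # near 3.18
```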
What Is the F-Distribution Used For?
We answered this question, in part, at the beginning of this section. The F-
distribution is used in two-sample situations to draw inferences about the pop-
ulation variances. This involves the application of Theorem 8.8. However, the
F-distribution can also be applied to many other types of problems involving sam-
ple variances. In fact, the F-distribution is called the variance ratio distribution.
As an illustration, consider Case Study 8.2, in which two paints, A and B, were
compared with regard to mean drying time. The normal distribution applies nicely
(assuming that σA and σB are known). However, suppose that there are three types
of paints to compare, say A, B, and C. We wish to determine if the population
means are equivalent. Suppose that important summary information from the
experiment is as follows:
Paint   Sample Mean   Sample Variance   Sample Size
  A      x̄A = 4.5      s²A = 0.20          10
  B      x̄B = 5.5      s²B = 0.14          10
  C      x̄C = 6.5      s²C = 0.11          10
The problem centers around whether or not the sample averages (x̄A, x̄B, x̄C)
are far enough apart. The implication of “far enough apart” is very important.
It would seem reasonable that if the variability between sample averages is larger
than what one would expect by chance, the data do not support the conclusion
that μA = μB = μC. Whether these sample averages could have occurred by
chance depends on the variability within samples, as quantified by s2
A, s2
B, and
s2
C. The notion of the important components of variability is best seen through
some simple graphics. Consider the plot of raw data from samples A, B, and C,
shown in Figure 8.13. These data could easily have generated the above summary
information.
Figure 8.13: Data from three distinct samples, centered at x̄A = 4.5, x̄B = 5.5,
and x̄C = 6.5.
It appears evident that the data came from distributions with different pop-
ulation means, although there is some overlap between the samples. An analysis
that involves all of the data would attempt to determine if the variability between
the sample averages and the variability within the samples could have occurred
jointly if in fact the populations have a common mean. Notice that the key to this
analysis centers around the two following sources of variability.
(1) Variability within samples (between observations in distinct samples)
(2) Variability between samples (between sample averages)
Clearly, if the variability in (1) is considerably larger than that in (2), there will be
considerable overlap in the sample data, a signal that the data could all have come
from a common distribution. An example is found in the data set shown in Figure
8.14. On the other hand, it is very unlikely that data from distributions with a
common mean could have variability between sample averages that is considerably
larger than the variability within samples.
Figure 8.14: Data that easily could have come from the same population.
The sources of variability in (1) and (2) above generate important ratios of
sample variances, and ratios are used in conjunction with the F-distribution. The
general procedure involved is called analysis of variance. It is interesting that
in the paint example described here, we are dealing with inferences on three pop-
ulation means, but two sources of variability are used. We will not supply details
here, but in Chapters 13 through 15 we make extensive use of analysis of variance,
and, of course, the F-distribution plays an important role.
8.8 Quantile and Probability Plots
In Chapter 1 we introduced the reader to empirical distributions. The motivation is
to use creative displays to extract information about properties of a set of data. For
example, stem-and-leaf plots provide the viewer with a look at symmetry and other
properties of the data. In this chapter we deal with samples, which, of course, are
collections of experimental data from which we draw conclusions about populations.
Often the appearance of the sample provides information about the distribution
from which the data are taken. For example, in Chapter 1 we illustrated the general
nature of pairs of samples with point plots that displayed a relative comparison
between central tendency and variability in two samples.
In chapters that follow, we often make the assumption that a distribution is
normal. Graphical information regarding the validity of this assumption can be
retrieved from displays like stem-and-leaf plots and frequency histograms. In ad-
dition, we will introduce the notion of normal probability plots and quantile plots
in this section. These plots are used in studies that have varying degrees of com-
plexity, with the main objective of the plots being to provide a diagnostic check on
the assumption that the data came from a normal distribution.
We can characterize statistical analysis as the process of drawing conclusions
about systems in the presence of system variability. For example, an engineer’s
attempt to learn about a chemical process is often clouded by process variability.
A study involving the number of defective items in a production process is often
made more difficult by variability in the method of manufacture of the items. In
what has preceded, we have learned about samples and statistics that express center
of location and variability in the sample. These statistics provide single measures,
whereas a graphical display adds additional information through a picture.
One type of plot that can be particularly useful in characterizing the nature of
a data set is the quantile plot. As in the case of the box-and-whisker plot (Section
1.6), one can use the basic ideas in the quantile plot to compare samples of data,
where the goal of the analyst is to draw distinctions. Further illustrations of this
type of usage of quantile plots will be given in future chapters where the formal
statistical inference associated with comparing samples is discussed. At that point,
case studies will expose the reader to both the formal inference and the diagnostic
graphics for the same data set.
Quantile Plot
The purpose of the quantile plot is to depict, in sample form, the cumulative
distribution function discussed in Chapter 3.
Definition 8.6: A quantile of a sample, q(f), is a value for which a specified fraction f of the
data values is less than or equal to q(f).
Obviously, a quantile represents an estimate of a characteristic of a population,
or rather, the theoretical distribution. The sample median is q(0.5). The 75th
percentile (upper quartile) is q(0.75) and the lower quartile is q(0.25).
A quantile plot simply plots the data values on the vertical axis against an
empirical assessment of the fraction of observations exceeded by the data value. For
theoretical purposes, this fraction is computed as

$$f_i = \frac{i - \frac{3}{8}}{n + \frac{1}{4}},$$
where i is the order of the observations when they are ranked from low to high. In
other words, if we denote the ranked observations as
y(1) ≤ y(2) ≤ y(3) ≤ · · · ≤ y(n−1) ≤ y(n),
then the quantile plot depicts a plot of y(i) against fi. In Figure 8.15, the quantile
plot is given for the paint can ear data discussed previously.
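Computing the plotting positions is straightforward. The sketch below is not part of the text; the data values are hypothetical, since the paint can ear data are not reproduced here. It pairs each ordered observation y(i) with its fraction fi:

```python
data = [34, 29, 36, 31, 35, 38, 33, 36, 35, 37]   # hypothetical sample
y = sorted(data)                                   # y_(1) <= ... <= y_(n)
n = len(y)
f = [(i - 3 / 8) / (n + 1 / 4) for i in range(1, n + 1)]

# The quantile plot is the set of points (f_i, y_(i)).
for fi, yi in zip(f, y):
    print(round(fi, 3), yi)
```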
Unlike the box-and-whisker plot, the quantile plot actually shows all observa-
tions. All quantiles, including the median and the upper and lower quantile, can
be approximated visually. For example, we readily observe a median of 35 and
an upper quartile of about 36. Relatively large clusters around specific values are
indicated by slopes near zero, while sparse data in certain areas produce steeper
slopes. Figure 8.15 depicts sparsity of data from the values 28 through 30 but
relatively high density at 36 through 38. In Chapters 9 and 10 we pursue quantile
plotting further by illustrating useful ways of comparing distinct samples.
It should be somewhat evident to the reader that detection of whether or not
a data set came from a normal distribution can be an important tool for the data
analyst. As we indicated earlier in this section, we often make the assumption that
all or subsets of observations in a data set are realizations of independent identically
distributed normal random variables. Once again, the diagnostic plot can often
nicely augment (for display purposes) a formal goodness-of-fit test on the data.
Goodness-of-fit tests are discussed in Chapter 10. Readers of a scientific paper or
report tend to find diagnostic information much clearer, less dry, and perhaps less
boring than a formal analysis. In later chapters (Chapters 9 through 13), we focus
Figure 8.15: Quantile plot for paint data (quantile versus fraction f).
again on methods of detecting deviations from normality as an augmentation of
formal statistical inference. Quantile plots are useful in detection of distribution
types. There are also situations in both model building and design of experiments
in which the plots are used to detect important model terms or effects that
are active. In other situations, they are used to determine whether or not the
underlying assumptions made by the scientist or engineer in building the model
are reasonable. Many examples with illustrations will be encountered in Chapters
11, 12, and 13. The following subsection provides a discussion and illustration of
a diagnostic plot called the normal quantile-quantile plot.
Normal Quantile-Quantile Plot
The normal quantile-quantile plot takes advantage of what is known about the
quantiles of the normal distribution. The methodology involves a plot of the em-
pirical quantiles recently discussed against the corresponding quantile of the normal
distribution. Now, the expression for a quantile of an N(μ, σ) random variable is
very complicated. However, a good approximation is given by
$$q_{\mu,\sigma}(f) = \mu + \sigma\{4.91[f^{0.14} - (1-f)^{0.14}]\}.$$

The expression in braces (the multiple of σ) is the approximation for the corre-
sponding quantile for the N(0, 1) random variable, that is,

$$q_{0,1}(f) = 4.91[f^{0.14} - (1-f)^{0.14}].$$
Definition 8.7: The normal quantile-quantile plot is a plot of y(i) (ordered observations)
against q0,1(fi), where fi = (i − 3/8)/(n + 1/4).
A nearly straight-line relationship suggests that the data came from a normal
distribution. The intercept on the vertical axis is an estimate of the population
mean μ and the slope is an estimate of the standard deviation σ. Figure 8.16 shows
a normal quantile-quantile plot for the paint can data.
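As a sketch (not part of the text), the quantities of Definition 8.7 can be computed directly; Python and the helper names `f_i`, `q01`, and `normal_qq_pairs` are illustrative assumptions:

```python
def f_i(i, n):
    """Plotting position for the i-th ordered observation (1-indexed)."""
    return (i - 3/8) / (n + 1/4)

def q01(f):
    """Approximate standard normal quantile: 4.91[f^0.14 - (1-f)^0.14]."""
    return 4.91 * (f ** 0.14 - (1 - f) ** 0.14)

def normal_qq_pairs(data):
    """Return (theoretical quantile, ordered observation) pairs for a Q-Q plot."""
    y = sorted(data)
    n = len(y)
    return [(q01(f_i(i, n)), y[i - 1]) for i in range(1, n + 1)]
```

The pairs can be passed to any plotting tool; a near-linear pattern suggests normality, with intercept near μ and slope near σ. Note that q01(0.975) comes out close to the familiar value 1.96, which is one quick check on the approximation.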
Figure 8.16: Normal quantile-quantile plot for paint data.
Normal Probability Plotting
Notice how the deviation from normality becomes clear from the appearance of the
plot. The asymmetry exhibited in the data results in changes in the slope.
The ideas of probability plotting are manifested in plots other than the normal
quantile-quantile plot discussed here. For example, much attention is given to the
so-called normal probability plot, in which f is plotted against the ordered data
values on special paper and the scale used results in a straight line. In addition,
an alternative plot makes use of the expected values of the ranked observations for
the normal distribution and plots the ranked observations against their expected
value, under the assumption of data from N(μ, σ). Once again, the straight line
is the graphical yardstick used. We continue to suggest that the foundation in
graphical analytical methods developed in this section will aid in understanding
formal methods of distinguishing between distinct samples of data.
Example 8.12: Consider the data in Exercise 10.41 on page 358 in Chapter 10. In a study “Nu-
trient Retention and Macro Invertebrate Community Response to Sewage Stress
in a Stream Ecosystem,” conducted in the Department of Zoology at the Virginia
Polytechnic Institute and State University, data were collected on density measure-
ments (number of organisms per square meter) at two different collecting stations.
Details are given in Chapter 10 regarding analytical methods of comparing samples
to determine if both are from the same N(μ, σ) distribution. The data are given
in Table 8.1.
Table 8.1: Data for Example 8.12
Number of Organisms per Square Meter
Station 1: 5,030 13,700 10,730 11,400 860 2,200 4,250 15,040
           4,980 11,910 8,130 26,850 17,660 22,800 1,130 1,690
Station 2: 2,800 4,670 6,890 7,720 7,030 7,330
           2,810 1,330 3,320 1,230 2,130 2,190
Construct a normal quantile-quantile plot and draw conclusions regarding whether
or not it is reasonable to assume that the two samples are from the same n(x; μ, σ)
distribution.
Figure 8.17: Normal quantile-quantile plot for density data of Example 8.12.
Solution: Figure 8.17 shows the normal quantile-quantile plot for the density measurements.
The plot is far from a single straight line. In fact, the data from station 1 reflect
a few values in the lower tail of the distribution and several in the upper tail.
The “clustering” of observations would make it seem unlikely that the two samples
came from a common N(μ, σ) distribution.
Although we have concentrated our development and illustration on probability
plotting for the normal distribution, we could focus on any distribution. We would
merely need to compute quantities analytically for the theoretical distribution in
question.
Exercises
8.37 For a chi-squared distribution, find
(a) χ²0.025 when v = 15;
(b) χ²0.01 when v = 7;
(c) χ²0.05 when v = 24.
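As a sketch, critical values like those asked for in Exercise 8.37 can be reproduced in software rather than read from Table A.5; the use of scipy here is an assumption (any chi-squared table or package gives the same numbers):

```python
from scipy.stats import chi2

def chi2_upper(alpha, v):
    """Critical value x with P(X^2 > x) = alpha, for v degrees of freedom."""
    return chi2.isf(alpha, df=v)
```

For instance, chi2_upper(0.025, 15) returns approximately 27.488, the tabled value of χ²0.025 for v = 15.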
8.38 For a chi-squared distribution, find
(a) χ²0.005 when v = 5;
(b) χ²0.05 when v = 19;
(c) χ²0.01 when v = 12.
8.39 For a chi-squared distribution, find χ²α such that
(a) P(X² > χ²α) = 0.99 when v = 4;
(b) P(X² > χ²α) = 0.025 when v = 19;
(c) P(37.652 < X² < χ²α) = 0.045 when v = 25.
8.40 For a chi-squared distribution, find χ²α such that
(a) P(X² > χ²α) = 0.01 when v = 21;
(b) P(X² < χ²α) = 0.95 when v = 6;
(c) P(χ²α < X² < 23.209) = 0.015 when v = 10.
8.41 Assume the sample variances to be continuous
measurements. Find the probability that a random
sample of 25 observations, from a normal population
with variance σ² = 6, will have a sample variance S²
(a) greater than 9.1;
(b) between 3.462 and 10.745.
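The key fact behind Exercise 8.41 is that (n − 1)S²/σ² has a chi-squared distribution with n − 1 degrees of freedom when sampling from a normal population. A sketch of part (a), with scipy assumed available and the helper name an illustrative choice:

```python
from scipy.stats import chi2

def prob_s2_greater(n, sigma2, c):
    """P(S^2 > c) for a random sample of size n from a normal population
    with variance sigma2, via (n-1)S^2/sigma2 ~ chi-squared(n-1)."""
    return chi2.sf((n - 1) * c / sigma2, df=n - 1)
```

For part (a), prob_s2_greater(25, 6, 9.1) evaluates (n − 1)c/σ² = 24(9.1)/6 = 36.4, which is essentially the χ²0.05 point for 24 degrees of freedom, so the probability is about 0.05.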
8.42 The scores on a placement test given to college
freshmen for the past five years are approximately nor-
mally distributed with a mean μ = 74 and a variance
σ² = 8. Would you still consider σ² = 8 to be a valid
value of the variance if a random sample of 20 students
who take the placement test this year obtain a value of
s² = 20?
8.43 Show that the variance of S² for random sam-
ples of size n from a normal population decreases as
n becomes large. [Hint: First find the variance of
(n − 1)S²/σ².]
8.44 (a) Find t0.025 when v = 14.
(b) Find −t0.10 when v = 10.
(c) Find t0.995 when v = 7.
8.45 (a) Find P(T < 2.365) when v = 7.
(b) Find P(T > 1.318) when v = 24.
(c) Find P(−1.356 < T < 2.179) when v = 12.
(d) Find P(T > −2.567) when v = 17.
8.46 (a) Find P(−t0.005 < T < t0.01) for v = 20.
(b) Find P(T > −t0.025).
8.47 Given a random sample of size 24 from a normal
distribution, find k such that
(a) P(−2.069 < T < k) = 0.965;
(b) P(k < T < 2.807) = 0.095;
(c) P(−k < T < k) = 0.90.
8.48 A manufacturing firm claims that the batteries
used in their electronic games will last an average of
30 hours. To maintain this average, 16 batteries are
tested each month. If the computed t-value falls be-
tween −t0.025 and t0.025, the firm is satisfied with its
claim. What conclusion should the firm draw from a
sample that has a mean of x̄ = 27.5 hours and a stan-
dard deviation of s = 5 hours? Assume the distribution
of battery lives to be approximately normal.
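The decision rule in Exercise 8.48 can be sketched in a few lines. Python and scipy are assumptions here (Table A.4 gives the same critical value), and the function name is illustrative:

```python
from scipy.stats import t

def within_t_bounds(xbar, mu0, s, n, alpha=0.05):
    """Compute the t-statistic and check whether it falls between
    -t_{alpha/2} and t_{alpha/2} with n-1 degrees of freedom."""
    t_val = (xbar - mu0) / (s / n ** 0.5)
    crit = t.ppf(1 - alpha / 2, df=n - 1)
    return t_val, crit, -crit < t_val < crit
```

With x̄ = 27.5, s = 5, and n = 16, the statistic is t = (27.5 − 30)/(5/4) = −2.0, while t0.025 with 15 degrees of freedom is about 2.131; since −2.0 lies inside the bounds, the firm remains satisfied with its claim.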
8.49 A normal population with unknown variance has
a mean of 20. Is one likely to obtain a random sample
of size 9 from this population with a mean of 24 and
a standard deviation of 4.1? If not, what conclusion
would you draw?
8.50 A maker of a certain brand of low-fat cereal bars
claims that the average saturated fat content is 0.5
gram. In a random sample of 8 cereal bars of this
brand, the saturated fat content was 0.6, 0.7, 0.7, 0.3,
0.4, 0.5, 0.4, and 0.2. Would you agree with the claim?
Assume a normal distribution.
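A sketch of the computation behind Exercise 8.50, using only the standard library (the variable names are illustrative):

```python
from statistics import mean, stdev

fat = [0.6, 0.7, 0.7, 0.3, 0.4, 0.5, 0.4, 0.2]
n = len(fat)
xbar, s = mean(fat), stdev(fat)        # 0.475 and about 0.183
t_val = (xbar - 0.5) / (s / n ** 0.5)  # t-statistic for the claimed mean 0.5
```

The statistic comes out near −0.39, well inside ±t0.025 = ±2.365 for 7 degrees of freedom, so the sample gives no reason to dispute the claim.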
8.51 For an F-distribution, find
(a) f0.05 with v1 = 7 and v2 = 15;
(b) f0.05 with v1 = 15 and v2 = 7;
(c) f0.01 with v1 = 24 and v2 = 19;
(d) f0.95 with v1 = 19 and v2 = 24;
(e) f0.99 with v1 = 28 and v2 = 12.
8.52 Pull-strength tests on 10 soldered leads for a
semiconductor device yield the following results, in
pounds of force required to rupture the bond:
19.8 12.7 13.2 16.9 10.6
18.8 11.1 14.3 17.0 12.5
Another set of 8 leads was tested after encapsulation
to determine whether the pull strength had been in-
creased by encapsulation of the device, with the fol-
lowing results:
24.9 22.8 23.6 22.1 20.4 21.6 21.8 22.5
Comment on the evidence available concerning equal-
ity of the two population variances.
8.53 Consider the following measurements of the
heat-producing capacity of the coal produced by two
mines (in millions of calories per ton):
Mine 1: 8260 8130 8350 8070 8340
Mine 2: 7950 7890 7900 8140 7920 7840
Can it be concluded that the two population variances
are equal?
8.54 Construct a quantile plot of these data, which
represent the lifetimes, in hours, of fifty 40-watt, 110-
volt internally frosted incandescent lamps taken from
forced life tests:
919 1196 785 1126 936 918
1156 920 948 1067 1092 1162
1170 929 950 905 972 1035
1045 855 1195 1195 1340 1122
938 970 1237 956 1102 1157
978 832 1009 1157 1151 1009
765 958 902 1022 1333 811
1217 1085 896 958 1311 1037
702 923
8.55 Construct a normal quantile-quantile plot of
these data, which represent the diameters of 36 rivet
heads in 1/100 of an inch:
6.72 6.77 6.82 6.70 6.78 6.70 6.62
6.75 6.66 6.66 6.64 6.76 6.73 6.80
6.72 6.76 6.76 6.68 6.66 6.62 6.72
6.76 6.70 6.78 6.76 6.67 6.70 6.72
6.74 6.81 6.79 6.78 6.66 6.76 6.76
6.72
Review Exercises
8.56 Consider the data displayed in Exercise 1.20 on
page 31. Construct a box-and-whisker plot and com-
ment on the nature of the sample. Compute the sample
mean and sample standard deviation.
8.57 If X1, X2, . . . , Xn are independent random vari-
ables having identical exponential distributions with
parameter θ, show that the density function of the ran-
dom variable Y = X1+X2+· · ·+Xn is that of a gamma
distribution with parameters α = n and β = θ.
8.58 In testing for carbon monoxide in a certain
brand of cigarette, the data, in milligrams per
cigarette, were coded by subtracting 12 from each ob-
servation. Use the results of Exercise 8.14 on page 231
to find the standard deviation for the carbon monox-
ide content of a random sample of 15 cigarettes of this
brand if the coded measurements are 3.8, −0.9, 5.4,
4.5, 5.2, 5.6, 2.7, −0.1, −0.3, −1.7, 5.7, 3.3, 4.4, −0.5,
and 1.9.
8.59 If S₁² and S₂² represent the variances of indepen-
dent random samples of size n1 = 8 and n2 = 12,
taken from normal populations with equal variances,
find P(S₁²/S₂² < 4.89).
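Under equal population variances, the ratio S₁²/S₂² in Exercise 8.59 follows an F-distribution with (n1 − 1, n2 − 1) = (7, 11) degrees of freedom. A sketch with scipy (an assumption; Table A.6 works equally well):

```python
from scipy.stats import f

# P(S1^2/S2^2 < 4.89) when both samples come from normal populations
# with the same variance: F with dfn = 7, dfd = 11 degrees of freedom.
p = f.cdf(4.89, dfn=8 - 1, dfd=12 - 1)
```

Since 4.89 is the tabled f0.01 point with (7, 11) degrees of freedom, p is about 0.99.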
8.60 A random sample of 5 bank presidents indi-
cated annual salaries of $395,000, $521,000, $483,000,
$479,000, and $510,000. Find the variance of this set.
8.61 If the number of hurricanes that hit a certain
area of the eastern United States per year is a random
variable having a Poisson distribution with μ = 6, find
the probability that this area will be hit by
(a) exactly 15 hurricanes in 2 years;
(b) at most 9 hurricanes in 2 years.
8.62 A taxi company tests a random sample of 10
steel-belted radial tires of a certain brand and records
the following tread wear: 48,000, 53,000, 45,000,
61,000, 59,000, 56,000, 63,000, 49,000, 53,000, and
54,000 kilometers. Use the results of Exercise 8.14 on
page 231 to find the standard deviation of this set of
data by first dividing each observation by 1000 and
then subtracting 55.
8.63 Consider the data of Exercise 1.19 on page 31.
Construct a box-and-whisker plot. Comment. Com-
pute the sample mean and sample standard deviation.
8.64 If S₁² and S₂² represent the variances of indepen-
dent random samples of size n1 = 25 and n2 = 31,
taken from normal populations with variances σ₁² = 10
and σ₂² = 15, respectively, find P(S₁²/S₂² > 1.26).
8.65 Consider Example 1.5 on page 25. Comment on
any outliers.
8.66 Consider Review Exercise 8.56. Comment on
any outliers in the data.
8.67 The breaking strength X of a certain rivet used
in a machine engine has a mean 5000 psi and stan-
dard deviation 400 psi. A random sample of 36 rivets
is taken. Consider the distribution of X̄, the sample
mean breaking strength.
(a) What is the probability that the sample mean falls
between 4800 psi and 5200 psi?
(b) What sample size n would be necessary in order to have
P(4900  X̄  5100) = 0.99?
8.68 Consider the situation of Review Exercise 8.62.
If the population from which the sample was taken has
population mean μ = 53, 000 kilometers, does the sam-
ple information here seem to support that claim? In
your answer, compute
t = (x̄ − 53,000)/(s/√10)
and determine from Table A.4 (with 9 d.f.) whether
the computed t-value is reasonable or appears to be a
rare event.
8.69 Two distinct solid fuel propellants, type A and
type B, are being considered for a space program activ-
ity. Burning rates of the propellant are crucial. Ran-
dom samples of 20 specimens of the two propellants
are taken with sample means 20.5 cm/sec for propel-
lant A and 24.50 cm/sec for propellant B. It is gen-
erally assumed that the variability in burning rate is
roughly the same for the two propellants and is given
by a population standard deviation of 5 cm/sec. As-
sume that the burning rates for each propellant are
approximately normal and hence make use of the Cen-
tral Limit Theorem. Nothing is known about the two
population mean burning rates, and it is hoped that
this experiment might shed some light on them.
(a) If, indeed, μA = μB, what is P(X̄B − X̄A ≥ 4.0)?
(b) Use your answer in (a) to shed some light on the
proposition that μA = μB.
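The probability in part (a) of Exercise 8.69 follows from the Central Limit Theorem: if μA = μB, then X̄B − X̄A is approximately normal with mean 0 and variance σ²/nA + σ²/nB. A standard-library sketch:

```python
from math import sqrt
from statistics import NormalDist

sigma, n = 5.0, 20
# Standard deviation of Xbar_B - Xbar_A under mu_A = mu_B.
sd_diff = sqrt(sigma**2 / n + sigma**2 / n)   # about 1.581
# P(Xbar_B - Xbar_A >= 4.0)
p = 1 - NormalDist().cdf(4.0 / sd_diff)
```

The probability is about 0.0057; observing a difference of 4.0 cm/sec would be quite rare if the mean burning rates were equal, which bears on part (b).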
8.70 The concentration of an active ingredient in the
output of a chemical reaction is strongly influenced by
the catalyst that is used in the reaction. It is felt that
when catalyst A is used, the population mean concen-
tration exceeds 65%. The standard deviation is known
to be σ = 5%. A sample of outputs from 30 inde-
pendent experiments gives the average concentration
of x̄A = 64.5%.
(a) Does this sample information with an average con-
centration of x̄A = 64.5% provide disturbing in-
formation that perhaps μA is not 65%, but less
than 65%? Support your answer with a probability
statement.
(b) Suppose a similar experiment is done with the use
of another catalyst, catalyst B. The standard devi-
ation σ is still assumed to be 5% and x̄B turns out
to be 70%. Comment on whether or not the sample
information on catalyst B strongly suggests that
μB is truly greater than μA. Support your answer
by computing
P(X̄B − X̄A ≥ 5.5 | μB = μA).
(c) Under the condition that μA = μB = 65%, give the
approximate distribution of the following quantities
(with mean and variance of each). Make use of the
Central Limit Theorem.
i)X̄B;
ii)X̄A − X̄B;
iii)X̄A−X̄B
σ
√
2/30
.
8.71 From the information in Review Exercise 8.70,
compute (assuming μB = 65%) P(X̄B ≥ 70).
8.72 Given a normal random variable X with mean
20 and variance 9, and a random sample of size n taken
from the distribution, what sample size n is necessary
in order that
P(19.9 ≤ X̄ ≤ 20.1) = 0.95?
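Exercise 8.72 asks for the n that makes the margin of error zα/2 σ/√n equal to 0.1 at 95% confidence. A standard-library sketch (the function name is an illustrative assumption):

```python
from math import ceil
from statistics import NormalDist

def sample_size(sigma, margin, conf=0.95):
    """Smallest n with z_{alpha/2} * sigma / sqrt(n) <= margin."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return ceil((z * sigma / margin) ** 2)
```

Here sample_size(3, 0.1) gives n = 3458. The same helper answers Exercise 8.73, where σ = 1, the margin is 0.05, and the confidence is 0.99.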
8.73 In Chapter 9, the concept of parameter esti-
mation will be discussed at length. Suppose X is a
random variable with mean μ and variance σ² = 1.0.
Suppose also that a random sample of size n is to be
taken and x̄ is to be used as an estimate of μ. When
the data are taken and the sample mean is measured,
we wish it to be within 0.05 unit of the true mean with
probability 0.99. That is, we want there to be a good
chance that the computed x̄ from the sample is “very
close” to the population mean (wherever it is!), so we
wish
P(|X̄ − μ| < 0.05) = 0.99.
What sample size is required?
8.74 Suppose a filling machine is used to fill cartons
with a liquid product. The specification that is strictly
enforced for the filling machine is 9 ± 1.5 oz. If any car-
ton is produced with weight outside these bounds, it is
considered by the supplier to be defective. It is hoped
that at least 99% of cartons will meet these specifica-
tions. With the conditions μ = 9 and σ = 1, what
proportion of cartons from the process are defective?
If changes are made to reduce variability, what must
σ be reduced to in order to meet specifications with
probability 0.99? Assume a normal distribution for
the weight.
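Both questions in Exercise 8.74 are normal-tail calculations: the defective proportion is P(|X − 9| > 1.5) with σ = 1, and the target σ makes 1.5/σ equal to the z-value that leaves 0.005 in each tail. A standard-library sketch:

```python
from statistics import NormalDist

Z = NormalDist()
# Proportion of cartons outside 9 +/- 1.5 oz when mu = 9, sigma = 1.
p_defective = 2 * (1 - Z.cdf(1.5 / 1.0))   # about 0.1336
# Sigma needed so that 99% of cartons fall inside the specification.
sigma_needed = 1.5 / Z.inv_cdf(0.995)      # about 0.58
```

So about 13.4% of cartons are currently defective, and σ must be reduced to roughly 0.58 oz to meet the 0.99 requirement.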
8.75 Consider the situation in Review Exercise 8.74.
Suppose a considerable effort is conducted to “tighten”
the variability in the system. Following the effort, a
random sample of size 40 is taken from the new assem-
bly line and the sample variance is s² = 0.188 ounces².
Do we have strong numerical evidence that σ² has been
reduced below 1.0? Consider the probability
P(S² ≤ 0.188 | σ² = 1.0),
and give your conclusion.
8.76 Group Project: The class should be divided
into groups of four people. The four students in each
group should go to the college gym or a local fit-
ness center. The students should ask each person who
comes through the door his or her height in inches.
Each group will then divide the height data by gender
and work together to answer the following questions.
(a) Construct a normal quantile-quantile plot of the
data. Based on the plot, do the data appear to
follow a normal distribution?
(b) Use the estimated sample variance as the true vari-
ance for each gender. Assume that the popula-
tion mean height for male students is actually three
inches larger than that of female students. What is
the probability that the average height of the male
students will be 4 inches larger than that of the
female students in your sample?
(c) What factors could render these results misleading?
8.9 Potential Misconceptions and Hazards;
Relationship to Material in Other Chapters
The Central Limit Theorem is one of the most powerful tools in all of statistics, and
even though this chapter is relatively short, it contains a wealth of fundamental
information about tools that will be used throughout the balance of the text.
The notion of a sampling distribution is one of the most important fundamental
concepts in all of statistics, and the student at this point in his or her training
should gain a clear understanding of it before proceeding beyond this chapter. All
chapters that follow will make considerable use of sampling distributions. Suppose
one wants to use the statistic X̄ to draw inferences about the population mean
μ. This will be done by using the observed value x̄ from a single sample of size
n. Then any inference made must be accomplished by taking into account not
just the single value but rather the theoretical structure, or distribution of all x̄
values that could be observed from samples of size n. Thus, the concept of
a sampling distribution comes to the surface. This distribution is the basis for the
Central Limit Theorem. The t, χ², and F-distributions are also used in the context
of sampling distributions. For example, the t-distribution, pictured in Figure 8.8,
represents the structure that occurs if all of the values of (x̄ − μ)/(s/√n) are formed,
where x̄ and s are taken from samples of size n from a n(x; μ, σ) distribution. Similar
remarks can be made about χ² and F, and the reader should not forget that the
sample information forming the statistics for all of these distributions is the normal.
So it can be said that where there is a t, F, or χ², the source was a sample
from a normal distribution.
The three distributions described above may appear to have been introduced in
a rather self-contained fashion with no indication of what they are about. However,
they will appear in practical problem-solving throughout the balance of the text.
Now, there are three things that one must bear in mind, lest confusion set in
regarding these fundamental sampling distributions:
(i) One cannot use the Central Limit Theorem unless σ is known. When σ is not
known, it should be replaced by s, the sample standard deviation, in order to
use the Central Limit Theorem.
(ii) The T statistic is not a result of the Central Limit Theorem, and x1, x2, . . . , xn
must come from a n(x; μ, σ) distribution in order for (x̄ − μ)/(s/√n) to have a
t-distribution; s is, of course, merely an estimate of σ.
(iii) While the notion of degrees of freedom is new at this point, the concept
should be very intuitive, since it is reasonable that the nature of the distri-
bution of S and also t should depend on the amount of information in the
sample x1, x2, . . . , xn.
Chapter 9
One- and Two-Sample
Estimation Problems
9.1 Introduction
In previous chapters, we emphasized sampling properties of the sample mean and
variance. We also emphasized displays of data in various forms. The purpose of
these presentations is to build a foundation that allows us to draw conclusions about
the population parameters from experimental data. For example, the Central Limit
Theorem provides information about the distribution of the sample mean X̄. The
distribution involves the population mean μ. Thus, any conclusions concerning μ
drawn from an observed sample average must depend on knowledge of this sampling
distribution. Similar comments apply to S² and σ². Clearly, any conclusions we
draw about the variance of a normal distribution will likely involve the sampling
distribution of S².
In this chapter, we begin by formally outlining the purpose of statistical in-
ference. We follow this by discussing the problem of estimation of population
parameters. We confine our formal developments of specific estimation proce-
dures to problems involving one and two samples.
9.2 Statistical Inference
In Chapter 1, we discussed the general philosophy of formal statistical inference.
Statistical inference consists of those methods by which one makes inferences or
generalizations about a population. The trend today is to distinguish between the
classical method of estimating a population parameter, whereby inferences are
based strictly on information obtained from a random sample selected from the
population, and the Bayesian method, which utilizes prior subjective knowledge
about the probability distribution of the unknown parameters in conjunction with
the information provided by the sample data. Throughout most of this chapter,
we shall use classical methods to estimate unknown population parameters such as
the mean, the proportion, and the variance by computing statistics from random
samples and applying the theory of sampling distributions, much of which was
covered in Chapter 8. Bayesian estimation will be discussed in Chapter 18.
Statistical inference may be divided into two major areas: estimation and
tests of hypotheses. We treat these two areas separately, dealing with theory
and applications of estimation in this chapter and hypothesis testing in Chapter
10. To distinguish clearly between the two areas, consider the following examples.
A candidate for public office may wish to estimate the true proportion of voters
favoring him by obtaining opinions from a random sample of 100 eligible voters.
The fraction of voters in the sample favoring the candidate could be used as an
estimate of the true proportion in the population of voters. A knowledge of the
sampling distribution of a proportion enables one to establish the degree of accuracy
of such an estimate. This problem falls in the area of estimation.
Now consider the case in which one is interested in finding out whether brand
A floor wax is more scuff-resistant than brand B floor wax. He or she might
hypothesize that brand A is better than brand B and, after proper testing, accept or
reject this hypothesis. In this example, we do not attempt to estimate a parameter,
but instead we try to arrive at a correct decision about a prestated hypothesis.
Once again we are dependent on sampling theory and the use of data to provide
us with some measure of accuracy for our decision.
9.3 Classical Methods of Estimation
A point estimate of some population parameter θ is a single value θ̂ of a statistic
Θ̂. For example, the value x̄ of the statistic X̄, computed from a sample of size n,
is a point estimate of the population parameter μ. Similarly, p̂ = x/n is a point
estimate of the true proportion p for a binomial experiment.
An estimator is not expected to estimate the population parameter without
error. We do not expect X̄ to estimate μ exactly, but we certainly hope that it is
not far off. For a particular sample, it is possible to obtain a closer estimate of μ
by using the sample median X̃ as an estimator. Consider, for instance, a sample
consisting of the values 2, 5, and 11 from a population whose mean is 4 but is
supposedly unknown. We would estimate μ to be x̄ = 6, using the sample mean
as our estimate, or x̃ = 5, using the sample median as our estimate. In this case,
the estimator X̃ produces an estimate closer to the true parameter than does the
estimator X̄. On the other hand, if our random sample contains the values 2, 6,
and 7, then x̄ = 5 and x̃ = 6, so X̄ is the better estimator. Not knowing the true
value of μ, we must decide in advance whether to use X̄ or X̃ as our estimator.
Unbiased Estimator
What are the desirable properties of a “good” decision function that would influ-
ence us to choose one estimator rather than another? Let Θ̂ be an estimator whose
value θ̂ is a point estimate of some unknown population parameter θ. Certainly, we
would like the sampling distribution of Θ̂ to have a mean equal to the parameter
estimated. An estimator possessing this property is said to be unbiased.
Definition 9.1: A statistic Θ̂ is said to be an unbiased estimator of the parameter θ if
μΘ̂ = E(Θ̂) = θ.
Example 9.1: Show that S² is an unbiased estimator of the parameter σ².
Solution: In Section 8.5 on page 244, we showed that
Σ_{i=1}^{n} (Xi − X̄)² = Σ_{i=1}^{n} (Xi − μ)² − n(X̄ − μ)².
Now
E(S²) = E[(1/(n − 1)) Σ_{i=1}^{n} (Xi − X̄)²]
      = (1/(n − 1)) [Σ_{i=1}^{n} E(Xi − μ)² − nE(X̄ − μ)²]
      = (1/(n − 1)) [Σ_{i=1}^{n} σ²_{Xi} − nσ²_{X̄}].
However,
σ²_{Xi} = σ², for i = 1, 2, . . . , n, and σ²_{X̄} = σ²/n.
Therefore,
E(S²) = (1/(n − 1)) (nσ² − n · σ²/n) = σ².
Although S² is an unbiased estimator of σ², S, on the other hand, is usually a
biased estimator of σ, with the bias becoming insignificant for large samples. This
example illustrates why we divide by n − 1 rather than n when the variance is
estimated.
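Example 9.1 can also be checked numerically: averaging S² over every possible sample (drawn with replacement) from a small population recovers the population variance exactly. A standard-library sketch, with the population values chosen purely for illustration:

```python
from itertools import product
from statistics import pvariance, variance

population = [1, 4, 7]        # population variance (divide by N) is 6
# Every sample of size 2 drawn with replacement is equally likely,
# so a plain average over all of them is the exact expectation E(S^2).
samples = list(product(population, repeat=2))
avg_s2 = sum(variance(s) for s in samples) / len(samples)
# avg_s2 equals pvariance(population): E(S^2) = sigma^2.
```

Note that `variance` divides by n − 1 (the unbiased S²), while `pvariance` divides by N; the exact agreement disappears if `pvariance` is used on the samples instead, which is the point of the example.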
Variance of a Point Estimator
If Θ̂1 and Θ̂2 are two unbiased estimators of the same population parameter θ, we
want to choose the estimator whose sampling distribution has the smaller variance.
Hence, if σ²_{Θ̂1} < σ²_{Θ̂2}, we say that Θ̂1 is a more efficient estimator of θ than Θ̂2.
Definition 9.2: If we consider all possible unbiased estimators of some parameter θ, the one with
the smallest variance is called the most efficient estimator of θ.
Figure 9.1 illustrates the sampling distributions of three different estimators,
Θ̂1, Θ̂2, and Θ̂3, all estimating θ. It is clear that only Θ̂1 and Θ̂2 are unbiased,
since their distributions are centered at θ. The estimator Θ̂1 has a smaller variance
than Θ̂2 and is therefore more efficient. Hence, our choice for an estimator of θ,
among the three considered, would be Θ̂1.
For normal populations, one can show that both X̄ and X̃ are unbiased estima-
tors of the population mean μ, but the variance of X̄ is smaller than the variance
Figure 9.1: Sampling distributions of different estimators of θ.
of X̃. Thus, both estimates x̄ and x̃ will, on average, equal the population mean
μ, but x̄ is likely to be closer to μ for a given sample, and thus X̄ is more efficient
than X̃.
Interval Estimation
Even the most efficient unbiased estimator is unlikely to estimate the population
parameter exactly. It is true that estimation accuracy increases with large samples,
but there is still no reason we should expect a point estimate from a given sample
to be exactly equal to the population parameter it is supposed to estimate. There
are many situations in which it is preferable to determine an interval within which
we would expect to find the value of the parameter. Such an interval is called an
interval estimate.
An interval estimate of a population parameter θ is an interval of the form
θ̂L < θ < θ̂U , where θ̂L and θ̂U depend on the value of the statistic Θ̂ for a
particular sample and also on the sampling distribution of Θ̂. For example, a
random sample of SAT verbal scores for students in the entering freshman class
might produce an interval from 530 to 550, within which we expect to find the
true average of all SAT verbal scores for the freshman class. The values of the
endpoints, 530 and 550, will depend on the computed sample mean x̄ and the
sampling distribution of X̄. As the sample size increases, we know that σ²_{X̄} = σ²/n
decreases, and consequently our estimate is likely to be closer to the parameter μ,
resulting in a shorter interval. Thus, the interval estimate indicates, by its length,
the accuracy of the point estimate. An engineer will gain some insight into the
population proportion defective by taking a sample and computing the sample
proportion defective. But an interval estimate might be more informative.
Interpretation of Interval Estimates
Since different samples will generally yield different values of Θ̂ and, therefore,
different values for θ̂L and θ̂U , these endpoints of the interval are values of corre-
sponding random variables Θ̂L and Θ̂U . From the sampling distribution of Θ̂ we
shall be able to determine Θ̂L and Θ̂U such that P(Θ̂L < θ < Θ̂U ) is equal to any
positive fractional value we care to specify. If, for instance, we find Θ̂L and Θ̂U
such that
P(Θ̂L < θ < Θ̂U ) = 1 − α,
for 0 < α < 1, then we have a probability of 1 − α of selecting a random sample that
will produce an interval containing θ. The interval θ̂L < θ < θ̂U , computed from
the selected sample, is called a 100(1 − α)% confidence interval, the fraction
1 − α is called the confidence coefficient or the degree of confidence, and
the endpoints, θ̂L and θ̂U , are called the lower and upper confidence limits.
Thus, when α = 0.05, we have a 95% confidence interval, and when α = 0.01, we
obtain a wider 99% confidence interval. The wider the confidence interval is, the
more confident we can be that the interval contains the unknown parameter. Of
course, it is better to be 95% confident that the average life of a certain television
transistor is between 6 and 7 years than to be 99% confident that it is between 3
and 10 years. Ideally, we prefer a short interval with a high degree of confidence.
Sometimes, restrictions on the size of our sample prevent us from achieving short
intervals without sacrificing some degree of confidence.
In the sections that follow, we pursue the notions of point and interval esti-
mation, with each section presenting a different special case. The reader should
notice that while point and interval estimation represent different approaches to
gaining information regarding a parameter, they are related in the sense that con-
fidence interval estimators are based on point estimators. In the following section,
for example, we will see that X̄ is a very reasonable point estimator of μ. As a
result, the important confidence interval estimator of μ depends on knowledge of
the sampling distribution of X̄.
We begin the following section with the simplest case of a confidence interval.
The scenario is simple and yet unrealistic. We are interested in estimating a popu-
lation mean μ and yet σ is known. Clearly, if μ is unknown, it is quite unlikely that
σ is known. Any historical results that produced enough information to allow the
assumption that σ is known would likely have produced similar information about
μ. Despite this argument, we begin with this case because the concepts and indeed
the resulting mechanics associated with confidence interval estimation remain the
same for the more realistic situations presented later in Section 9.4 and beyond.
9.4 Single Sample: Estimating the Mean
The sampling distribution of X̄ is centered at μ, and in most applications the
variance is smaller than that of any other estimators of μ. Thus, the sample
mean x̄ will be used as a point estimate for the population mean μ. Recall that
σ²_{X̄} = σ²/n, so a large sample will yield a value of X̄ that comes from a sampling
distribution with a small variance. Hence, x̄ is likely to be a very accurate estimate
of μ when n is large.
Let us now consider the interval estimate of μ. If our sample is selected from
a normal population or, failing this, if n is sufficiently large, we can establish a
confidence interval for μ by considering the sampling distribution of X̄.
According to the Central Limit Theorem, we can expect the sampling distri-
bution of X̄ to be approximately normally distributed with mean μX̄ = μ and
standard deviation σX̄ = σ/√n. Writing zα/2 for the z-value above which we find
an area of α/2 under the normal curve, we can see from Figure 9.2 that
P(−zα/2 < Z < zα/2) = 1 − α,
where
Z = (X̄ − μ)/(σ/√n).
Hence,
P(−zα/2 < (X̄ − μ)/(σ/√n) < zα/2) = 1 − α.
Figure 9.2: P(−zα/2 < Z < zα/2) = 1 − α.
Multiplying each term in the inequality by σ/√n and then subtracting X̄ from each
term and multiplying by −1 (reversing the sense of the inequalities), we obtain
P(X̄ − zα/2 σ/√n < μ < X̄ + zα/2 σ/√n) = 1 − α.
A random sample of size n is selected from a population whose variance σ² is known,
and the mean x̄ is computed to give the 100(1 − α)% confidence interval below. It
is important to emphasize that we have invoked the Central Limit Theorem above.
As a result, it is important to note the conditions for applications that follow.
Confidence Interval on μ, σ² Known
If x̄ is the mean of a random sample of size n from a population with known
variance σ², a 100(1 − α)% confidence interval for μ is given by

x̄ − zα/2 σ/√n < μ < x̄ + zα/2 σ/√n,

where zα/2 is the z-value leaving an area of α/2 to the right.
For small samples selected from nonnormal populations, we cannot expect our
degree of confidence to be accurate. However, for samples of size n ≥ 30, with
the shape of the distribution not too skewed, sampling theory guarantees good
results.
Clearly, the values of the random variables Θ̂L and Θ̂U , defined in Section 9.3,
are the confidence limits

θ̂L = x̄ − zα/2 σ/√n and θ̂U = x̄ + zα/2 σ/√n.
Different samples will yield different values of x̄ and therefore produce different
interval estimates of the parameter μ, as shown in Figure 9.3. The dot at the
center of each interval indicates the position of the point estimate x̄ for that random
sample. Note that all of these intervals are of the same width, since their widths
depend only on the choice of zα/2 once x̄ is determined. The larger the value we
choose for zα/2, the wider we make all the intervals and the more confident we
can be that the particular sample selected will produce an interval that contains
the unknown parameter μ. In general, for a selection of zα/2, 100(1 − α)% of the
intervals will cover μ.
Figure 9.3: Interval estimates of μ for different samples.
Example 9.2: The average zinc concentration recovered from a sample of measurements taken
in 36 different locations in a river is found to be 2.6 grams per milliliter. Find
the 95% and 99% confidence intervals for the mean zinc concentration in the river.
Assume that the population standard deviation is 0.3 gram per milliliter.
Solution: The point estimate of μ is x̄ = 2.6. The z-value leaving an area of 0.025 to the
right, and therefore an area of 0.975 to the left, is z0.025 = 1.96 (Table A.3). Hence,
the 95% confidence interval is

2.6 − (1.96)(0.3/√36) < μ < 2.6 + (1.96)(0.3/√36),

which reduces to 2.50 < μ < 2.70. To find a 99% confidence interval, we find the
z-value leaving an area of 0.005 to the right and 0.995 to the left. From Table A.3
again, z0.005 = 2.575, and the 99% confidence interval is

2.6 − (2.575)(0.3/√36) < μ < 2.6 + (2.575)(0.3/√36),

or simply

2.47 < μ < 2.73.
We now see that a longer interval is required to estimate μ with a higher degree of
confidence.
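The mechanics of this interval are simple enough to sketch in code. The helper below is our own illustration, not part of the text; the function name is an assumption, and the z-values are hard-coded from Table A.3 as in Example 9.2:

```python
from math import sqrt

def z_confidence_interval(xbar, sigma, n, z_half_alpha):
    # 100(1 - alpha)% CI for mu with sigma known: xbar +/- z_{alpha/2} * sigma / sqrt(n)
    margin = z_half_alpha * sigma / sqrt(n)
    return xbar - margin, xbar + margin

# Example 9.2: xbar = 2.6, sigma = 0.3, n = 36
print(z_confidence_interval(2.6, 0.3, 36, 1.96))   # 95% CI, about (2.50, 2.70)
print(z_confidence_interval(2.6, 0.3, 36, 2.575))  # 99% CI, about (2.47, 2.73)
```

Note how the 99% interval comes out wider than the 95% interval: greater confidence is bought at the price of width.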
The 100(1−α)% confidence interval provides an estimate of the accuracy of our
point estimate. If μ is actually the center value of the interval, then x̄ estimates
μ without error. Most of the time, however, x̄ will not be exactly equal to μ and
the point estimate will be in error. The size of this error will be the absolute value
of the difference between μ and x̄, and we can be 100(1 − α)% confident that this
difference will not exceed zα/2 σ/√n. We can readily see this if we draw a diagram of
a hypothetical confidence interval, as in Figure 9.4.
Figure 9.4: Error in estimating μ by x̄.
Theorem 9.1: If x̄ is used as an estimate of μ, we can be 100(1 − α)% confident that the error
will not exceed zα/2 σ/√n.
In Example 9.2, we are 95% confident that the sample mean x̄ = 2.6 differs
from the true mean μ by an amount less than (1.96)(0.3)/√36 = 0.1 and 99%
confident that the difference is less than (2.575)(0.3)/√36 = 0.13.
Frequently, we wish to know how large a sample is necessary to ensure that
the error in estimating μ will be less than a specified amount e. By Theorem 9.1,
we must choose n such that zα/2 σ/√n = e. Solving this equation gives the following
formula for n.
Theorem 9.2: If x̄ is used as an estimate of μ, we can be 100(1 − α)% confident that the error
will not exceed a specified amount e when the sample size is

n = (zα/2 σ/e)².
When solving for the sample size, n, we round all fractional values up to the
next whole number. By adhering to this principle, we can be sure that our degree
of confidence never falls below 100(1 − α)%.
Strictly speaking, the formula in Theorem 9.2 is applicable only if we know
the variance of the population from which we select our sample. Lacking this
information, we could take a preliminary sample of size n ≥ 30 to provide an
estimate of σ. Then, using s as an approximation for σ in Theorem 9.2, we could
determine approximately how many observations are needed to provide the desired
degree of accuracy.
Example 9.3: How large a sample is required if we want to be 95% confident that our estimate
of μ in Example 9.2 is off by less than 0.05?
Solution: The population standard deviation is σ = 0.3. Then, by Theorem 9.2,

n = [(1.96)(0.3)/0.05]² = 138.3.

Therefore, we can be 95% confident that a random sample of size 139 will provide
an estimate x̄ differing from μ by an amount less than 0.05.
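Theorem 9.2, together with the round-up rule, translates directly into code. The sketch below is our own illustration (the function name is an assumption, not from the text):

```python
from math import ceil

def required_sample_size(z_half_alpha, sigma, e):
    # Theorem 9.2: n = (z_{alpha/2} * sigma / e)^2, rounded up so that
    # the degree of confidence never falls below 100(1 - alpha)%
    return ceil((z_half_alpha * sigma / e) ** 2)

# Example 9.3: 95% confidence, sigma = 0.3, error at most 0.05
print(required_sample_size(1.96, 0.3, 0.05))  # 139
```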
One-Sided Confidence Bounds
The confidence intervals and resulting confidence bounds discussed thus far are
two-sided (i.e., both upper and lower bounds are given). However, there are many
applications in which only one bound is sought. For example, if the measurement
of interest is tensile strength, the engineer receives better information from a lower
bound only. This bound communicates the worst-case scenario. On the other
hand, if the measurement is something for which a relatively large value of μ is not
profitable or desirable, then an upper confidence bound is of interest. An example
would be a case in which inferences need to be made concerning the mean mercury
composition in a river. An upper bound is very informative in this case.
One-sided confidence bounds are developed in the same fashion as two-sided
intervals. However, the source is a one-sided probability statement that makes use
of the Central Limit Theorem:
P((X̄ − μ)/(σ/√n) < zα) = 1 − α.
One can then manipulate the probability statement much as before and obtain

P(μ > X̄ − zα σ/√n) = 1 − α.

Similar manipulation of P((X̄ − μ)/(σ/√n) > −zα) = 1 − α gives

P(μ < X̄ + zα σ/√n) = 1 − α.
As a result, the upper and lower one-sided bounds follow.
One-Sided Confidence Bounds on μ, σ² Known
If X̄ is the mean of a random sample of size n from a population with variance
σ², the one-sided 100(1 − α)% confidence bounds for μ are given by

upper one-sided bound: x̄ + zα σ/√n;
lower one-sided bound: x̄ − zα σ/√n.
Example 9.4: In a psychological testing experiment, 25 subjects are selected randomly and their
reaction time, in seconds, to a particular stimulus is measured. Past experience
suggests that the variance in reaction times to these types of stimuli is 4 sec² and
that the distribution of reaction times is approximately normal. The average time
for the subjects is 6.2 seconds. Give an upper 95% bound for the mean reaction
time.
Solution: The upper 95% bound is given by

x̄ + zα σ/√n = 6.2 + (1.645)√(4/25) = 6.2 + 0.658 = 6.858 seconds.

Hence, we are 95% confident that the mean reaction time is less than 6.858
seconds.
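A one-sided bound is, if anything, easier to compute than a two-sided interval, since only a single tail value zα is involved. A minimal sketch with our own naming (z0.05 = 1.645 is taken from Table A.3):

```python
from math import sqrt

def one_sided_bound(xbar, sigma, n, z_alpha, upper=True):
    # Upper bound: xbar + z_alpha * sigma / sqrt(n); lower: xbar - z_alpha * sigma / sqrt(n)
    shift = z_alpha * sigma / sqrt(n)
    return xbar + shift if upper else xbar - shift

# Example 9.4: xbar = 6.2 s, sigma^2 = 4 (so sigma = 2), n = 25
print(round(one_sided_bound(6.2, 2.0, 25, 1.645), 3))  # upper 95% bound: 6.858
```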
The Case of σ Unknown
Frequently, we must attempt to estimate the mean of a population when the vari-
ance is unknown. The reader should recall learning in Chapter 8 that if we have a
random sample from a normal distribution, then the random variable

T = (X̄ − μ)/(S/√n)

has a Student t-distribution with n − 1 degrees of freedom. Here S is the sample
standard deviation. In this situation, with σ unknown, T can be used to construct
a confidence interval on μ. The procedure is the same as that with σ known except
that σ is replaced by S and the standard normal distribution is replaced by the
t-distribution. Referring to Figure 9.5, we can assert that
P(−tα/2 < T < tα/2) = 1 − α,
where tα/2 is the t-value with n−1 degrees of freedom, above which we find an area
of α/2. Because of symmetry, an equal area of α/2 will fall to the left of −tα/2.
Substituting for T, we write

P(−tα/2 < (X̄ − μ)/(S/√n) < tα/2) = 1 − α.
Multiplying each term in the inequality by S/√n, and then subtracting X̄ from
each term and multiplying by −1, we obtain

P(X̄ − tα/2 S/√n < μ < X̄ + tα/2 S/√n) = 1 − α.
For a particular random sample of size n, the mean x̄ and standard deviation s are
computed and the following 100(1 − α)% confidence interval for μ is obtained.
Figure 9.5: P(−tα/2 < T < tα/2) = 1 − α.
Confidence Interval on μ, σ² Unknown
If x̄ and s are the mean and standard deviation of a random sample from a
normal population with unknown variance σ², a 100(1 − α)% confidence interval
for μ is

x̄ − tα/2 s/√n < μ < x̄ + tα/2 s/√n,

where tα/2 is the t-value with v = n − 1 degrees of freedom, leaving an area of
α/2 to the right.
We have made a distinction between the cases of σ known and σ unknown in
computing confidence interval estimates. We should emphasize that for σ known
we exploited the Central Limit Theorem, whereas for σ unknown we made use
of the sampling distribution of the random variable T. However, the use of the t-
distribution is based on the premise that the sampling is from a normal distribution.
As long as the distribution is approximately bell shaped, confidence intervals can
be computed when σ² is unknown by using the t-distribution, and we may expect
very good results.
Computed one-sided confidence bounds for μ with σ unknown are as the reader
would expect, namely

x̄ + tα s/√n and x̄ − tα s/√n.

They are the upper and lower 100(1 − α)% bounds, respectively. Here tα is the
t-value having an area of α to the right.
Example 9.5: The contents of seven similar containers of sulfuric acid are 9.8, 10.2, 10.4, 9.8,
10.0, 10.2, and 9.6 liters. Find a 95% confidence interval for the mean contents of
all such containers, assuming an approximately normal distribution.
Solution: The sample mean and standard deviation for the given data are
x̄ = 10.0 and s = 0.283.
Using Table A.4, we find t0.025 = 2.447 for v = 6 degrees of freedom. Hence, the
95% confidence interval for μ is

10.0 − (2.447)(0.283/√7) < μ < 10.0 + (2.447)(0.283/√7),

which reduces to 9.74 < μ < 10.26.
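For the σ-unknown case, the only changes in the computation are s in place of σ and a t-value in place of a z-value. A sketch using the standard library (the helper name is ours; t0.025 = 2.447 for v = 6 is taken from Table A.4):

```python
from math import sqrt
from statistics import mean, stdev  # stdev computes the sample standard deviation

def t_confidence_interval(data, t_half_alpha):
    # 100(1 - alpha)% CI for mu, sigma unknown: xbar +/- t_{alpha/2} * s / sqrt(n)
    xbar, s, n = mean(data), stdev(data), len(data)
    margin = t_half_alpha * s / sqrt(n)
    return xbar - margin, xbar + margin

# Example 9.5: contents of seven containers of sulfuric acid
acid = [9.8, 10.2, 10.4, 9.8, 10.0, 10.2, 9.6]
lo, hi = t_confidence_interval(acid, 2.447)
print(round(lo, 2), round(hi, 2))  # 9.74 10.26
```

In practice one would look up (or compute) the t-value for the desired α and v = n − 1 rather than hard-coding it.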
Concept of a Large-Sample Confidence Interval
Often statisticians recommend that even when normality cannot be assumed, if σ is
unknown and n ≥ 30, s can replace σ and the confidence interval

x̄ ± zα/2 s/√n

may be used. This is often referred to as a large-sample confidence interval. The
justification lies only in the presumption that with a sample as large as 30 and
the population distribution not too skewed, s will be very close to the true σ and
thus the Central Limit Theorem prevails. It should be emphasized that this is only
an approximation and the quality of the result becomes better as the sample size
grows larger.
Example 9.6: Scholastic Aptitude Test (SAT) mathematics scores of a random sample of 500 high
school seniors in the state of Texas are collected, and the sample mean and standard
deviation are found to be 501 and 112, respectively. Find a 99% confidence interval
on the mean SAT mathematics score for seniors in the state of Texas.
Solution: Since the sample size is large, it is reasonable to use the normal approximation.
Using Table A.3, we find z0.005 = 2.575. Hence, a 99% confidence interval for μ is

501 ± (2.575)(112/√500) = 501 ± 12.9,

which yields 488.1 < μ < 513.9.
9.5 Standard Error of a Point Estimate
We have made a rather sharp distinction between the goal of a point estimate
and that of a confidence interval estimate. The former supplies a single number
extracted from a set of experimental data, and the latter provides an interval that
is reasonable for the parameter, given the experimental data; that is, 100(1 − α)%
of such computed intervals “cover” the parameter.
These two approaches to estimation are related to each other. The common
thread is the sampling distribution of the point estimator. Consider, for example,
the estimator X̄ of μ with σ known. We indicated earlier that a measure of the
quality of an unbiased estimator is its variance. The variance of X̄ is

σ²X̄ = σ²/n.
Thus, the standard deviation of X̄, or standard error of X̄, is σ/√n. Simply put,
the standard error of an estimator is its standard deviation. For X̄, the computed
confidence limit

x̄ ± zα/2 σ/√n is written as x̄ ± zα/2 s.e.(x̄),
where “s.e.” is the “standard error.” The important point is that the width of the
confidence interval on μ is dependent on the quality of the point estimator through
its standard error. In the case where σ is unknown and sampling is from a normal
distribution, s replaces σ and the estimated standard error s/√n is involved. Thus,
the confidence limits on μ are
Confidence Limits on μ, σ² Unknown

x̄ ± tα/2 s/√n = x̄ ± tα/2 s.e.(x̄)
Again, the confidence interval is no better (in terms of width) than the quality of
the point estimate, in this case through its estimated standard error. Computer
packages often refer to estimated standard errors simply as “standard errors.”
As we move to more complex confidence intervals, there is a prevailing notion
that widths of confidence intervals become shorter as the quality of the correspond-
ing point estimate becomes better, although it is not always quite as simple as we
have illustrated here. It can be argued that a confidence interval is merely an
augmentation of the point estimate to take into account the precision of the point
estimate.
9.6 Prediction Intervals
The point and interval estimations of the mean in Sections 9.4 and 9.5 provide
good information about the unknown parameter μ of a normal distribution or a
nonnormal distribution from which a large sample is drawn. Sometimes, other
than the population mean, the experimenter may also be interested in predicting
the possible value of a future observation. For instance, in quality control, the
experimenter may need to use the observed data to predict a new observation. A
process that produces a metal part may be evaluated on the basis of whether the
part meets specifications on tensile strength. On certain occasions, a customer may
be interested in purchasing a single part. In this case, a confidence interval on the
mean tensile strength does not capture the required information. The customer
requires a statement regarding the uncertainty of a single observation. This type
of requirement is nicely fulfilled by the construction of a prediction interval.
It is quite simple to obtain a prediction interval for the situations we have
considered so far. Assume that the random sample comes from a normal population
with unknown mean μ and known variance σ². A natural point estimator of a
new observation is X̄. It is known, from Section 8.4, that the variance of X̄ is
σ²/n. However, to predict a new observation, not only do we need to account
for the variation due to estimating the mean, but also we should account for the
variation of a future observation. From the assumption, we know that the
variance of the random error in a new observation is σ². The development of a
prediction interval is best illustrated by beginning with a normal random variable
x0 − x̄, where x0 is the new observation and x̄ comes from the sample. Since x0
and x̄ are independent, we know that

z = (x0 − x̄)/√(σ² + σ²/n) = (x0 − x̄)/(σ√(1 + 1/n))

is n(z; 0, 1). As a result, if we use the probability statement

P(−zα/2 < Z < zα/2) = 1 − α

with the z-statistic above and place x0 in the center of the probability statement,
we have the following event occurring with probability 1 − α:

x̄ − zα/2 σ√(1 + 1/n) < x0 < x̄ + zα/2 σ√(1 + 1/n).
As a result, computation of the prediction interval is formalized as follows.
Prediction Interval of a Future Observation, σ² Known
For a normal distribution of measurements with unknown mean μ and known
variance σ², a 100(1 − α)% prediction interval of a future observation x0 is

x̄ − zα/2 σ√(1 + 1/n) < x0 < x̄ + zα/2 σ√(1 + 1/n),

where zα/2 is the z-value leaving an area of α/2 to the right.
Example 9.7: Due to a decrease in interest rates, the First Citizens Bank received many
mortgage applications. A recent sample of 50 mortgage loans resulted in an average
loan amount of $257,300. Assume a population standard deviation of $25,000. For
the next customer who fills out a mortgage application, find a 95% prediction
interval for the loan amount.
Solution: The point prediction of the next customer’s loan amount is x̄ = $257,300. The
z-value here is z0.025 = 1.96. Hence, a 95% prediction interval for the future loan
amount is

257,300 − (1.96)(25,000)√(1 + 1/50) < x0 < 257,300 + (1.96)(25,000)√(1 + 1/50),

which gives the interval ($207,812.43, $306,787.57).
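The extra √(1 + 1/n) factor is the only change from the confidence interval computed earlier, and it stands out clearly in code. A sketch with our own naming, using the numbers of Example 9.7:

```python
from math import sqrt

def z_prediction_interval(xbar, sigma, n, z_half_alpha):
    # Prediction interval for one new observation, sigma known:
    # xbar +/- z_{alpha/2} * sigma * sqrt(1 + 1/n)
    half_width = z_half_alpha * sigma * sqrt(1 + 1 / n)
    return xbar - half_width, xbar + half_width

# Example 9.7: xbar = $257,300, sigma = $25,000, n = 50
lo, hi = z_prediction_interval(257300, 25000, 50, 1.96)
print(round(lo, 2), round(hi, 2))  # 207812.43 306787.57
```

Note that the half-width, about $49,488, is far larger than the roughly $6,930 half-width of the corresponding 95% confidence interval on the mean, because a single observation carries its own variance σ² on top of the variance σ²/n of X̄.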
The prediction interval provides a good estimate of the location of a future
observation, which is quite different from the estimate of the sample mean value.
It should be noted that the variation of this prediction is the sum of the variation
due to an estimation of the mean and the variation of a single observation. However,
as in the past, we first consider the case with known variance. It is also important
to deal with the prediction interval of a future observation in the situation where
the variance is unknown. Indeed a Student t-distribution may be used in this case,
as described in the following result. The normal distribution is merely replaced by
the t-distribution.
Prediction Interval of a Future Observation, σ² Unknown
For a normal distribution of measurements with unknown mean μ and unknown
variance σ², a 100(1 − α)% prediction interval of a future observation x0 is

x̄ − tα/2 s√(1 + 1/n) < x0 < x̄ + tα/2 s√(1 + 1/n),

where tα/2 is the t-value with v = n − 1 degrees of freedom, leaving an area of
α/2 to the right.
One-sided prediction intervals can also be constructed. Upper prediction bounds
apply in cases where focus must be placed on future large observations. Concern
over future small observations calls for the use of lower prediction bounds. The
upper bound is given by

x̄ + tα s√(1 + 1/n)

and the lower bound by

x̄ − tα s√(1 + 1/n).
Example 9.8: A meat inspector has randomly selected 30 packs of 95% lean beef. The sample
resulted in a mean of 96.2% with a sample standard deviation of 0.8%. Find a 99%
prediction interval for the leanness of a new pack. Assume normality.
Solution: For v = 29 degrees of freedom, t0.005 = 2.756. Hence, a 99% prediction interval for
a new observation x0 is

96.2 − (2.756)(0.8)√(1 + 1/30) < x0 < 96.2 + (2.756)(0.8)√(1 + 1/30),

which reduces to (93.96, 98.44).
Use of Prediction Limits for Outlier Detection
To this point in the text very little attention has been paid to the concept of
outliers, or aberrant observations. The majority of scientific investigators are
keenly sensitive to the existence of outlying observations or so-called faulty or
“bad data.” We deal with the concept of outlier detection extensively in Chapter
12. However, it is certainly of interest here since there is an important relationship
between outlier detection and prediction intervals.
It is convenient for our purposes to view an outlying observation as one that
comes from a population with a mean that is different from the mean that governs
the rest of the sample of size n being studied. The prediction interval produces a
bound that “covers” a future single observation with probability 1 − α if it comes
from the population from which the sample was drawn. As a result, a methodol-
ogy for outlier detection involves the rule that an observation is an outlier if
it falls outside the prediction interval computed without including the
questionable observation in the sample. As a result, for the prediction inter-
val of Example 9.8, if a new pack of beef is measured and its leanness is outside
the interval (93.96, 98.44), that observation can be viewed as an outlier.
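The outlier rule above amounts to a membership test against the prediction interval. A sketch using Example 9.8's numbers (the helper names are our own, not from the text):

```python
from math import sqrt

def t_prediction_interval(xbar, s, n, t_half_alpha):
    # Prediction interval for one new observation, sigma unknown:
    # xbar +/- t_{alpha/2} * s * sqrt(1 + 1/n)
    half_width = t_half_alpha * s * sqrt(1 + 1 / n)
    return xbar - half_width, xbar + half_width

# Example 9.8: xbar = 96.2, s = 0.8, n = 30, t_0.005 = 2.756 for v = 29
lo, hi = t_prediction_interval(96.2, 0.8, 30, 2.756)
print(round(lo, 2), round(hi, 2))  # 93.96 98.44

def is_outlier(x0):
    # Flag a new observation if it falls outside the 99% prediction interval.
    return not (lo < x0 < hi)
```

For instance, a new pack measuring 99.0% lean would be flagged as an outlier, while one measuring 96.0% would not.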
9.7 Tolerance Limits
As discussed in Section 9.6, the scientist or engineer may be less interested in esti-
mating parameters than in gaining a notion about where an individual observation
or measurement might fall. Such situations call for the use of prediction intervals.
However, there is yet a third type of interval that is of interest in many applica-
tions. Once again, suppose that interest centers around the manufacturing of a
component part and specifications exist on a dimension of that part. In addition,
there is little concern about the mean of the dimension. But unlike in the scenario
in Section 9.6, one may be less interested in a single observation and more inter-
ested in where the majority of the population falls. If process specifications are
important, the manager of the process is concerned about long-range performance,
not the next observation. One must attempt to determine bounds that, in some
probabilistic sense, “cover” values in the population (i.e., the measured values of
the dimension).
One method of establishing the desired bounds is to determine a confidence
interval on a fixed proportion of the measurements. This is best motivated by
visualizing a situation in which we are doing random sampling from a normal
distribution with known mean μ and variance σ². Clearly, a bound that covers the
middle 95% of the population of observations is

μ ± 1.96σ.
This is called a tolerance interval, and indeed its coverage of 95% of measured
observations is exact. However, in practice, μ and σ are seldom known; thus, the
user must apply
x̄ ± ks.
Now, of course, the interval is a random variable, and hence the coverage of a
proportion of the population by the interval is not exact. As a result, a 100(1−γ)%
confidence interval must be used since x̄ ± ks cannot be expected to cover any
specified proportion all the time. As a result, we have the following definition.
Tolerance Limits For a normal distribution of measurements with unknown mean μ and unknown
standard deviation σ, tolerance limits are given by x̄ ± ks, where k is de-
termined such that one can assert with 100(1 − γ)% confidence that the given
limits contain at least the proportion 1 − α of the measurements.
Table A.7 gives values of k for 1 − α = 0.90, 0.95, 0.99; γ = 0.05, 0.01; and
selected values of n from 2 to 300.
Example 9.9: Consider Example 9.8. With the information given, find a tolerance interval that
gives two-sided 95% bounds on 90% of the distribution of packages of 95% lean
beef. Assume the data came from an approximately normal distribution.
Solution: Recall from Example 9.8 that n = 30, the sample mean is 96.2%, and the sample
standard deviation is 0.8%. From Table A.7, k = 2.14. Using
x̄ ± ks = 96.2 ± (2.14)(0.8),

we find that the lower and upper bounds are 94.5 and 97.9.
We are 95% confident that the above range covers the central 90% of the dis-
tribution of 95% lean beef packages.
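Once k has been read from Table A.7, the tolerance limits themselves are a one-line computation. A sketch with our own naming (k = 2.14 is the tabled value used in Example 9.9):

```python
def tolerance_limits(xbar, s, k):
    # xbar +/- k*s, with k chosen from Table A.7 for the desired
    # confidence 1 - gamma, coverage 1 - alpha, and sample size n
    return xbar - k * s, xbar + k * s

# Example 9.9: n = 30, xbar = 96.2, s = 0.8, k = 2.14
lo, hi = tolerance_limits(96.2, 0.8, 2.14)
print(round(lo, 1), round(hi, 1))  # 94.5 97.9
```

The statistical content lies entirely in k, which accounts for the uncertainty in both x̄ and s; the arithmetic is trivial.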
Distinction among Confidence Intervals, Prediction Intervals,
and Tolerance Intervals
It is important to reemphasize the difference among the three types of intervals dis-
cussed and illustrated in the preceding sections. The computations are straightfor-
ward, but interpretation can be confusing. In real-life applications, these intervals
are not interchangeable because their interpretations are quite distinct.
In the case of confidence intervals, one is attentive only to the population
mean. For example, Exercise 9.13 on page 283 deals with an engineering process
that produces shearing pins. A specification will be set on Rockwell hardness,
below which a customer will not accept any pins. Here, a population parameter
must take a backseat. It is important that the engineer know where the majority
of the values of Rockwell hardness are going to be. Thus, tolerance limits should be
used. Surely, when tolerance limits on any process output are tighter than process
specifications, that is good news for the process manager.
It is true that the tolerance limit interpretation is somewhat related to the
confidence interval. The 100(1−α)% tolerance interval on, say, the proportion 0.95
can be viewed as a confidence interval on the middle 95% of the corresponding
normal distribution. One-sided tolerance limits are also relevant. In the case of
the Rockwell hardness problem, it is desirable to have a lower bound of the form
x̄ − ks such that there is 99% confidence that at least 99% of Rockwell hardness
values will exceed the computed value.
Prediction intervals are applicable when it is important to determine a bound
on a single value. The mean is not the issue here, nor is the location of the
majority of the population. Rather, the location of a single new observation is
required.
Case Study 9.1: Machine Quality: A machine produces metal pieces that are cylindrical in shape.
A sample of these pieces is taken and the diameters are found to be 1.01, 0.97,
1.03, 1.04, 0.99, 0.98, 0.99, 1.01, and 1.03 centimeters. Use these data to calculate
three interval types and draw interpretations that illustrate the distinction between
them in the context of the system. For all computations, assume an approximately
normal distribution. The sample mean and standard deviation for the given data
are x̄ = 1.0056 and s = 0.0246.
(a) Find a 99% confidence interval on the mean diameter.
(b) Compute a 99% prediction interval on a measured diameter of a single metal
piece taken from the machine.
(c) Find the 99% tolerance limits that will contain 95% of the metal pieces pro-
duced by this machine.
Solution: (a) The 99% confidence interval for the mean diameter is given by

x̄ ± t0.005 s/√n = 1.0056 ± (3.355)(0.0246/3) = 1.0056 ± 0.0275.
Thus, the 99% confidence bounds are 0.9781 and 1.0331.
(b) The 99% prediction interval for a future observation is given by

x̄ ± t0.005 s√(1 + 1/n) = 1.0056 ± (3.355)(0.0246)√(1 + 1/9),

with the bounds being 0.9186 and 1.0926.
(c) From Table A.7, for n = 9, 1 − γ = 0.99, and 1 − α = 0.95, we find k = 4.550
for two-sided limits. Hence, the 99% tolerance limits are given by

x̄ ± ks = 1.0056 ± (4.550)(0.0246),
with the bounds being 0.8937 and 1.1175. We are 99% confident that the
tolerance interval from 0.8937 to 1.1175 will contain the central 95% of the
distribution of diameters produced.
This case study illustrates that the three types of limits can give appreciably dif-
ferent results even though they are all 99% bounds. In the case of the confidence
interval on the mean, 99% of such intervals cover the population mean diameter.
Thus, we say that we are 99% confident that the mean diameter produced by the
process is between 0.9781 and 1.0331 centimeters. Emphasis is placed on the mean,
with less concern about a single reading or the general nature of the distribution
of diameters in the population. In the case of the prediction limits, the bounds
0.9186 and 1.0926 are based on the distribution of a single “new” metal piece
taken from the process, and again 99% of such limits will cover the diameter of
a new measured piece. On the other hand, the tolerance limits, as suggested in
the previous section, give the engineer a sense of where the “majority,” say the
central 95%, of the diameters of measured pieces in the population reside. The
99% tolerance limits, 0.8937 and 1.1175, are numerically quite different from the
other two bounds. If these bounds appear alarmingly wide to the engineer, it re-
flects negatively on process quality. On the other hand, if the bounds represent a
desirable result, the engineer may conclude that a majority (95% here) of the
diameters are in a desirable range. Again, a confidence interval interpretation may
be used: namely, 99% of such calculated bounds will cover the middle 95% of the
population of diameters.
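The three computations of Case Study 9.1 can be placed side by side in a short script, which makes the widening from confidence to prediction to tolerance interval easy to see. The constants below are taken from the case study (t0.005 = 3.355 for v = 8 from Table A.4; k = 4.550 from Table A.7); the variable names are our own:

```python
from math import sqrt

# Case Study 9.1: n = 9 cylindrical metal pieces
xbar, s, n = 1.0056, 0.0246, 9
t, k = 3.355, 4.550

ci = (xbar - t * s / sqrt(n), xbar + t * s / sqrt(n))  # mean diameter
pi = (xbar - t * s * sqrt(1 + 1 / n),
      xbar + t * s * sqrt(1 + 1 / n))                  # one new piece
ti = (xbar - k * s, xbar + k * s)                      # central 95% of pieces

for name, (lo, hi) in [("confidence", ci), ("prediction", pi), ("tolerance", ti)]:
    print(f"99% {name} interval: ({lo:.4f}, {hi:.4f})")
```

Running it reproduces the three 99% bounds of the case study, each interval wider than the last.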
Exercises
9.1 A UCLA researcher claims that the life span of
mice can be extended by as much as 25% when the
calories in their diet are reduced by approximately 40%
from the time they are weaned. The restricted diet
is enriched to normal levels by vitamins and protein.
Assuming that it is known from previous studies that
σ = 5.8 months, how many mice should be included
in our sample if we wish to be 99% confident that the
mean life span of the sample will be within 2 months
of the population mean for all mice subjected to this
reduced diet?
9.2 An electrical firm manufactures light bulbs that
have a length of life that is approximately normally
distributed with a standard deviation of 40 hours. If
a sample of 30 bulbs has an average life of 780 hours,
find a 96% confidence interval for the population mean
of all bulbs produced by this firm.
9.3 Many cardiac patients wear an implanted pace-
maker to control their heartbeat. A plastic connec-
tor module mounts on the top of the pacemaker. As-
suming a standard deviation of 0.0015 inch and an ap-
proximately normal distribution, find a 95% confidence
interval for the mean of the depths of all connector
modules made by a certain manufacturing company.
A random sample of 75 modules has an average depth
of 0.310 inch.
9.4 The heights of a random sample of 50 college stu-
dents showed a mean of 174.5 centimeters and a stan-
dard deviation of 6.9 centimeters.
(a) Construct a 98% confidence interval for the mean
height of all college students.
(b) What can we assert with 98% confidence about the
possible size of our error if we estimate the mean
height of all college students to be 174.5 centime-
ters?
9.5 A random sample of 100 automobile owners in the
state of Virginia shows that an automobile is driven on
average 23,500 kilometers per year with a standard de-
viation of 3900 kilometers. Assume the distribution of
measurements to be approximately normal.
(a) Construct a 99% confidence interval for the aver-
age number of kilometers an automobile is driven
annually in Virginia.
(b) What can we assert with 99% confidence about the
possible size of our error if we estimate the aver-
age number of kilometers driven by car owners in
Virginia to be 23,500 kilometers per year?
9.6 How large a sample is needed in Exercise 9.2 if we
wish to be 96% confident that our sample mean will be
within 10 hours of the true mean?
9.7 How large a sample is needed in Exercise 9.3 if we
wish to be 95% confident that our sample mean will be
within 0.0005 inch of the true mean?
9.8 An efficiency expert wishes to determine the av-
erage time that it takes to drill three holes in a certain
metal clamp. How large a sample will she need to be
95% confident that her sample mean will be within 15
seconds of the true mean? Assume that it is known
from previous studies that σ = 40 seconds.
9.9 Regular consumption of presweetened cereals con-
tributes to tooth decay, heart disease, and other degen-
erative diseases, according to studies conducted by Dr.
W. H. Bowen of the National Institute of Health and
Dr. J. Yudben, Professor of Nutrition and Dietetics at
the University of London. In a random sample con-
sisting of 20 similar single servings of Alpha-Bits, the
average sugar content was 11.3 grams with a standard
deviation of 2.45 grams. Assuming that the sugar con-
tents are normally distributed, construct a 95% con-
fidence interval for the mean sugar content for single
servings of Alpha-Bits.
9.10 A random sample of 12 graduates of a certain
secretarial school typed an average of 79.3 words per
minute with a standard deviation of 7.8 words per
minute. Assuming a normal distribution for the num-
ber of words typed per minute, find a 95% confidence
interval for the average number of words typed by all
graduates of this school.
9.11 A machine produces metal pieces that are cylin-
drical in shape. A sample of pieces is taken, and the
diameters are found to be 1.01, 0.97, 1.03, 1.04, 0.99,
0.98, 0.99, 1.01, and 1.03 centimeters. Find a 99% con-
fidence interval for the mean diameter of pieces from
this machine, assuming an approximately normal dis-
tribution.
9.12 A random sample of 10 chocolate energy bars of
a certain brand has, on average, 230 calories per bar,
with a standard deviation of 15 calories. Construct a
99% confidence interval for the true mean calorie con-
tent of this brand of energy bar. Assume that the dis-
tribution of the calorie content is approximately nor-
mal.
9.13 A random sample of 12 shearing pins is taken
in a study of the Rockwell hardness of the pin head.
Measurements on the Rockwell hardness are made for
each of the 12, yielding an average value of 48.50 with
a sample standard deviation of 1.5. Assuming the mea-
surements to be normally distributed, construct a 90%
confidence interval for the mean Rockwell hardness.
9.14 The following measurements were recorded for
the drying time, in hours, of a certain brand of latex
paint:
3.4 2.5 4.8 2.9 3.6
2.8 3.3 5.6 3.7 2.8
4.4 4.0 5.2 3.0 4.8
Assuming that the measurements represent a random
sample from a normal population, find a 95% predic-
tion interval for the drying time for the next trial of
the paint.
9.15 Referring to Exercise 9.5, construct a 99% pre-
diction interval for the kilometers traveled annually by
an automobile owner in Virginia.
9.16 Consider Exercise 9.10. Compute the 95% pre-
diction interval for the next observed number of words
per minute typed by a graduate of the secretarial
school.
9.17 Consider Exercise 9.9. Compute a 95% predic-
tion interval for the sugar content of the next single
serving of Alpha-Bits.
9.18 Referring to Exercise 9.13, construct a 95% tol-
erance interval containing 90% of the measurements.
284 Chapter 9 One- and Two-Sample Estimation Problems
9.19 A random sample of 25 tablets of buffered as-
pirin contains, on average, 325.05 mg of aspirin per
tablet, with a standard deviation of 0.5 mg. Find the
95% tolerance limits that will contain 90% of the tablet
contents for this brand of buffered aspirin. Assume
that the aspirin content is normally distributed.
9.20 Consider the situation of Exercise 9.11. Esti-
mation of the mean diameter, while important, is not
nearly as important as trying to pin down the loca-
tion of the majority of the distribution of diameters.
Find the 95% tolerance limits that contain 95% of the
diameters.
9.21 In a study conducted by the Department of
Zoology at Virginia Tech, fifteen samples of water were
collected from a certain station in the James River in
order to gain some insight regarding the amount of or-
thophosphorus in the river. The concentration of the
chemical is measured in milligrams per liter. Let us
suppose that the mean at the station is not as impor-
tant as the upper extreme of the distribution of the
concentration of the chemical at the station. Concern
centers around whether the concentration at the ex-
treme is too large. Readings for the fifteen water sam-
ples gave a sample mean of 3.84 milligrams per liter
and a sample standard deviation of 3.07 milligrams
per liter. Assume that the readings are a random sam-
ple from a normal distribution. Calculate a prediction
interval (upper 95% prediction limit) and a tolerance
limit (95% upper tolerance limit that exceeds 95% of
the population of values). Interpret both; that is, tell
what each communicates about the upper extreme of
the distribution of orthophosphorus at the sampling
station.
9.22 A type of thread is being studied for its ten-
sile strength properties. Fifty pieces were tested under
similar conditions, and the results showed an average
tensile strength of 78.3 kilograms and a standard devi-
ation of 5.6 kilograms. Assuming a normal distribution
of tensile strengths, give a lower 95% prediction limit
on a single observed tensile strength value. In addi-
tion, give a lower 95% tolerance limit that is exceeded
by 99% of the tensile strength values.
9.23 Refer to Exercise 9.22. Why are the quantities
requested in the exercise likely to be more important to
the manufacturer of the thread than, say, a confidence
interval on the mean tensile strength?
9.24 Refer to Exercise 9.22 again. Suppose that spec-
ifications by a buyer of the thread are that the tensile
strength of the material must be at least 62 kilograms.
The manufacturer is satisfied if at most 5% of the man-
ufactured pieces have tensile strength less than 62 kilo-
grams. Is there cause for concern? Use a one-sided 99%
tolerance limit that is exceeded by 95% of the tensile
strength values.
9.25 Consider the drying time measurements in Ex-
ercise 9.14. Suppose the 15 observations in the data
set are supplemented by a 16th value of 6.9 hours. In
the context of the original 15 observations, is the 16th
value an outlier? Show work.
9.26 Consider the data in Exercise 9.13. Suppose the
manufacturer of the shearing pins insists that the Rock-
well hardness of the product be less than or equal to
44.0 only 5% of the time. What is your reaction? Use
a tolerance limit calculation as the basis for your judg-
ment.
9.27 Consider the situation of Case Study 9.1 on page
281 with a larger sample of metal pieces. The di-
ameters are as follows: 1.01, 0.97, 1.03, 1.04, 0.99,
0.98, 1.01, 1.03, 0.99, 1.00, 1.00, 0.99, 0.98, 1.01, 1.02,
0.99 centimeters. Once again the normality assump-
tion may be made. Do the following and compare your
results to those of the case study. Discuss how they
are different and why.
(a) Compute a 99% confidence interval on the mean
diameter.
(b) Compute a 99% prediction interval on the next di-
ameter to be measured.
(c) Compute a 99% tolerance interval for coverage of
the central 95% of the distribution of diameters.
9.28 In Section 9.3, we emphasized the notion of
"most efficient estimator" by comparing the variance
of two unbiased estimators Θ̂1 and Θ̂2. However, this
does not take into account bias in case one or both
estimators are not unbiased. Consider the quantity

MSE = E[(Θ̂ − θ)²],

where MSE denotes mean squared error. The
MSE is often used to compare two estimators Θ̂1 and
Θ̂2 of θ when either or both is biased because (i) it
is intuitively reasonable and (ii) it accounts for bias.
Show that MSE can be written

MSE = E{[Θ̂ − E(Θ̂)]²} + [E(Θ̂) − θ]² = Var(Θ̂) + [Bias(Θ̂)]².
9.29 Let us define S′² = Σⁿᵢ₌₁ (Xi − X̄)²/n. Show that

E(S′²) = [(n − 1)/n]σ²,

and hence S′² is a biased estimator for σ².
9.30 Consider S′², the estimator of σ², from Exercise
9.29. Analysts often use S′² rather than dividing
Σⁿᵢ₌₁ (Xi − X̄)² by n − 1, the degrees of freedom in the
sample.
(a) What is the bias of S′²?
(b) Show that the bias of S′² approaches zero as n → ∞.
9.31 If X is a binomial random variable, show that
(a) P̂ = X/n is an unbiased estimator of p;
(b) P′ = (X + √n/2)/(n + √n) is a biased estimator of p.
9.32 Show that the estimator P′ of Exercise 9.31(b)
becomes unbiased as n → ∞.
9.33 Compare S² and S′² (see Exercise 9.29), the
two estimators of σ², to determine which is more
efficient. Assume these estimators are found using
X1, X2, . . . , Xn, independent random variables from
n(x; μ, σ). Which estimator is more efficient consid-
ering only the variance of the estimators? [Hint: Make
use of Theorem 8.4 and the fact that the variance of a
χ² random variable with v degrees of freedom is 2v,
from Section 6.7.]
9.34 Consider Exercise 9.33. Use the MSE discussed
in Exercise 9.28 to determine which estimator is more
efficient. Write out

MSE(S′²)/MSE(S²).
9.8 Two Samples: Estimating the Difference
between Two Means

If we have two populations with means μ1 and μ2 and variances σ1² and σ2², re-
spectively, a point estimator of the difference between μ1 and μ2 is given by the
statistic X̄1 − X̄2. Therefore, to obtain a point estimate of μ1 − μ2, we shall select
two independent random samples, one from each population, of sizes n1 and n2,
and compute x̄1 − x̄2, the difference of the sample means. Clearly, we must consider
the sampling distribution of X̄1 − X̄2.

According to Theorem 8.3, we can expect the sampling distribution of X̄1 − X̄2
to be approximately normally distributed with mean μX̄1−X̄2 = μ1 − μ2 and
standard deviation σX̄1−X̄2 = √(σ1²/n1 + σ2²/n2). Therefore, we can assert with a
probability of 1 − α that the standard normal variable

Z = [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2)
will fall between −zα/2 and zα/2. Referring once again to Figure 9.2, we write

P(−zα/2 < Z < zα/2) = 1 − α.

Substituting for Z, we state equivalently that

P(−zα/2 < [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2) < zα/2) = 1 − α,
which leads to the following 100(1 − α)% confidence interval for μ1 − μ2.
Confidence Interval for μ1 − μ2, σ1² and σ2² Known
If x̄1 and x̄2 are means of independent random samples of sizes n1 and n2
from populations with known variances σ1² and σ2², respectively, a 100(1 − α)%
confidence interval for μ1 − μ2 is given by

(x̄1 − x̄2) − zα/2 √(σ1²/n1 + σ2²/n2) < μ1 − μ2 < (x̄1 − x̄2) + zα/2 √(σ1²/n1 + σ2²/n2),

where zα/2 is the z-value leaving an area of α/2 to the right.
The degree of confidence is exact when samples are selected from normal popula-
tions. For nonnormal populations, the Central Limit Theorem allows for a good
approximation for reasonable size samples.
The Experimental Conditions and the Experimental Unit
For the case of confidence interval estimation on the difference between two means,
we need to consider the experimental conditions in the data-taking process. It is
assumed that we have two independent random samples from distributions with
means μ1 and μ2, respectively. It is important that experimental conditions emu-
late this ideal described by these assumptions as closely as possible. Quite often,
the experimenter should plan the strategy of the experiment accordingly. For al-
most any study of this type, there is a so-called experimental unit, which is that
part of the experiment that produces experimental error and is responsible for the
population variance we refer to as σ2
. In a drug study, the experimental unit is
the patient or subject. In an agricultural experiment, it may be a plot of ground.
In a chemical experiment, it may be a quantity of raw materials. It is important
that differences between the experimental units have minimal impact on the re-
sults. The experimenter will have a degree of insurance that experimental units
will not bias results if the conditions that define the two populations are randomly
assigned to the experimental units. We shall again focus on randomization in future
chapters that deal with hypothesis testing.
Example 9.10: A study was conducted in which two types of engines, A and B, were compared.
Gas mileage, in miles per gallon, was measured. Fifty experiments were conducted
using engine type A and 75 experiments were done with engine type B. The
gasoline used and other conditions were held constant. The average gas mileage
was 36 miles per gallon for engine A and 42 miles per gallon for engine B. Find a
96% confidence interval on μB − μA, where μA and μB are population mean gas
mileages for engines A and B, respectively. Assume that the population standard
deviations are 6 and 8 for engines A and B, respectively.
Solution: The point estimate of μB − μA is x̄B − x̄A = 42 − 36 = 6. Using α = 0.04, we find
z0.02 = 2.05 from Table A.3. Hence, with substitution in the formula above, the
96% confidence interval is

6 − 2.05 √(64/75 + 36/50) < μB − μA < 6 + 2.05 √(64/75 + 36/50),

or simply 3.43 < μB − μA < 8.57.
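The arithmetic of Example 9.10 can be reproduced with a short script. This is only a sketch, assuming Python with SciPy available; the helper name `two_mean_ci_known_var` is ours, not from the text, and because the exact z-value (≈ 2.054) is used rather than the rounded table value 2.05, the endpoints come out near 3.42 and 8.58 rather than 3.43 and 8.57.

```python
from math import sqrt
from scipy.stats import norm

def two_mean_ci_known_var(xbar1, xbar2, var1, var2, n1, n2, conf=0.95):
    """CI for mu1 - mu2 when both population variances are known."""
    z = norm.ppf(1 - (1 - conf) / 2)      # z_{alpha/2}
    se = sqrt(var1 / n1 + var2 / n2)      # std. error of xbar1 - xbar2
    diff = xbar1 - xbar2
    return diff - z * se, diff + z * se

# Engine B minus engine A at 96% confidence (sigma_B = 8, sigma_A = 6)
lo, hi = two_mean_ci_known_var(42, 36, 64, 36, 75, 50, conf=0.96)
```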
This procedure for estimating the difference between two means is applicable
if σ1² and σ2² are known. If the variances are not known and the two distributions
involved are approximately normal, the t-distribution becomes involved, as in the
case of a single sample. If one is not willing to assume normality, large samples (say
greater than 30) will allow the use of s1 and s2 in place of σ1 and σ2, respectively,
with the rationale that s1 ≈ σ1 and s2 ≈ σ2. Again, of course, the confidence
interval is an approximate one.
Variances Unknown but Equal

Consider the case where σ1² and σ2² are unknown. If σ1² = σ2² = σ², we obtain a
standard normal variable of the form

Z = [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ²[(1/n1) + (1/n2)]).

According to Theorem 8.4, the two random variables

(n1 − 1)S1²/σ²  and  (n2 − 1)S2²/σ²

have chi-squared distributions with n1 − 1 and n2 − 1 degrees of freedom, respec-
tively. Furthermore, they are independent chi-squared variables, since the random
samples were selected independently. Consequently, their sum

V = (n1 − 1)S1²/σ² + (n2 − 1)S2²/σ² = [(n1 − 1)S1² + (n2 − 1)S2²]/σ²

has a chi-squared distribution with v = n1 + n2 − 2 degrees of freedom.

Since the preceding expressions for Z and V can be shown to be independent,
it follows from Theorem 8.5 that the statistic

T = {[(X̄1 − X̄2) − (μ1 − μ2)] / √(σ²[(1/n1) + (1/n2)])} / √{[(n1 − 1)S1² + (n2 − 1)S2²] / [σ²(n1 + n2 − 2)]}

has the t-distribution with v = n1 + n2 − 2 degrees of freedom.
A point estimate of the unknown common variance σ² can be obtained by
pooling the sample variances. Denoting the pooled estimator by Sp², we have the
following.

Pooled Estimate of Variance
Sp² = [(n1 − 1)S1² + (n2 − 1)S2²] / (n1 + n2 − 2).

Substituting Sp² in the T statistic, we obtain the less cumbersome form

T = [(X̄1 − X̄2) − (μ1 − μ2)] / [Sp √((1/n1) + (1/n2))].

Using the T statistic, we have

P(−tα/2 < T < tα/2) = 1 − α,

where tα/2 is the t-value with n1 + n2 − 2 degrees of freedom, above which we find
an area of α/2. Substituting for T in the inequality, we write

P(−tα/2 < [(X̄1 − X̄2) − (μ1 − μ2)] / [Sp √((1/n1) + (1/n2))] < tα/2) = 1 − α.

After the usual mathematical manipulations, the difference of the sample means
x̄1 − x̄2 and the pooled variance are computed and then the following 100(1 − α)%
confidence interval for μ1 − μ2 is obtained.

The value of sp² is easily seen to be a weighted average of the two sample
variances s1² and s2², where the weights are the degrees of freedom.
Confidence Interval for μ1 − μ2, σ1² = σ2² but Both Unknown
If x̄1 and x̄2 are the means of independent random samples of sizes n1 and n2,
respectively, from approximately normal populations with unknown but equal
variances, a 100(1 − α)% confidence interval for μ1 − μ2 is given by

(x̄1 − x̄2) − tα/2 sp √(1/n1 + 1/n2) < μ1 − μ2 < (x̄1 − x̄2) + tα/2 sp √(1/n1 + 1/n2),

where sp is the pooled estimate of the population standard deviation and tα/2
is the t-value with v = n1 + n2 − 2 degrees of freedom, leaving an area of α/2
to the right.
Example 9.11: The article “Macroinvertebrate Community Structure as an Indicator of Acid Mine
Pollution,” published in the Journal of Environmental Pollution, reports on an in-
vestigation undertaken in Cane Creek, Alabama, to determine the relationship
between selected physiochemical parameters and different measures of macroinver-
tebrate community structure. One facet of the investigation was an evaluation of
the effectiveness of a numerical species diversity index to indicate aquatic degrada-
tion due to acid mine drainage. Conceptually, a high index of macroinvertebrate
species diversity should indicate an unstressed aquatic system, while a low diversity
index should indicate a stressed aquatic system.
Two independent sampling stations were chosen for this study, one located
downstream from the acid mine discharge point and the other located upstream.
For 12 monthly samples collected at the downstream station, the species diversity
index had a mean value x̄1 = 3.11 and a standard deviation s1 = 0.771, while
10 monthly samples collected at the upstream station had a mean index value
x̄2 = 2.04 and a standard deviation s2 = 0.448. Find a 90% confidence interval for
the difference between the population means for the two locations, assuming that
the populations are approximately normally distributed with equal variances.
Solution: Let μ1 and μ2 represent the population means, respectively, for the species diversity
indices at the downstream and upstream stations. We wish to find a 90% confidence
interval for μ1 − μ2. Our point estimate of μ1 − μ2 is

x̄1 − x̄2 = 3.11 − 2.04 = 1.07.

The pooled estimate, sp², of the common variance, σ², is

sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2) = [(11)(0.771²) + (9)(0.448²)] / (12 + 10 − 2) = 0.417.

Taking the square root, we obtain sp = 0.646. Using α = 0.1, we find in Table A.4
that t0.05 = 1.725 for v = n1 + n2 − 2 = 20 degrees of freedom. Therefore, the 90%
confidence interval for μ1 − μ2 is

1.07 − (1.725)(0.646) √(1/12 + 1/10) < μ1 − μ2 < 1.07 + (1.725)(0.646) √(1/12 + 1/10),

which simplifies to 0.593 < μ1 − μ2 < 1.547.
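The pooled-variance computation of Example 9.11 can be checked numerically. The sketch below assumes Python with SciPy available; the function name `pooled_t_ci` is our own, not from the text.

```python
from math import sqrt
from scipy.stats import t

def pooled_t_ci(xbar1, s1, n1, xbar2, s2, n2, conf=0.90):
    """CI for mu1 - mu2 under normality with equal, unknown variances."""
    df = n1 + n2 - 2
    sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)  # pooled std. dev.
    tcrit = t.ppf(1 - (1 - conf) / 2, df)                  # t_{alpha/2}
    half = tcrit * sp * sqrt(1 / n1 + 1 / n2)
    diff = xbar1 - xbar2
    return diff - half, diff + half

# Example 9.11: downstream (n1 = 12) vs. upstream (n2 = 10) diversity indices
lo, hi = pooled_t_ci(3.11, 0.771, 12, 2.04, 0.448, 10, conf=0.90)
```

The endpoints agree with the text's interval (0.593, 1.547) to three decimals.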
Interpretation of the Confidence Interval
For the case of a single parameter, the confidence interval simply provides error
bounds on the parameter. Values contained in the interval should be viewed as
reasonable values given the experimental data. In the case of a difference between
two means, the interpretation can be extended to one of comparing the two means.
For example, if we have high confidence that a difference μ1 − μ2 is positive, we
would certainly infer that μ1 > μ2 with little risk of being in error. For example, in
Example 9.11, we are 90% confident that the interval from 0.593 to 1.547 contains
the difference of the population means for values of the species diversity index at
the two stations. The fact that both confidence limits are positive indicates that,
on the average, the index for the station located downstream from the discharge
point is greater than the index for the station located upstream.
Equal Sample Sizes
The procedure for constructing confidence intervals for μ1 − μ2 with σ1 = σ2 = σ
unknown requires the assumption that the populations are normal. Slight depar-
tures from either the equal variance or the normality assumption do not seriously
alter the degree of confidence for our interval. (A procedure is presented in Chap-
ter 10 for testing the equality of two unknown population variances based on the
information provided by the sample variances.) If the population variances are
considerably different, we still obtain reasonable results when the populations are
normal, provided that n1 = n2. Therefore, in planning an experiment, one should
make every effort to equalize the size of the samples.
Unknown and Unequal Variances

Let us now consider the problem of finding an interval estimate of μ1 − μ2 when
the unknown population variances are not likely to be equal. The statistic most
often used in this case is

T′ = [(X̄1 − X̄2) − (μ1 − μ2)] / √(S1²/n1 + S2²/n2),

which has approximately a t-distribution with v degrees of freedom, where

v = (s1²/n1 + s2²/n2)² / {[(s1²/n1)²/(n1 − 1)] + [(s2²/n2)²/(n2 − 1)]}.

Since v is seldom an integer, we round it down to the nearest whole number. The
above estimate of the degrees of freedom is called the Satterthwaite approximation
(Satterthwaite, 1946, in the Bibliography).

Using the statistic T′, we write

P(−tα/2 < T′ < tα/2) ≈ 1 − α,

where tα/2 is the value of the t-distribution with v degrees of freedom, above which
we find an area of α/2. Substituting for T′ in the inequality and following the
same steps as before, we state the final result.
Confidence Interval for μ1 − μ2, σ1² ≠ σ2² and Both Unknown
If x̄1 and s1² and x̄2 and s2² are the means and variances of independent random
samples of sizes n1 and n2, respectively, from approximately normal populations
with unknown and unequal variances, an approximate 100(1 − α)% confidence
interval for μ1 − μ2 is given by

(x̄1 − x̄2) − tα/2 √(s1²/n1 + s2²/n2) < μ1 − μ2 < (x̄1 − x̄2) + tα/2 √(s1²/n1 + s2²/n2),

where tα/2 is the t-value with

v = (s1²/n1 + s2²/n2)² / {[(s1²/n1)²/(n1 − 1)] + [(s2²/n2)²/(n2 − 1)]}

degrees of freedom, leaving an area of α/2 to the right.
Note that the expression for v above involves random variables, and thus v is
an estimate of the degrees of freedom. In applications, this estimate will not result
in a whole number, and thus the analyst must round down to the nearest integer
to achieve the desired confidence.

Before we illustrate the above confidence interval with an example, we should
point out that all the confidence intervals on μ1 − μ2 are of the same general form
as those on a single mean; namely, they can be written as

point estimate ± tα/2 ŝ.e.(point estimate)

or

point estimate ± zα/2 s.e.(point estimate).

For example, in the case where σ1 = σ2 = σ, the estimated standard error of
x̄1 − x̄2 is sp √(1/n1 + 1/n2). For the case where σ1² ≠ σ2²,

ŝ.e.(x̄1 − x̄2) = √(s1²/n1 + s2²/n2).
Example 9.12: A study was conducted by the Department of Zoology at Virginia Tech to
estimate the difference in the amounts of the chemical orthophosphorus measured
at two different stations on the James River. Orthophosphorus was measured in
milligrams per liter. Fifteen samples were collected from station 1, and 12 samples
were obtained from station 2. The 15 samples from station 1 had an average
orthophosphorus content of 3.84 milligrams per liter and a standard deviation of
3.07 milligrams per liter, while the 12 samples from station 2 had an average
content of 1.49 milligrams per liter and a standard deviation of 0.80 milligram
per liter. Find a 95% confidence interval for the difference in the true average
orthophosphorus contents at these two stations, assuming that the observations
came from normal populations with different variances.
Solution: For station 1, we have x̄1 = 3.84, s1 = 3.07, and n1 = 15. For station 2, x̄2 = 1.49,
s2 = 0.80, and n2 = 12. We wish to find a 95% confidence interval for μ1 − μ2.
Since the population variances are assumed to be unequal, we can only find an
approximate 95% confidence interval based on the t-distribution with v degrees of
freedom, where

v = (3.07²/15 + 0.80²/12)² / {[(3.07²/15)²/14] + [(0.80²/12)²/11]} = 16.3 ≈ 16.

Our point estimate of μ1 − μ2 is

x̄1 − x̄2 = 3.84 − 1.49 = 2.35.

Using α = 0.05, we find in Table A.4 that t0.025 = 2.120 for v = 16 degrees of
freedom. Therefore, the 95% confidence interval for μ1 − μ2 is

2.35 − 2.120 √(3.07²/15 + 0.80²/12) < μ1 − μ2 < 2.35 + 2.120 √(3.07²/15 + 0.80²/12),

which simplifies to 0.60 < μ1 − μ2 < 4.10. Hence, we are 95% confident that the
interval from 0.60 to 4.10 milligrams per liter contains the difference of the true
average orthophosphorus contents for these two locations.
When two population variances are unknown, the assumption of equal vari-
ances or unequal variances may be precarious. In Section 10.10, a procedure will
be introduced that will aid in discriminating between the equal variance and the
unequal variance situation.
9.9 Paired Observations
At this point, we shall consider estimation procedures for the difference of two
means when the samples are not independent and the variances of the two popu-
lations are not necessarily equal. The situation considered here deals with a very
special experimental condition, namely that of paired observations. Unlike in the
situation described earlier, the conditions of the two populations are not assigned
randomly to experimental units. Rather, each homogeneous experimental unit re-
ceives both population conditions; as a result, each experimental unit has a pair
of observations, one for each population. For example, if we run a test on a new
diet using 15 individuals, the weights before and after going on the diet form the
information for our two samples. The two populations are “before” and “after,”
and the experimental unit is the individual. Obviously, the observations in a pair
have something in common. To determine if the diet is effective, we consider the
differences d1, d2, . . . , dn in the paired observations. These differences are the
values of a random sample D1, D2, . . . , Dn from a population of differences that we
shall assume to be normally distributed with mean μD = μ1 − μ2 and variance σD².
We estimate σD² by sd², the variance of the differences that constitute our sample.
The point estimator of μD is given by D̄.
When Should Pairing Be Done?
Pairing observations in an experiment is a strategy that can be employed in many
fields of application. The reader will be exposed to this concept in material related
to hypothesis testing in Chapter 10 and experimental design issues in Chapters 13
and 15. Selecting experimental units that are relatively homogeneous (within the
units) and allowing each unit to experience both population conditions reduces the
effective experimental error variance (in this case, σD²). The reader may visualize
the ith pair difference as

Di = X1i − X2i.

Since the two observations are taken on the same experimental unit, they are not
independent and, in fact,

Var(Di) = Var(X1i − X2i) = σ1² + σ2² − 2 Cov(X1i, X2i).

Now, intuitively, we expect that σD² should be reduced because of the similarity in
nature of the "errors" of the two observations within a given experimental unit,
and this comes through in the expression above. One certainly expects that if the
unit is homogeneous, the covariance is positive. As a result, the gain in quality of
the confidence interval over that obtained without pairing will be greatest when
there is homogeneity within units and large differences as one goes from unit to
unit. One should keep in mind that the performance of the confidence interval will
depend on the standard error of D̄, which is, of course, σD/√n, where n is the
number of pairs. As we indicated earlier, the intent of pairing is to reduce σD.
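The variance-reduction argument above can be illustrated by a small simulation under assumed parameters (all numbers here are hypothetical, not from the text). A shared unit effect u makes the two observations on a unit positively correlated; it cancels in the difference:

```python
import random
from statistics import variance

random.seed(1)
n = 10000
# Each experimental unit carries its own effect u, shared by both
# observations on that unit; this shared term is what makes
# Cov(X1i, X2i) positive and cancels in the difference.
u = [random.gauss(0, 2) for _ in range(n)]      # unit-to-unit variation
x1 = [ui + random.gauss(5, 1) for ui in u]      # condition 1 on each unit
x2 = [ui + random.gauss(3, 1) for ui in u]      # condition 2 on same unit
d = [a - b for a, b in zip(x1, x2)]             # paired differences

# Var(X1) and Var(X2) are each about 2^2 + 1 = 5, but Var(D) is only
# about 1 + 1 = 2 because the shared unit effect cancels.
```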
Tradeoff between Reducing Variance and Losing Degrees of Freedom
Comparing the confidence intervals obtained with and without pairing makes ap-
parent that there is a tradeoff involved. Although pairing should indeed reduce
variance and hence reduce the standard error of the point estimate, the degrees of
freedom are reduced by reducing the problem to a one-sample problem. As a result,
the tα/2 point attached to the standard error is adjusted accordingly. Thus, pair-
ing may be counterproductive. This would certainly be the case if one experienced
only a modest reduction in variance (through σD²) by pairing.
Another illustration of pairing involves choosing n pairs of subjects, with each
pair having a similar characteristic such as IQ, age, or breed, and then selecting
one member of each pair at random to yield a value of X1, leaving the other
member to provide the value of X2. In this case, X1 and X2 might represent
the grades obtained by two individuals of equal IQ when one of the individuals is
assigned at random to a class using the conventional lecture approach while the
other individual is assigned to a class using programmed materials.
A 100(1 − α)% confidence interval for μD can be established by writing

P(−tα/2 < T < tα/2) = 1 − α,

where T = (D̄ − μD)/(Sd/√n) and tα/2, as before, is a value of the t-distribution with n − 1
degrees of freedom.

It is now a routine procedure to replace T by its definition in the inequality
above and carry out the mathematical steps that lead to the following 100(1 − α)%
confidence interval for μ1 − μ2 = μD.
Confidence Interval for μD = μ1 − μ2 for Paired Observations
If d̄ and sd are the mean and standard deviation, respectively, of the normally
distributed differences of n random pairs of measurements, a 100(1 − α)% con-
fidence interval for μD = μ1 − μ2 is

d̄ − tα/2 sd/√n < μD < d̄ + tα/2 sd/√n,

where tα/2 is the t-value with v = n − 1 degrees of freedom, leaving an area of
α/2 to the right.
Example 9.13: A study published in Chemosphere reported the levels of the dioxin TCDD of 20
Massachusetts Vietnam veterans who were possibly exposed to Agent Orange. The
TCDD levels in plasma and in fat tissue are listed in Table 9.1.
Find a 95% confidence interval for μ1 − μ2, where μ1 and μ2 represent the
true mean TCDD levels in plasma and in fat tissue, respectively. Assume the
distribution of the differences to be approximately normal.
Table 9.1: Data for Example 9.13

            TCDD        TCDD                          TCDD        TCDD
           Level in    Level in                      Level in    Level in
Veteran     Plasma    Fat Tissue    di    Veteran     Plasma    Fat Tissue    di
   1          2.5         4.9      −2.4      11         6.9         7.0      −0.1
   2          3.1         5.9      −2.8      12         3.3         2.9       0.4
   3          2.1         4.4      −2.3      13         4.6         4.6       0.0
   4          3.5         6.9      −3.4      14         1.6         1.4       0.2
   5          3.1         7.0      −3.9      15         7.2         7.7      −0.5
   6          1.8         4.2      −2.4      16         1.8         1.1       0.7
   7          6.0        10.0      −4.0      17        20.0        11.0       9.0
   8          3.0         5.5      −2.5      18         2.0         2.5      −0.5
   9         36.0        41.0      −5.0      19         2.5         2.3       0.2
  10          4.7         4.4       0.3      20         4.1         2.5       1.6

Source: Schecter, A. et al. "Partitioning of 2,3,7,8-chlorinated dibenzo-p-dioxins and dibenzofurans between
adipose tissue and plasma lipid of 20 Massachusetts Vietnam veterans," Chemosphere, Vol. 20, Nos. 7–9,
1990, pp. 954–955 (Tables I and II).
Solution: We wish to find a 95% confidence interval for μ1 − μ2. Since the observations
are paired, μ1 − μ2 = μD. The point estimate of μD is d̄ = −0.87. The standard
deviation, sd, of the sample differences is

sd = √[Σⁿᵢ₌₁ (di − d̄)² / (n − 1)] = √(168.4220/19) = 2.9773.

Using α = 0.05, we find in Table A.4 that t0.025 = 2.093 for v = n − 1 = 19 degrees
of freedom. Therefore, the 95% confidence interval is

−0.8700 − (2.093)(2.9773/√20) < μD < −0.8700 + (2.093)(2.9773/√20),
or simply −2.2634 < μD < 0.5234, from which we can conclude that there is no
significant difference between the mean TCDD level in plasma and the mean TCDD
level in fat tissue.
Exercises
9.35 A random sample of size n1 = 25, taken from a
normal population with a standard deviation σ1 = 5,
has a mean x̄1 = 80. A second random sample of size
n2 = 36, taken from a different normal population with
a standard deviation σ2 = 3, has a mean x̄2 = 75. Find
a 94% confidence interval for μ1 − μ2.
9.36 Two kinds of thread are being compared for
strength. Fifty pieces of each type of thread are tested
under similar conditions. Brand A has an average ten-
sile strength of 78.3 kilograms with a standard devi-
ation of 5.6 kilograms, while brand B has an average
tensile strength of 87.2 kilograms with a standard de-
viation of 6.3 kilograms. Construct a 95% confidence
interval for the difference of the population means.
9.37 A study was conducted to determine if a cer-
tain treatment has any effect on the amount of metal
removed in a pickling operation. A random sample of
100 pieces was immersed in a bath for 24 hours without
the treatment, yielding an average of 12.2 millimeters
of metal removed and a sample standard deviation of
1.1 millimeters. A second sample of 200 pieces was
exposed to the treatment, followed by the 24-hour im-
mersion in the bath, resulting in an average removal
of 9.1 millimeters of metal with a sample standard de-
viation of 0.9 millimeter. Compute a 98% confidence
interval estimate for the difference between the popu-
lation means. Does the treatment appear to reduce the
mean amount of metal removed?
9.38 Two catalysts in a batch chemical process are
being compared for their effect on the output of the
process reaction. A sample of 12 batches was prepared
using catalyst 1, and a sample of 10 batches was pre-
pared using catalyst 2. The 12 batches for which cat-
alyst 1 was used in the reaction gave an average yield
of 85 with a sample standard deviation of 4, and the
10 batches for which catalyst 2 was used gave an aver-
age yield of 81 and a sample standard deviation of 5.
Find a 90% confidence interval for the difference be-
tween the population means, assuming that the pop-
ulations are approximately normally distributed with
equal variances.
9.39 Students may choose between a 3-semester-hour
physics course without labs and a 4-semester-hour
course with labs. The final written examination is the
same for each section. If 12 students in the section with
labs made an average grade of 84 with a standard devi-
ation of 4, and 18 students in the section without labs
made an average grade of 77 with a standard deviation
of 6, find a 99% confidence interval for the difference
between the average grades for the two courses. As-
sume the populations to be approximately normally
distributed with equal variances.
9.40 In a study conducted at Virginia Tech on the
development of ectomycorrhizal, a symbiotic relation-
ship between the roots of trees and a fungus, in which
minerals are transferred from the fungus to the trees
and sugars from the trees to the fungus, 20 northern
red oak seedlings exposed to the fungus Pisolithus tinc-
torus were grown in a greenhouse. All seedlings were
planted in the same type of soil and received the same
amount of sunshine and water. Half received no ni-
trogen at planting time, to serve as a control, and the
other half received 368 ppm of nitrogen in the form
NaNO3. The stem weights, in grams, at the end of 140
days were recorded as follows:
No Nitrogen Nitrogen
0.32 0.26
0.53 0.43
0.28 0.47
0.37 0.49
0.47 0.52
0.43 0.75
0.36 0.79
0.42 0.86
0.38 0.62
0.43 0.46
Construct a 95% confidence interval for the difference
in the mean stem weight between seedlings that re-
ceive no nitrogen and those that receive 368 ppm of
nitrogen. Assume the populations to be normally dis-
tributed with equal variances.
9.41 The following data represent the length of time,
in days, to recovery for patients randomly treated with
one of two medications to clear up severe bladder in-
fections:
Medication 1 Medication 2
n1 = 14 n2 = 16
x̄1 = 17 x̄2 = 19
s1² = 1.5 s2² = 1.8
Find a 99% confidence interval for the difference μ2−μ1
in the mean recovery times for the two medications, as-
suming normal populations with equal variances.
9.42 An experiment reported in Popular Science
compared fuel economies for two types of similarly
equipped diesel mini-trucks. Let us suppose that 12
Volkswagen and 10 Toyota trucks were tested in 90-
kilometer-per-hour steady-paced trials. If the 12 Volks-
wagen trucks averaged 16 kilometers per liter with a
standard deviation of 1.0 kilometer per liter and the 10
Toyota trucks averaged 11 kilometers per liter with a
standard deviation of 0.8 kilometer per liter, construct
a 90% confidence interval for the difference between the
average kilometers per liter for these two mini-trucks.
Assume that the distances per liter for the truck mod-
els are approximately normally distributed with equal
variances.
9.43 A taxi company is trying to decide whether to
purchase brand A or brand B tires for its fleet of taxis.
To estimate the difference in the two brands, an exper-
iment is conducted using 12 of each brand. The tires
are run until they wear out. The results are
Brand A: x̄1 = 36,300 kilometers,
s1 = 5000 kilometers.
Brand B: x̄2 = 38,100 kilometers,
s2 = 6100 kilometers.
Compute a 95% confidence interval for μA − μB as-
suming the populations to be approximately normally
distributed. You may not assume that the variances
are equal.
9.44 Referring to Exercise 9.43, find a 99% confidence
interval for μ1 − μ2 if tires of the two brands are as-
signed at random to the left and right rear wheels of
8 taxis and the following distances, in kilometers, are
recorded:
Taxi Brand A Brand B
1 34,400 36,700
2 45,500 46,800
3 36,700 37,700
4 32,000 31,100
5 48,400 47,800
6 32,800 36,400
7 38,100 38,900
8 30,100 31,500
Assume that the differences of the distances are ap-
proximately normally distributed.
9.45 The federal government awarded grants to the
agricultural departments of 9 universities to test the
yield capabilities of two new varieties of wheat. Each
variety was planted on a plot of equal area at each
university, and the yields, in kilograms per plot, were
recorded as follows:
University
Variety 1 2 3 4 5 6 7 8 9
1 38 23 35 41 44 29 37 31 38
2 45 25 31 38 50 33 36 40 43
Find a 95% confidence interval for the mean difference
between the yields of the two varieties, assuming the
differences of yields to be approximately normally dis-
tributed. Explain why pairing is necessary in this prob-
lem.
9.46 The following data represent the running times
of films produced by two motion-picture companies.
Company Time (minutes)
I 103 94 110 87 98
II 97 82 123 92 175 88 118
Compute a 90% confidence interval for the difference
between the average running times of films produced by
the two companies. Assume that the running-time dif-
ferences are approximately normally distributed with
unequal variances.
9.47 Fortune magazine (March 1997) reported the to-
tal returns to investors for the 10 years prior to 1996
and also for 1996 for 431 companies. The total returns
for 10 of the companies are listed below. Find a 95%
confidence interval for the mean change in percent re-
turn to investors.
Total Return
to Investors
Company 1986–96 1996
Coca-Cola 29.8% 43.3%
Mirage Resorts 27.9% 25.4%
Merck 22.1% 24.0%
Microsoft 44.5% 88.3%
Johnson & Johnson 22.2% 18.1%
Intel 43.8% 131.2%
Pfizer 21.7% 34.0%
Procter & Gamble 21.9% 32.1%
Berkshire Hathaway 28.3% 6.2%
S&P 500 11.8% 20.3%
9.48 An automotive company is considering two
types of batteries for its automobile. Sample infor-
mation on battery life is collected for 20 batteries of
type A and 20 batteries of type B. The summary
statistics are x̄A = 32.91, x̄B = 30.47, sA = 1.57,
and sB = 1.74. Assume the data on each battery are
normally distributed and assume σA = σB.
(a) Find a 95% confidence interval on μA − μB.
(b) Draw a conclusion from (a) that provides insight
into whether A or B should be adopted.
9.49 Two different brands of latex paint are being
considered for use. Fifteen specimens of each type of
296 Chapter 9 One- and Two-Sample Estimation Problems
paint were selected, and the drying times, in hours,
were as follows:
Paint A Paint B
3.5 2.7 3.9 4.2 3.6 4.7 3.9 4.5 5.5 4.0
2.7 3.3 5.2 4.2 2.9 5.3 4.3 6.0 5.2 3.7
4.4 5.2 4.0 4.1 3.4 5.5 6.2 5.1 5.4 4.8
Assume the drying time is normally distributed with
σA = σB. Find a 95% confidence interval on μB − μA,
where μA and μB are the mean drying times.
9.50 Two levels (low and high) of insulin doses are
given to two groups of diabetic rats to check the insulin-
binding capacity, yielding the following data:
Low dose: n1 = 8 x̄1 = 1.98 s1 = 0.51
High dose: n2 = 13 x̄2 = 1.30 s2 = 0.35
Assume that the variances are equal. Give a 95% con-
fidence interval for the difference in the true average
insulin-binding capacity between the two samples.
9.10 Single Sample: Estimating a Proportion
A point estimator of the proportion p in a binomial experiment is given by the statistic P̂ = X/n, where X represents the number of successes in n trials. Therefore, the sample proportion p̂ = x/n will be used as the point estimate of the parameter p.
If the unknown proportion p is not expected to be too close to 0 or 1, we can establish a confidence interval for p by considering the sampling distribution of P̂. Designating a failure in each binomial trial by the value 0 and a success by the value 1, the number of successes, x, can be interpreted as the sum of n values consisting only of 0s and 1s, and p̂ is just the sample mean of these n values. Hence, by the Central Limit Theorem, for n sufficiently large, P̂ is approximately normally distributed with mean

μ_P̂ = E(P̂) = E(X/n) = np/n = p

and variance

σ²_P̂ = σ²_{X/n} = σ²_X/n² = npq/n² = pq/n.
Therefore, we can assert that

P(−zα/2 < Z < zα/2) = 1 − α, with Z = (P̂ − p)/√(pq/n),

and zα/2 is the value above which we find an area of α/2 under the standard normal curve. Substituting for Z, we write

P(−zα/2 < (P̂ − p)/√(pq/n) < zα/2) = 1 − α.
When n is large, very little error is introduced by substituting the point estimate p̂ = x/n for the p under the radical sign. Then we can write

P(P̂ − zα/2√(p̂q̂/n) < p < P̂ + zα/2√(p̂q̂/n)) ≈ 1 − α.
On the other hand, by solving for p in the quadratic inequality above,

−zα/2 < (P̂ − p)/√(pq/n) < zα/2,

we obtain another form of the confidence interval for p with limits

(p̂ + z²α/2/(2n))/(1 + z²α/2/n) ± [zα/2/(1 + z²α/2/n)]√(p̂q̂/n + z²α/2/(4n²)).

For a random sample of size n, the sample proportion p̂ = x/n is computed, and the following approximate 100(1 − α)% confidence intervals for p can be obtained.
Large-Sample Confidence Intervals for p
If p̂ is the proportion of successes in a random sample of size n and q̂ = 1 − p̂, an approximate 100(1 − α)% confidence interval for the binomial parameter p is given by (method 1)

p̂ − zα/2√(p̂q̂/n) < p < p̂ + zα/2√(p̂q̂/n)

or by (method 2)

(p̂ + z²α/2/(2n))/(1 + z²α/2/n) − [zα/2/(1 + z²α/2/n)]√(p̂q̂/n + z²α/2/(4n²))
< p <
(p̂ + z²α/2/(2n))/(1 + z²α/2/n) + [zα/2/(1 + z²α/2/n)]√(p̂q̂/n + z²α/2/(4n²)),

where zα/2 is the z-value leaving an area of α/2 to the right.
When n is small and the unknown proportion p is believed to be close to 0 or to
1, the confidence-interval procedure established here is unreliable and, therefore,
should not be used. To be on the safe side, one should require both np̂ and nq̂
to be greater than or equal to 5. The methods for finding a confidence interval
for the binomial parameter p are also applicable when the binomial distribution
is being used to approximate the hypergeometric distribution, that is, when n is
small relative to N, as illustrated by Example 9.14.
Note that although method 2 yields more accurate results, it is more compli-
cated to calculate, and the gain in accuracy that it provides diminishes when the
sample size is large enough. Hence, method 1 is commonly used in practice.
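As an aside (not part of the text), the large-sample reasoning behind method 1 can be checked by simulation: draw many binomial samples from a known proportion and count how often the nominal 95% interval covers it. A minimal Python sketch, where the true p, sample size, and number of trials are illustrative assumptions:

```python
import math
import random

# Monte Carlo check of the method-1 interval p_hat ± z*sqrt(p_hat*q_hat/n).
# p_true, n, and trials below are illustrative assumptions, not from the text.
random.seed(1)
p_true, n, trials, z = 0.60, 500, 2000, 1.96
covered = 0
for _ in range(trials):
    x = sum(random.random() < p_true for _ in range(n))  # one binomial draw
    p_hat = x / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    covered += (p_hat - half < p_true < p_hat + half)
coverage = covered / trials
print(coverage)  # empirical coverage, close to the nominal 0.95
```

Rerunning with a small n or with p near 0 or 1 shows the empirical coverage drifting away from 0.95, which is exactly why the np̂ ≥ 5 and nq̂ ≥ 5 safeguard above is needed.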
Example 9.14: In a random sample of n = 500 families owning television sets in the city of Hamil-
ton, Canada, it is found that x = 340 subscribe to HBO. Find a 95% confidence
interval for the actual proportion of families with television sets in this city that
subscribe to HBO.
Solution: The point estimate of p is p̂ = 340/500 = 0.68. Using Table A.3, we find that z0.025 = 1.96. Therefore, using method 1, the 95% confidence interval for p is

0.68 − 1.96√((0.68)(0.32)/500) < p < 0.68 + 1.96√((0.68)(0.32)/500),

which simplifies to 0.6391 < p < 0.7209.
If we use method 2, we can obtain

[0.68 + 1.96²/((2)(500))]/[1 + 1.96²/500] ± [1.96/(1 + 1.96²/500)]√((0.68)(0.32)/500 + 1.96²/((4)(500²))) = 0.6786 ± 0.0408,

which simplifies to 0.6378 < p < 0.7194. Apparently, when n is large (500 here), both methods yield very similar results.
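Both interval formulas are straightforward to script. The sketch below (Python, not part of the text) reproduces the two computations for the Example 9.14 data, with z = 1.96 for 95% confidence:

```python
import math

def ci_method1(p_hat, n, z):
    """Method 1: p_hat ± z*sqrt(p_hat*q_hat/n)."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

def ci_method2(p_hat, n, z):
    """Method 2: the interval from solving the quadratic inequality in p."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Example 9.14: 340 HBO subscribers among n = 500 families
lo1, hi1 = ci_method1(0.68, 500, 1.96)
lo2, hi2 = ci_method2(0.68, 500, 1.96)
print(round(lo1, 4), round(hi1, 4))  # 0.6391 0.7209
print(round(lo2, 4), round(hi2, 4))  # 0.6379 0.7194
```

The tiny discrepancy with the text's 0.6378 comes from the code carrying full precision rather than rounding 0.6786 − 0.0408.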
If p is the center value of a 100(1 − α)% confidence interval, then p̂ estimates p without error. Most of the time, however, p̂ will not be exactly equal to p and the point estimate will be in error. The size of this error will be the positive difference that separates p and p̂, and we can be 100(1 − α)% confident that this difference will not exceed zα/2√(p̂q̂/n). We can readily see this if we draw a diagram of a typical confidence interval, as in Figure 9.6. Here we use method 1 to estimate the error.
Figure 9.6: Error in estimating p by p̂.
Theorem 9.3: If p̂ is used as an estimate of p, we can be 100(1 − α)% confident that the error will not exceed zα/2√(p̂q̂/n).
In Example 9.14, we are 95% confident that the sample proportion p̂ = 0.68
differs from the true proportion p by an amount not exceeding 0.04.
Choice of Sample Size
Let us now determine how large a sample is necessary to ensure that the error in estimating p will be less than a specified amount e. By Theorem 9.3, we must choose n such that zα/2√(p̂q̂/n) = e.
Theorem 9.4: If p̂ is used as an estimate of p, we can be 100(1 − α)% confident that the error will be less than a specified amount e when the sample size is approximately

n = z²α/2 p̂q̂/e².
Theorem 9.4 is somewhat misleading in that we must use p̂ to determine the
sample size n, but p̂ is computed from the sample. If a crude estimate of p can
be made without taking a sample, this value can be used to determine n. Lacking
such an estimate, we could take a preliminary sample of size n ≥ 30 to provide
an estimate of p. Using Theorem 9.4, we could determine approximately how
many observations are needed to provide the desired degree of accuracy. Note that
fractional values of n are rounded up to the next whole number.
Example 9.15: How large a sample is required if we want to be 95% confident that our estimate of p in Example 9.14 is within 0.02 of the true value?
Solution: Let us treat the 500 families as a preliminary sample, providing an estimate p̂ = 0.68. Then, by Theorem 9.4,

n = (1.96)²(0.68)(0.32)/(0.02)² = 2089.8 ≈ 2090.

Therefore, if we base our estimate of p on a random sample of size 2090, we can be 95% confident that our sample proportion will not differ from the true proportion by more than 0.02.
Occasionally, it will be impractical to obtain an estimate of p to be used for determining the sample size for a specified degree of confidence. If this happens, an upper bound for n is established by noting that p̂q̂ = p̂(1 − p̂), which must be at most 1/4, since p̂ must lie between 0 and 1. This fact may be verified by completing the square. Hence

p̂(1 − p̂) = −(p̂² − p̂) = 1/4 − (p̂² − p̂ + 1/4) = 1/4 − (p̂ − 1/2)²,

which is always less than 1/4 except when p̂ = 1/2, and then p̂q̂ = 1/4. Therefore, if we substitute p̂ = 1/2 into the formula for n in Theorem 9.4 when, in fact, p actually differs from 1/2, n will turn out to be larger than necessary for the specified degree of confidence; as a result, our degree of confidence will increase.

Theorem 9.5: If p̂ is used as an estimate of p, we can be at least 100(1 − α)% confident that the error will not exceed a specified amount e when the sample size is

n = z²α/2/(4e²).
Example 9.16: How large a sample is required if we want to be at least 95% confident that our estimate of p in Example 9.14 is within 0.02 of the true value?
Solution: Unlike in Example 9.15, we shall now assume that no preliminary sample has been taken to provide an estimate of p. Consequently, we can be at least 95% confident that our sample proportion will not differ from the true proportion by more than 0.02 if we choose a sample of size

n = (1.96)²/((4)(0.02)²) = 2401.
Comparing the results of Examples 9.15 and 9.16, we see that information concern-
ing p, provided by a preliminary sample or from experience, enables us to choose
a smaller sample while maintaining our required degree of accuracy.
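Theorems 9.4 and 9.5 can be folded into one small helper (a Python sketch, not from the text): use p̂q̂ when a preliminary estimate is available, and the conservative bound 1/4 otherwise.

```python
import math

def sample_size(e, z, p_hat=None):
    """Theorem 9.4 when a preliminary estimate p_hat is supplied;
    Theorem 9.5 (p_hat*q_hat bounded by 1/4) when it is not."""
    pq = p_hat * (1 - p_hat) if p_hat is not None else 0.25
    # round up, with a small tolerance guarding against floating-point noise
    return math.ceil(z**2 * pq / e**2 - 1e-9)

print(sample_size(0.02, 1.96, p_hat=0.68))  # Example 9.15: 2090
print(sample_size(0.02, 1.96))              # Example 9.16: 2401
```

As in the two examples, the preliminary estimate buys a noticeably smaller sample (2090 versus 2401) for the same error bound and confidence level.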
9.11 Two Samples: Estimating the Difference between
Two Proportions
Consider the problem where we wish to estimate the difference between two bino-
mial parameters p1 and p2. For example, p1 might be the proportion of smokers
with lung cancer and p2 the proportion of nonsmokers with lung cancer, and the
problem is to estimate the difference between these two proportions. First, we
select independent random samples of sizes n1 and n2 from the two binomial pop-
ulations with means n1p1 and n2p2 and variances n1p1q1 and n2p2q2, respectively;
then we determine the numbers x1 and x2 of people in each sample with lung cancer and form the proportions p̂1 = x1/n1 and p̂2 = x2/n2. A point estimator of the difference between the two proportions, p1 − p2, is given by the statistic P̂1 − P̂2. Therefore, the difference of the sample proportions, p̂1 − p̂2, will be used as the point estimate of p1 − p2.
A confidence interval for p1 − p2 can be established by considering the sampling distribution of P̂1 − P̂2. From Section 9.10 we know that P̂1 and P̂2 are each approximately normally distributed, with means p1 and p2 and variances p1q1/n1 and p2q2/n2, respectively. Choosing independent samples from the two populations ensures that the variables P̂1 and P̂2 will be independent, and then by the reproductive property of the normal distribution established in Theorem 7.11, we conclude that P̂1 − P̂2 is approximately normally distributed with mean

μ_{P̂1−P̂2} = p1 − p2

and variance

σ²_{P̂1−P̂2} = p1q1/n1 + p2q2/n2.
Therefore, we can assert that

P(−zα/2 < Z < zα/2) = 1 − α,

where

Z = [(P̂1 − P̂2) − (p1 − p2)]/√(p1q1/n1 + p2q2/n2)

and zα/2 is the value above which we find an area of α/2 under the standard normal curve. Substituting for Z, we write

P(−zα/2 < [(P̂1 − P̂2) − (p1 − p2)]/√(p1q1/n1 + p2q2/n2) < zα/2) = 1 − α.
After performing the usual mathematical manipulations, we replace p1, p2,
q1, and q2 under the radical sign by their estimates p̂1 = x1/n1, p̂2 = x2/n2,
q̂1 = 1 − p̂1, and q̂2 = 1 − p̂2, provided that n1p̂1, n1q̂1, n2p̂2, and n2q̂2 are all
greater than or equal to 5, and the following approximate 100(1 − α)% confidence
interval for p1 − p2 is obtained.
Large-Sample Confidence Interval for p1 − p2
If p̂1 and p̂2 are the proportions of successes in random samples of sizes n1 and n2, respectively, q̂1 = 1 − p̂1, and q̂2 = 1 − p̂2, an approximate 100(1 − α)% confidence interval for the difference of two binomial parameters, p1 − p2, is given by

(p̂1 − p̂2) − zα/2√(p̂1q̂1/n1 + p̂2q̂2/n2) < p1 − p2 < (p̂1 − p̂2) + zα/2√(p̂1q̂1/n1 + p̂2q̂2/n2),

where zα/2 is the z-value leaving an area of α/2 to the right.
Example 9.17: A certain change in a process for manufacturing component parts is being con-
sidered. Samples are taken under both the existing and the new process so as
to determine if the new process results in an improvement. If 75 of 1500 items
from the existing process are found to be defective and 80 of 2000 items from the
new process are found to be defective, find a 90% confidence interval for the true
difference in the proportion of defectives between the existing and the new process.
Solution: Let p1 and p2 be the true proportions of defectives for the existing and new pro-
cesses, respectively. Hence, p̂1 = 75/1500 = 0.05 and p̂2 = 80/2000 = 0.04, and
the point estimate of p1 − p2 is
p̂1 − p̂2 = 0.05 − 0.04 = 0.01.
Using Table A.3, we find z0.05 = 1.645. Therefore, substituting into the formula, with

1.645√((0.05)(0.95)/1500 + (0.04)(0.96)/2000) = 0.0117,

we find the 90% confidence interval to be −0.0017 < p1 − p2 < 0.0217. Since the interval contains the value 0, there is no reason to believe that the new process produces a significant decrease in the proportion of defectives over the existing method.
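The interval for p1 − p2 follows the same recipe as the one-sample case. A short Python sketch (not part of the text) reproducing Example 9.17:

```python
import math

def two_prop_ci(x1, n1, x2, n2, z):
    """Large-sample CI for p1 - p2, with p1q1/n1 + p2q2/n2 estimated
    by plugging in the sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    half = z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - half, (p1 - p2) + half

# Example 9.17: 75/1500 defective (existing) vs. 80/2000 (new), z = 1.645
lo, hi = two_prop_ci(75, 1500, 80, 2000, 1.645)
print(round(lo, 4), round(hi, 4))  # -0.0017 0.0217
```

Since the computed interval straddles 0, the sketch confirms the conclusion above: no significant difference between the two processes.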
Up to this point, all confidence intervals presented were of the form
point estimate ± K s.e.(point estimate),
where K is a constant (either t or normal percent point). This form is valid when
the parameter is a mean, a difference between means, a proportion, or a difference
between proportions, due to the symmetry of the t- and Z-distributions. However,
it does not extend to variances and ratios of variances, which will be discussed in
Sections 9.12 and 9.13.
Exercises
In this set of exercises, for estimation concern-
ing one proportion, use only method 1 to obtain
the confidence intervals, unless instructed oth-
erwise.
9.51 In a random sample of 1000 homes in a certain
city, it is found that 228 are heated by oil. Find 99%
confidence intervals for the proportion of homes in this
city that are heated by oil using both methods pre-
sented on page 297.
9.52 Compute 95% confidence intervals, using both
methods on page 297, for the proportion of defective
items in a process when it is found that a sample of
size 100 yields 8 defectives.
9.53 (a) A random sample of 200 voters in a town is
selected, and 114 are found to support an annexa-
tion suit. Find the 96% confidence interval for the
fraction of the voting population favoring the suit.
(b) What can we assert with 96% confidence about the
possible size of our error if we estimate the fraction
of voters favoring the annexation suit to be 0.57?
9.54 A manufacturer of MP3 players conducts a set
of comprehensive tests on the electrical functions of its
product. All MP3 players must pass all tests prior to
being sold. Of a random sample of 500 MP3 players, 15
failed one or more tests. Find a 90% confidence interval
for the proportion of MP3 players from the population
that pass all tests.
9.55 A new rocket-launching system is being consid-
ered for deployment of small, short-range rockets. The
existing system has p = 0.8 as the probability of a suc-
cessful launch. A sample of 40 experimental launches
is made with the new system, and 34 are successful.
(a) Construct a 95% confidence interval for p.
(b) Would you conclude that the new system is better?
9.56 A geneticist is interested in the proportion of
African males who have a certain minor blood disor-
der. In a random sample of 100 African males, 24 are
found to be afflicted.
(a) Compute a 99% confidence interval for the propor-
tion of African males who have this blood disorder.
(b) What can we assert with 99% confidence about the
possible size of our error if we estimate the propor-
tion of African males with this blood disorder to be
0.24?
9.57 (a) According to a report in the Roanoke Times
& World-News, approximately 2/3 of 1600 adults
polled by telephone said they think the space shut-
tle program is a good investment for the country.
Find a 95% confidence interval for the proportion of
American adults who think the space shuttle pro-
gram is a good investment for the country.
(b) What can we assert with 95% confidence about the
possible size of our error if we estimate the propor-
tion of American adults who think the space shuttle
program is a good investment to be 2/3?
9.58 In the newspaper article referred to in Exercise
9.57, 32% of the 1600 adults polled said the U.S. space
program should emphasize scientific exploration. How
large a sample of adults is needed for the poll if one
wishes to be 95% confident that the estimated per-
centage will be within 2% of the true percentage?
9.59 How large a sample is needed if we wish to be
96% confident that our sample proportion in Exercise
9.53 will be within 0.02 of the true fraction of the vot-
ing population?
9.60 How large a sample is needed if we wish to be
99% confident that our sample proportion in Exercise
9.51 will be within 0.05 of the true proportion of homes
in the city that are heated by oil?
9.61 How large a sample is needed in Exercise 9.52 if
we wish to be 98% confident that our sample propor-
tion will be within 0.05 of the true proportion defec-
tive?
9.62 A conjecture by a faculty member in the micro-
biology department at Washington University School
of Dental Medicine in St. Louis, Missouri, states that
a couple of cups of either green or oolong tea each
day will provide sufficient fluoride to protect your teeth
from decay. How large a sample is needed to estimate
the percentage of citizens in a certain town who favor
having their water fluoridated if one wishes to be at
least 99% confident that the estimate is within 1% of
the true percentage?
9.63 A study is to be made to estimate the percent-
age of citizens in a town who favor having their water
fluoridated. How large a sample is needed if one wishes
to be at least 95% confident that the estimate is within
1% of the true percentage?
9.64 A study is to be made to estimate the propor-
tion of residents of a certain city and its suburbs who
favor the construction of a nuclear power plant near
the city. How large a sample is needed if one wishes to
be at least 95% confident that the estimate is within
0.04 of the true proportion of residents who favor the
construction of the nuclear power plant?
9.65 A certain geneticist is interested in the propor-
tion of males and females in the population who have
a minor blood disorder. In a random sample of 1000
males, 250 are found to be afflicted, whereas 275 of
1000 females tested appear to have the disorder. Com-
pute a 95% confidence interval for the difference be-
tween the proportions of males and females who have
the blood disorder.
9.66 Ten engineering schools in the United States
were surveyed. The sample contained 250 electrical
engineers, 80 being women; 175 chemical engineers, 40
being women. Compute a 90% confidence interval for
the difference between the proportions of women in
these two fields of engineering. Is there a significant
difference between the two proportions?
9.67 A clinical trial was conducted to determine if a
certain type of inoculation has an effect on the inci-
dence of a certain disease. A sample of 1000 rats was
kept in a controlled environment for a period of 1 year,
and 500 of the rats were given the inoculation. In the
group not inoculated, there were 120 incidences of the
disease, while 98 of the rats in the inoculated group
contracted it. If p1 is the probability of incidence of
the disease in uninoculated rats and p2 the probability
of incidence in inoculated rats, compute a 90% confi-
dence interval for p1 − p2.
9.68 In the study Germination and Emergence of
Broccoli, conducted by the Department of Horticulture
at Virginia Tech, a researcher found that at 5°C, 10 broccoli seeds out of 20 germinated, while at 15°C, 15 out of 20 germinated. Compute a 95% confidence interval for the difference between the proportions of germination at the two different temperatures and decide
9.69 A survey of 1000 students found that 274 chose
professional baseball team A as their favorite team. In
a similar survey involving 760 students, 240 of them
chose team A as their favorite. Compute a 95% con-
fidence interval for the difference between the propor-
tions of students favoring team A in the two surveys.
Is there a significant difference?
9.70 According to USA Today (March 17, 1997),
women made up 33.7% of the editorial staff at local
TV stations in the United States in 1990 and 36.2% in
1994. Assume 20 new employees were hired as editorial
staff.
(a) Estimate the number that would have been women
in 1990 and 1994, respectively.
(b) Compute a 95% confidence interval to see if there
is evidence that the proportion of women hired as
editorial staff was higher in 1994 than in 1990.
9.12 Single Sample: Estimating the Variance
If a sample of size n is drawn from a normal population with variance σ² and the sample variance s² is computed, we obtain a value of the statistic S². This computed sample variance is used as a point estimate of σ². Hence, the statistic S² is called an estimator of σ².

An interval estimate of σ² can be established by using the statistic

X² = (n − 1)S²/σ².

According to Theorem 8.4, the statistic X² has a chi-squared distribution with n − 1 degrees of freedom when samples are chosen from a normal population. We may write (see Figure 9.7)

P(χ²1−α/2 < X² < χ²α/2) = 1 − α,

where χ²1−α/2 and χ²α/2 are values of the chi-squared distribution with n − 1 degrees of freedom, leaving areas of 1 − α/2 and α/2, respectively, to the right. Substituting for X², we write

P(χ²1−α/2 < (n − 1)S²/σ² < χ²α/2) = 1 − α.
Figure 9.7: P(χ²1−α/2 < X² < χ²α/2) = 1 − α.
Dividing each term in the inequality by (n − 1)S² and then inverting each term (thereby changing the sense of the inequalities), we obtain

P[(n − 1)S²/χ²α/2 < σ² < (n − 1)S²/χ²1−α/2] = 1 − α.

For a random sample of size n from a normal population, the sample variance s² is computed, and the following 100(1 − α)% confidence interval for σ² is obtained.

Confidence Interval for σ²
If s² is the variance of a random sample of size n from a normal population, a 100(1 − α)% confidence interval for σ² is

(n − 1)s²/χ²α/2 < σ² < (n − 1)s²/χ²1−α/2,

where χ²α/2 and χ²1−α/2 are χ²-values with v = n − 1 degrees of freedom, leaving areas of α/2 and 1 − α/2, respectively, to the right.

An approximate 100(1 − α)% confidence interval for σ is obtained by taking the square root of each endpoint of the interval for σ².
Example 9.18: The following are the weights, in decagrams, of 10 packages of grass seed distributed by a certain company: 46.4, 46.1, 45.8, 47.0, 46.1, 45.9, 45.8, 46.9, 45.2, and 46.0. Find a 95% confidence interval for the variance of the weights of all such packages of grass seed distributed by this company, assuming a normal population.
Solution: First we find

s² = [n Σ xᵢ² − (Σ xᵢ)²]/[n(n − 1)] = [(10)(21,273.12) − (461.2)²]/[(10)(9)] = 0.286,

with both sums running over i = 1, …, n. To obtain a 95% confidence interval, we choose α = 0.05. Then, using Table A.5 with v = 9 degrees of freedom, we find χ²0.025 = 19.023 and χ²0.975 = 2.700. Therefore, the 95% confidence interval for σ² is

(9)(0.286)/19.023 < σ² < (9)(0.286)/2.700,

or simply 0.135 < σ² < 0.953.
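Example 9.18 can be reproduced in a few lines of Python (a sketch, not part of the text; the two chi-squared critical values are taken from Table A.5 rather than computed):

```python
# Weights (decagrams) of the 10 grass-seed packages from Example 9.18
weights = [46.4, 46.1, 45.8, 47.0, 46.1, 45.9, 45.8, 46.9, 45.2, 46.0]
n = len(weights)
mean = sum(weights) / n
s2 = sum((w - mean) ** 2 for w in weights) / (n - 1)  # sample variance

# Tabled chi-squared values for v = 9: areas 0.025 and 0.975 to the right
chi2_right_025, chi2_right_975 = 19.023, 2.700
lo = (n - 1) * s2 / chi2_right_025
hi = (n - 1) * s2 / chi2_right_975
print(round(s2, 3), round(lo, 3), round(hi, 3))  # 0.286 0.135 0.954
```

The upper limit prints 0.954 rather than the text's 0.953 only because the code carries s² at full precision instead of the rounded 0.286.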
9.13 Two Samples: Estimating the Ratio of Two Variances
A point estimate of the ratio of two population variances σ1²/σ2² is given by the ratio s1²/s2² of the sample variances. Hence, the statistic S1²/S2² is called an estimator of σ1²/σ2².

If σ1² and σ2² are the variances of normal populations, we can establish an interval estimate of σ1²/σ2² by using the statistic

F = (σ2²S1²)/(σ1²S2²).

According to Theorem 8.8, the random variable F has an F-distribution with v1 = n1 − 1 and v2 = n2 − 1 degrees of freedom. Therefore, we may write (see Figure 9.8)

P[f1−α/2(v1, v2) < F < fα/2(v1, v2)] = 1 − α,

where f1−α/2(v1, v2) and fα/2(v1, v2) are the values of the F-distribution with v1 and v2 degrees of freedom, leaving areas of 1 − α/2 and α/2, respectively, to the right.
Figure 9.8: P[f1−α/2(v1, v2) < F < fα/2(v1, v2)] = 1 − α.
Substituting for F, we write

P[f1−α/2(v1, v2) < (σ2²S1²)/(σ1²S2²) < fα/2(v1, v2)] = 1 − α.

Multiplying each term in the inequality by S2²/S1² and then inverting each term, we obtain

P[(S1²/S2²)(1/fα/2(v1, v2)) < σ1²/σ2² < (S1²/S2²)(1/f1−α/2(v1, v2))] = 1 − α.

The results of Theorem 8.7 enable us to replace the quantity f1−α/2(v1, v2) by 1/fα/2(v2, v1). Therefore,

P[(S1²/S2²)(1/fα/2(v1, v2)) < σ1²/σ2² < (S1²/S2²)fα/2(v2, v1)] = 1 − α.
For any two independent random samples of sizes n1 and n2 selected from two normal populations, the ratio of the sample variances s1²/s2² is computed, and the following 100(1 − α)% confidence interval for σ1²/σ2² is obtained.

Confidence Interval for σ1²/σ2²
If s1² and s2² are the variances of independent samples of sizes n1 and n2, respectively, from normal populations, then a 100(1 − α)% confidence interval for σ1²/σ2² is

(s1²/s2²)(1/fα/2(v1, v2)) < σ1²/σ2² < (s1²/s2²)fα/2(v2, v1),

where fα/2(v1, v2) is an f-value with v1 = n1 − 1 and v2 = n2 − 1 degrees of freedom, leaving an area of α/2 to the right, and fα/2(v2, v1) is a similar f-value with v2 = n2 − 1 and v1 = n1 − 1 degrees of freedom.
As in Section 9.12, an approximate 100(1 − α)% confidence interval for σ1/σ2
is obtained by taking the square root of each endpoint of the interval for σ2
1/σ2
2.
Example 9.19: A confidence interval for the difference in the mean orthophosphorus contents,
measured in milligrams per liter, at two stations on the James River was con-
structed in Example 9.12 on page 290 by assuming the normal population variance
to be unequal. Justify this assumption by constructing 98% confidence intervals
for σ2
1/σ2
2 and for σ1/σ2, where σ2
1 and σ2
2 are the variances of the populations of
orthophosphorus contents at station 1 and station 2, respectively.
Solution: From Example 9.12, we have n1 = 15, n2 = 12, s1 = 3.07, and s2 = 0.80.
For a 98% confidence interval, α = 0.02. Interpolating in Table A.6, we find
f0.01(14, 11) ≈ 4.30 and f0.01(11, 14) ≈ 3.87. Therefore, the 98% confidence interval
for σ1²/σ2² is
\[
\left(\frac{3.07^2}{0.80^2}\right)\left(\frac{1}{4.30}\right) < \frac{\sigma_1^2}{\sigma_2^2} < \left(\frac{3.07^2}{0.80^2}\right)(3.87),
\]
which simplifies to 3.425 < σ1²/σ2² < 56.991. Taking square roots of the confidence
limits, we find that a 98% confidence interval for σ1/σ2 is
\[
1.851 < \frac{\sigma_1}{\sigma_2} < 7.549.
\]
Since this interval does not allow for the possibility of σ1/σ2 being equal to 1, we
were correct in assuming that σ1 ≠ σ2, or σ1² ≠ σ2², in Example 9.12.
Exercises
9.71 A manufacturer of car batteries claims that the
batteries will last, on average, 3 years with a variance
of 1 year. If 5 of these batteries have lifetimes of 1.9,
2.4, 3.0, 3.5, and 4.2 years, construct a 95% confidence
interval for σ² and decide if the manufacturer's claim
that σ² = 1 is valid. Assume the population of battery
lives to be approximately normally distributed.
9.72 A random sample of 20 students yielded a mean
of x̄ = 72 and a variance of s² = 16 for scores on a
college placement test in mathematics. Assuming the
scores to be normally distributed, construct a 98% con-
fidence interval for σ².
9.73 Construct a 95% confidence interval for σ² in Exercise 9.9 on page 283.
9.74 Construct a 99% confidence interval for σ² in Exercise 9.11 on page 283.
9.75 Construct a 99% confidence interval for σ in Exercise 9.12 on page 283.
9.76 Construct a 90% confidence interval for σ in Exercise 9.13 on page 283.
9.77 Construct a 98% confidence interval for σ1/σ2
in Exercise 9.42 on page 295, where σ1 and σ2 are,
respectively, the standard deviations for the distances
traveled per liter of fuel by the Volkswagen and Toyota
mini-trucks.
9.78 Construct a 90% confidence interval for σ1²/σ2² in
Exercise 9.43 on page 295. Were we justified in assum-
ing that σ1² = σ2² when we constructed the confidence
interval for μ1 − μ2?
9.79 Construct a 90% confidence interval for σ1²/σ2²
in Exercise 9.46 on page 295. Should we have assumed
σ1² = σ2² in constructing our confidence interval for
μI − μII?
9.80 Construct a 95% confidence interval for σA²/σB²
in Exercise 9.49 on page 295. Should the equal-variance
assumption be used?
9.14 Maximum Likelihood Estimation (Optional)
Often the estimators of parameters have been those that appeal to intuition. The
estimator X̄ certainly seems reasonable as an estimator of a population mean μ.
The virtue of S² as an estimator of σ² is underscored through the discussion of
unbiasedness in Section 9.3. The estimator for a binomial parameter p is merely a
sample proportion, which, of course, is an average and appeals to common sense.
But there are many situations in which it is not at all obvious what the proper
estimator should be. As a result, there is much to be learned by the student
of statistics concerning different philosophies that produce different methods of
estimation. In this section, we deal with the method of maximum likelihood.
Maximum likelihood estimation is one of the most important approaches to
estimation in all of statistical inference. We will not give a thorough development of
the method. Rather, we will attempt to communicate the philosophy of maximum
likelihood and illustrate with examples that relate to other estimation problems
discussed in this chapter.
The Likelihood Function
As the name implies, the method of maximum likelihood is that for which the like-
lihood function is maximized. The likelihood function is best illustrated through
the use of an example with a discrete distribution and a single parameter. Denote
by X1, X2, . . . , Xn the independent random variables taken from a discrete prob-
ability distribution represented by f(x, θ), where θ is a single parameter of the
distribution. Now
\[
L(x_1, x_2, \ldots, x_n; \theta) = f(x_1, x_2, \ldots, x_n; \theta) = f(x_1, \theta) f(x_2, \theta) \cdots f(x_n, \theta)
\]
is the joint distribution of the random variables, often referred to as the likelihood
function. Note that the variable of the likelihood function is θ, not x. Denote by
x1, x2, . . . , xn the observed values in a sample. In the case of a discrete random
variable, the interpretation is very clear. The quantity L(x1, x2, . . . , xn; θ), the
likelihood of the sample, is the following joint probability:
P(X1 = x1, X2 = x2, . . . , Xn = xn | θ),
which is the probability of obtaining the sample values x1, x2, . . . , xn. For the
discrete case, the maximum likelihood estimator is one that results in a maximum
value for this joint probability or maximizes the likelihood of the sample.
Consider a fictitious example where three items from an assembly line are
inspected. The items are ruled either defective or nondefective, and thus the
Bernoulli process applies. Testing the three items results in two nondefective items
followed by a defective item. It is of interest to estimate p, the proportion non-
defective in the process. The likelihood of the sample for this illustration is given
by
\[
p \cdot p \cdot q = p^2 q = p^2 - p^3,
\]
where q = 1 − p. Maximum likelihood estimation would give an estimate of p for
which the likelihood is maximized. It is clear that if we differentiate the likelihood
with respect to p, set the derivative to zero, and solve, we obtain the value
\[
\hat{p} = \frac{2}{3}.
\]
Now, of course, in this situation p̂ = 2/3 is the sample proportion of nondefective
items and is thus a reasonable estimator of the probability that an item is nondefective. The reader
should attempt to understand that the philosophy of maximum likelihood estima-
tion evolves from the notion that the reasonable estimator of a parameter based
on sample information is that parameter value that produces the largest probability
of obtaining the sample. This is, indeed, the interpretation for the discrete case,
since the likelihood is the probability of jointly observing the values in the sample.
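The maximization in the three-item illustration can also be checked numerically. The following sketch (an illustration, not part of the text) evaluates the likelihood p²(1 − p) over a fine grid and locates its maximizer, which agrees with the calculus result p̂ = 2/3.

```python
# Numerically maximize the likelihood L(p) = p^2 * (1 - p) for the
# sample "nondefective, nondefective, defective" from the text.
def likelihood(p):
    return p**2 * (1 - p)

# Grid search over p in (0, 1); step 0.0001 is fine for this purpose.
grid = [i / 10000 for i in range(1, 10000)]
p_hat = max(grid, key=likelihood)
print(p_hat)  # close to 2/3 ≈ 0.6667
```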
Now, while the interpretation of the likelihood function as a joint probability
is confined to the discrete case, the notion of maximum likelihood extends to the
estimation of parameters of a continuous distribution. We now present a formal
definition of maximum likelihood estimation.
Definition 9.3: Given independent observations x1, x2, . . . , xn from a probability density func-
tion (continuous case) or probability mass function (discrete case) f(x; θ), the
maximum likelihood estimator θ̂ is that which maximizes the likelihood function
\[
L(x_1, x_2, \ldots, x_n; \theta) = f(x_1, \theta) f(x_2, \theta) \cdots f(x_n, \theta).
\]
Quite often it is convenient to work with the natural log of the likelihood
function in finding the maximum of that function. Consider the following example
dealing with the parameter μ of a Poisson distribution.
Example 9.20: Consider a Poisson distribution with probability mass function
\[
f(x \mid \mu) = \frac{e^{-\mu}\mu^{x}}{x!}, \qquad x = 0, 1, 2, \ldots.
\]
Suppose that a random sample x1, x2, . . . , xn is taken from the distribution. What
is the maximum likelihood estimate of μ?
Solution: The likelihood function is
\[
L(x_1, x_2, \ldots, x_n; \mu) = \prod_{i=1}^{n} f(x_i \mid \mu) = \frac{e^{-n\mu}\,\mu^{\sum_{i=1}^{n} x_i}}{\prod_{i=1}^{n} x_i!}.
\]
Now consider
\[
\ln L(x_1, x_2, \ldots, x_n; \mu) = -n\mu + \sum_{i=1}^{n} x_i \ln \mu - \ln \prod_{i=1}^{n} x_i!,
\]
\[
\frac{\partial \ln L(x_1, x_2, \ldots, x_n; \mu)}{\partial \mu} = -n + \frac{1}{\mu}\sum_{i=1}^{n} x_i.
\]
Solving for μ̂, the maximum likelihood estimator, involves setting the derivative to
zero and solving for the parameter. Thus,
\[
\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}.
\]
The second derivative of the log-likelihood function is negative, which implies that
the solution above indeed is a maximum. Since μ is the mean of the Poisson
distribution (Chapter 5), the sample average would certainly seem like a reasonable
estimator.
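The conclusion of Example 9.20, that the Poisson MLE is the sample mean, can be verified numerically. The sketch below uses a small hypothetical sample of counts (the data are invented for illustration) and maximizes the log-likelihood over a grid, dropping the ln ∏ xi! term since it does not involve μ.

```python
import math

# Hypothetical Poisson counts (illustrative only, not from the text).
x = [2, 0, 3, 1, 2]
n = len(x)
xbar = sum(x) / n   # sample mean, the claimed MLE

def log_lik(mu):
    # ln L up to the additive constant -ln(prod x_i!), which does not
    # depend on mu and so does not affect the maximizer.
    return -n * mu + sum(x) * math.log(mu)

grid = [i / 1000 for i in range(1, 10000)]  # mu in (0.001, 9.999)
mu_hat = max(grid, key=log_lik)
print(xbar, mu_hat)  # both ≈ 1.6
```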
The following example shows the use of the method of maximum likelihood for
finding estimates of two parameters. We simply find the values of the parameters
that maximize (jointly) the likelihood function.
Example 9.21: Consider a random sample x1, x2, . . . , xn from a normal distribution N(μ, σ). Find
the maximum likelihood estimators for μ and σ².
Solution: The likelihood function for the normal distribution is
\[
L(x_1, x_2, \ldots, x_n; \mu, \sigma^2) = \frac{1}{(2\pi)^{n/2}(\sigma^2)^{n/2}} \exp\left[-\frac{1}{2}\sum_{i=1}^{n}\left(\frac{x_i - \mu}{\sigma}\right)^2\right].
\]
Taking logarithms gives us
\[
\ln L(x_1, x_2, \ldots, x_n; \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln \sigma^2 - \frac{1}{2}\sum_{i=1}^{n}\left(\frac{x_i - \mu}{\sigma}\right)^2.
\]
Hence,
\[
\frac{\partial \ln L}{\partial \mu} = \sum_{i=1}^{n}\left(\frac{x_i - \mu}{\sigma^2}\right)
\quad \text{and} \quad
\frac{\partial \ln L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^{n}(x_i - \mu)^2.
\]
Setting both derivatives equal to 0, we obtain
\[
\sum_{i=1}^{n} x_i - n\mu = 0 \quad \text{and} \quad n\sigma^2 = \sum_{i=1}^{n}(x_i - \mu)^2.
\]
Thus, the maximum likelihood estimator of μ is given by
\[
\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x},
\]
which is a pleasing result since x̄ has played such an important role in this chapter
as a point estimate of μ. On the other hand, the maximum likelihood estimator of
σ² is
\[
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2.
\]
Checking the second-order partial derivative matrix confirms that the solution
results in a maximum of the likelihood function.
It is interesting to note the distinction between the maximum likelihood esti-
mator of σ² and the unbiased estimator S² developed earlier in this chapter. The
numerators are identical, of course, and the denominator is the degrees of freedom
n − 1 for the unbiased estimator and n for the maximum likelihood estimator. Max-
imum likelihood estimators do not necessarily enjoy the property of unbiasedness.
However, they do have very important asymptotic properties.
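The relationship σ̂² = ((n − 1)/n) S² between the two estimators is easy to confirm on any data set. The sketch below uses a small hypothetical sample (the numbers are illustrative only).

```python
# Compare the MLE of sigma^2 (divide by n) with the unbiased
# estimator S^2 (divide by n - 1) on a hypothetical sample.
x = [4.1, 5.3, 3.8, 6.0, 5.1, 4.7]
n = len(x)
xbar = sum(x) / n
ss = sum((xi - xbar)**2 for xi in x)   # sum of squared deviations

sigma2_mle = ss / n         # maximum likelihood estimator
s2_unbiased = ss / (n - 1)  # unbiased estimator S^2

# The two differ only by the factor (n - 1)/n, so the MLE is
# always the smaller of the pair.
print(sigma2_mle, s2_unbiased)
```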
Example 9.22: Suppose 10 rats are used in a biomedical study where they are injected with cancer
cells and then given a cancer drug that is designed to increase their survival rate.
The survival times, in months, are 14, 17, 27, 18, 12, 8, 22, 13, 19, and 12. Assume
that the exponential distribution applies. Give a maximum likelihood estimate of
the mean survival time.
Solution: From Chapter 6, we know that the probability density function for the exponential
random variable X is
\[
f(x, \beta) = \begin{cases} \frac{1}{\beta}\, e^{-x/\beta}, & x > 0, \\ 0, & \text{elsewhere.} \end{cases}
\]
Thus, the log-likelihood function for the data, given n = 10, is
\[
\ln L(x_1, x_2, \ldots, x_{10}; \beta) = -10 \ln \beta - \frac{1}{\beta}\sum_{i=1}^{10} x_i.
\]
Setting
\[
\frac{\partial \ln L}{\partial \beta} = -\frac{10}{\beta} + \frac{1}{\beta^2}\sum_{i=1}^{10} x_i = 0
\]
implies that
\[
\hat{\beta} = \frac{1}{10}\sum_{i=1}^{10} x_i = \bar{x} = 16.2.
\]
Evaluating the second derivative of the log-likelihood function at the value β̂ above
yields a negative value. As a result, the estimator of the parameter β, the popula-
tion mean, is the sample average x̄.
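With the ten survival times of Example 9.22, the estimate can be checked directly: the sample mean is 16.2 months, and a grid search of the log-likelihood peaks at the same value. A minimal sketch:

```python
import math

# Survival times (months) from Example 9.22.
x = [14, 17, 27, 18, 12, 8, 22, 13, 19, 12]
n = len(x)
beta_hat = sum(x) / n   # MLE for the exponential mean
print(beta_hat)  # 16.2

def log_lik(beta):
    # ln L = -n ln(beta) - (1/beta) * sum(x)
    return -n * math.log(beta) - sum(x) / beta

grid = [i / 100 for i in range(100, 5000)]  # beta in [1.00, 49.99]
beta_grid = max(grid, key=log_lik)
print(beta_grid)  # ≈ 16.2
```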
The following example shows the maximum likelihood estimator for a distribu-
tion that does not appear in previous chapters.
Example 9.23: It is known that a sample consisting of the values 12, 11.2, 13.5, 12.3, 13.8, and
11.9 comes from a population with the density function
\[
f(x; \theta) = \begin{cases} \dfrac{\theta}{x^{\theta+1}}, & x > 1, \\ 0, & \text{elsewhere,} \end{cases}
\]
where θ > 0. Find the maximum likelihood estimate of θ.
Solution: The likelihood function of n observations from this population can be written as
\[
L(x_1, x_2, \ldots, x_n; \theta) = \prod_{i=1}^{n} \frac{\theta}{x_i^{\theta+1}} = \frac{\theta^n}{\left(\prod_{i=1}^{n} x_i\right)^{\theta+1}},
\]
which implies that
\[
\ln L(x_1, x_2, \ldots, x_n; \theta) = n \ln(\theta) - (\theta + 1)\sum_{i=1}^{n} \ln(x_i).
\]
Setting
\[
0 = \frac{\partial \ln L}{\partial \theta} = \frac{n}{\theta} - \sum_{i=1}^{n} \ln(x_i)
\]
results in
\[
\hat{\theta} = \frac{n}{\sum_{i=1}^{n} \ln(x_i)} = \frac{6}{\ln(12) + \ln(11.2) + \ln(13.5) + \ln(12.3) + \ln(13.8) + \ln(11.9)} = 0.3970.
\]
Since the second derivative of ln L is −n/θ², which is always negative, the likelihood
function does achieve its maximum value at θ̂.
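The closed-form estimate of Example 9.23 takes one line to verify:

```python
import math

# Sample values from Example 9.23.
x = [12, 11.2, 13.5, 12.3, 13.8, 11.9]
n = len(x)

# MLE derived in the text: theta_hat = n / sum(ln x_i).
theta_hat = n / sum(math.log(xi) for xi in x)
print(round(theta_hat, 4))  # 0.397
```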
Additional Comments Concerning Maximum Likelihood Estimation
A thorough discussion of the properties of maximum likelihood estimation is be-
yond the scope of this book and is usually a major topic of a course in the theory
of statistical inference. The method of maximum likelihood allows the analyst to
make use of knowledge of the distribution in determining an appropriate estima-
tor. The method of maximum likelihood cannot be applied without knowledge of the
underlying distribution. We learned in Example 9.21 that the maximum likelihood
estimator is not necessarily unbiased. The maximum likelihood estimator is unbi-
ased asymptotically or in the limit; that is, the amount of bias approaches zero as
the sample size becomes large. Earlier in this chapter the notion of efficiency was
discussed, efficiency being linked to the variance property of an estimator. Maxi-
mum likelihood estimators possess desirable variance properties in the limit. The
reader should consult Lehmann and D’Abrera (1998) for details.
Exercises
9.81 Suppose that there are n trials x1, x2, . . . , xn
from a Bernoulli process with parameter p, the prob-
ability of a success. That is, the probability of r suc-
cesses is given by $\binom{n}{r} p^r (1-p)^{n-r}$. Work out the max-
imum likelihood estimator for the parameter p.
9.82 Consider the lognormal distribution with the
density function given in Section 6.9. Suppose we have
a random sample x1, x2, . . . , xn from a lognormal dis-
tribution.
(a) Write out the likelihood function.
(b) Develop the maximum likelihood estimators of μ
and σ².
9.83 Consider a random sample x1, . . . , xn coming
from the gamma distribution discussed in Section 6.6.
Suppose the parameter α is known, say α = 5, and deter-
mine the maximum likelihood estimator for the parameter
β.
9.84 Consider a random sample of x1, x2, . . . , xn ob-
servations from a Weibull distribution with parameters
α and β and density function
\[
f(x) = \begin{cases} \alpha\beta x^{\beta-1} e^{-\alpha x^{\beta}}, & x > 0, \\ 0, & \text{elsewhere,} \end{cases}
\]
for α, β > 0.
(a) Write out the likelihood function.
(b) Write out the equations that, when solved, give the
maximum likelihood estimators of α and β.
9.85 Consider a random sample of x1, . . . , xn from a
uniform distribution U(0, θ) with unknown parameter
θ, where θ > 0. Determine the maximum likelihood
estimator of θ.
9.86 Consider the independent observations
x1, x2, . . . , xn from the gamma distribution discussed
in Section 6.6.
(a) Write out the likelihood function.
(b) Write out a set of equations that, when solved, give
the maximum likelihood estimators of α and β.
9.87 Consider a hypothetical experiment where a
man with a fungus uses an antifungal drug and is cured.
Consider this, then, a sample of one from a Bernoulli
distribution with probability function
\[
f(x) = p^x q^{1-x}, \quad x = 0, 1,
\]
where p is the probability of a success (cure) and
q = 1 − p. Now, of course, the sample information
gives x = 1. Write out a development that shows that
p̂ = 1.0 is the maximum likelihood estimator of the
probability of a cure.
9.88 Consider the observation X from the negative
binomial distribution given in Section 5.4. Find the
maximum likelihood estimator for p, assuming k is
known.
Review Exercises
9.89 Consider two estimators of σ² for a sample
x1, x2, . . . , xn, which is drawn from a normal distri-
bution with mean μ and variance σ². The estimators
are the unbiased estimator
\[
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2
\]
and the maximum likelihood estimator
\[
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2.
\]
Discuss the variance properties of these two estimators.
9.90 According to the Roanoke Times, McDonald’s
sold 42.1% of the market share of hamburgers. A ran-
dom sample of 75 burgers sold resulted in 28 of them
being from McDonald’s. Use material in Section 9.10
to determine if this information supports the claim in
the Roanoke Times.
9.91 It is claimed that a new diet will reduce a per-
son’s weight by 4.5 kilograms on average in a period
of 2 weeks. The weights of 7 women who followed this
diet were recorded before and after the 2-week period.
Woman Weight Before Weight After
1 58.5 60.0
2 60.3 54.9
3 61.7 58.1
4 69.0 62.1
5 64.0 58.5
6 62.6 59.9
7 56.7 54.4
Test the claim about the diet by computing a 95% con-
fidence interval for the mean difference in weights. As-
sume the differences of weights to be approximately
normally distributed.
9.92 A study was undertaken at Virginia Tech to de-
termine if fire can be used as a viable management tool
to increase the amount of forage available to deer dur-
ing the critical months in late winter and early spring.
Calcium is a required element for plants and animals.
The amount taken up and stored in plants is closely
correlated to the amount present in the soil. It was
hypothesized that a fire may change the calcium levels
present in the soil and thus affect the amount avail-
able to deer. A large tract of land in the Fishburn
Forest was selected for a prescribed burn. Soil samples
were taken from 12 plots of equal area just prior to the
burn and analyzed for calcium. Postburn calcium lev-
els were analyzed from the same plots. These values,
in kilograms per plot, are presented in the following
table:
Calcium Level (kg/plot)
Plot   Preburn   Postburn
1        50         9
2        50        18
3        82        45
4        64        18
5        82        18
6        73         9
7        77        32
8        54         9
9        23        18
10       45         9
11       36         9
12       54         9
Construct a 95% confidence interval for the mean dif-
ference in calcium levels in the soil prior to and after
the prescribed burn. Assume the distribution of differ-
ences in calcium levels to be approximately normal.
9.93 A health spa claims that a new exercise pro-
gram will reduce a person’s waist size by 2 centimeters
on average over a 5-day period. The waist sizes, in
centimeters, of 6 men who participated in this exercise
program are recorded before and after the 5-day period
in the following table:
Man   Waist Size Before   Waist Size After
1           90.4                91.7
2           95.5                93.9
3           98.7                97.4
4          115.9               112.8
5          104.0               101.3
6           85.6                84.0
By computing a 95% confidence interval for the mean
reduction in waist size, determine whether the health
spa’s claim is valid. Assume the distribution of differ-
ences in waist sizes before and after the program to be
approximately normal.
9.94 The Department of Civil Engineering at Virginia
Tech compared a modified (M-5 hr) assay technique for
recovering fecal coliforms in stormwater runoff from an
urban area to a most probable number (MPN) tech-
nique. A total of 12 runoff samples were collected and
analyzed by the two techniques. Fecal coliform counts
per 100 milliliters are recorded in the following table.
Sample   MPN Count   M-5 hr Count
1          2300          2010
2          1200           930
3           450           400
4           210           436
5           270          4100
6           450          2090
7           154           219
8           179           169
9           192           194
10          230           174
11          340           274
12          194           183
Construct a 90% confidence interval for the difference
in the mean fecal coliform counts between the M-5 hr
and the MPN techniques. Assume that the count dif-
ferences are approximately normally distributed.
9.95 An experiment was conducted to determine
whether surface finish has an effect on the endurance
limit of steel. There is a theory that polishing in-
creases the average endurance limit (for reverse bend-
ing). From a practical point of view, polishing should
not have any effect on the standard deviation of the
endurance limit, which is known from numerous en-
durance limit experiments to be 4000 psi. An ex-
periment was performed on 0.4% carbon steel using
both unpolished and polished smooth-turned speci-
mens. The data are as follows:
Endurance Limit (psi)
Polished Unpolished
0.4% Carbon 0.4% Carbon
85,500 82,600
91,900 82,400
89,400 81,700
84,000 79,500
89,900 79,400
78,700 69,800
87,500 79,900
83,100 83,400
Find a 95% confidence interval for the difference be-
tween the population means for the two methods, as-
suming that the populations are approximately nor-
mally distributed.
9.96 An anthropologist is interested in the proportion
of individuals in two Indian tribes with double occipi-
tal hair whorls. Suppose that independent samples are
taken from each of the two tribes, and it is found that
24 of 100 Indians from tribe A and 36 of 120 Indians
from tribe B possess this characteristic. Construct a
95% confidence interval for the difference pB − pA be-
tween the proportions of these two tribes with occipital
hair whorls.
9.97 A manufacturer of electric irons produces these
items in two plants. Both plants have the same suppli-
ers of small parts. A saving can be made by purchasing
thermostats for plant B from a local supplier. A sin-
gle lot was purchased from the local supplier, and a
test was conducted to see whether or not these new
thermostats were as accurate as the old. The thermostats were tested on the
irons on the 550°F setting, and the actual temperatures were read to the nearest
0.1°F with a thermocouple. The data are as follows:
New Supplier (°F)
530.3 559.3 549.4 544.0 551.7 566.3
549.9 556.9 536.7 558.8 538.8 543.3
559.1 555.0 538.6 551.1 565.4 554.9
550.0 554.9 554.7 536.1 569.1
Old Supplier (°F)
559.7 534.7 554.8 545.0 544.6 538.0
550.7 563.1 551.1 553.8 538.8 564.6
554.5 553.0 538.4 548.3 552.9 535.1
555.0 544.8 558.4 548.7 560.3
Find 95% confidence intervals for σ1²/σ2² and for σ1/σ2,
where σ1² and σ2² are the population variances of the
thermostat readings for the new and old suppliers, re-
spectively.
9.98 It is argued that the resistance of wire A is
greater than the resistance of wire B. An experiment
on the wires shows the following results (in ohms):
Wire A Wire B
0.140 0.135
0.138 0.140
0.143 0.136
0.142 0.142
0.144 0.138
0.137 0.140
Assuming equal variances, what conclusions do you
draw? Justify your answer.
9.99 An alternative form of estimation is accom-
plished through the method of moments. This method
involves equating the population mean and variance to
the corresponding sample mean x̄ and sample variance
s² and solving for the parameters, the results being
the moment estimators. In the case of a single pa-
rameter, only the means are used. Give an argument
that in the case of the Poisson distribution the maxi-
mum likelihood estimator and moment estimators are
the same.
9.100 Specify the moment estimators for μ and σ² for the normal distribution.
9.101 Specify the moment estimators for μ and σ² for the lognormal distribution.
9.102 Specify the moment estimators for α and β for the gamma distribution.
9.103 A survey was done with the hope of comparing
salaries of chemical plant managers employed in two
areas of the country, the northern and west central re-
gions. An independent random sample of 300 plant
managers was selected from each of the two regions.
These managers were asked their annual salaries. The
results are as follows:
North: x̄1 = $102,300, s1 = $5700
West Central: x̄2 = $98,500, s2 = $3800
(a) Construct a 99% confidence interval for μ1 − μ2,
the difference in the mean salaries.
(b) What assumption did you make in (a) about the
distribution of annual salaries for the two regions?
Is the assumption of normality necessary? Why or
why not?
(c) What assumption did you make about the two vari-
ances? Is the assumption of equality of variances
reasonable? Explain!
9.104 Consider Review Exercise 9.103. Let us assume
that the data have not been collected yet and that pre-
vious statistics suggest that σ1 = σ2 = $4000. Are
the sample sizes in Review Exercise 9.103 sufficient to
produce a 95% confidence interval on μ1 − μ2 having a
width of only $1000? Show all work.
9.105 A labor union is becoming defensive about
gross absenteeism by its members. The union lead-
ers had always claimed that, in a typical month, 95%
of its members were absent less than 10 hours. The
union decided to check this by monitoring a random
sample of 300 of its members. The number of hours
absent was recorded for each of the 300 members. The
results were x̄ = 6.5 hours and s = 2.5 hours. Use the
data to respond to this claim, using a one-sided toler-
ance limit and choosing the confidence level to be 99%.
Be sure to interpret what you learn from the tolerance
limit calculation.
9.106 A random sample of 30 firms dealing in wireless
products was selected to determine the proportion of
such firms that have implemented new software to im-
prove productivity. It turned out that 8 of the 30 had
implemented such software. Find a 95% confidence in-
terval on p, the true proportion of such firms that have
implemented new software.
9.107 Refer to Review Exercise 9.106. Suppose there
is concern about whether the point estimate p̂ = 8/30
is accurate enough because the confidence interval
around p is not sufficiently narrow. Using p̂ as the
estimate of p, how many companies would need to be
sampled in order to have a 95% confidence interval with
a width of only 0.05?
9.108 A manufacturer turns out a product item that
is labeled either “defective” or “not defective.” In order
to estimate the proportion defective, a random sam-
ple of 100 items is taken from production, and 10 are
found to be defective. Following implementation of a
quality improvement program, the experiment is con-
ducted again. A new sample of 100 is taken, and this
time only 6 are found to be defective.
(a) Give a 95% confidence interval on p1 − p2, where
p1 is the population proportion defective before im-
provement and p2 is the proportion defective after
improvement.
(b) Is there information in the confidence interval
found in (a) that would suggest that p1 > p2? Ex-
plain.
9.109 A machine is used to fill boxes with product
in an assembly line operation. Much concern centers
around the variability in the number of ounces of prod-
uct in a box. The standard deviation in weight of prod-
uct is known to be 0.3 ounce. An improvement is im-
plemented, after which a random sample of 20 boxes is
selected and the sample variance is found to be 0.045
ounce². Find a 95% confidence interval on the variance
in the weight of the product. Does it appear from the
range of the confidence interval that the improvement
of the process enhanced quality as far as variability is
concerned? Assume normality on the distribution of
weights of product.
9.110 A consumer group is interested in comparing
operating costs for two different types of automobile
engines. The group is able to find 15 owners whose
cars have engine type A and 15 whose cars have engine
type B. All 30 owners bought their cars at roughly the
same time, and all have kept good records for a cer-
tain 12-month period. In addition, these owners drove
roughly the same number of miles. The cost statistics
are ȳA = $87.00/1000 miles, ȳB = $75.00/1000 miles,
sA = $5.99, and sB = $4.85. Compute a 95% confi-
dence interval to estimate μA − μB, the difference in
the mean operating costs. Assume normality and equal
variances.
9.111 Consider the statistic Sp², the pooled estimate
of σ² discussed in Section 9.8. It is used when one is
willing to assume that σ1² = σ2² = σ². Show that the es-
timator is unbiased for σ² [i.e., show that E(Sp²) = σ²].
You may make use of results from any theorem or ex-
ample in this chapter.
9.112 A group of human factors researchers is concerned
about reaction to a stimulus by airplane pilots
in a certain cockpit arrangement. An experiment was
conducted in a simulation laboratory, and 15 pilots
were used with average reaction time of 3.2 seconds
with a sample standard deviation of 0.6 second. It is
of interest to characterize the extreme (i.e., worst case
scenario). To that end, do the following:
(a) Give a particular important one-sided 99% confi-
dence bound on the mean reaction time. What
assumption, if any, must you make on the distribu-
tion of reaction times?
(b) Give a 99% one-sided prediction interval and give
an interpretation of what it means. Must you make
an assumption about the distribution of reaction
times to compute this bound?
(c) Compute a one-sided tolerance bound with 99%
confidence that involves 95% of reaction times.
Again, give an interpretation and assumptions
about the distribution, if any. (Note: The one-
sided tolerance limit values are also included in Ta-
ble A.7.)
9.113 A certain supplier manufactures a type of rub-
ber mat that is sold to automotive companies. The
material used to produce the mats must have certain
hardness characteristics. Defective mats are occasion-
ally discovered and rejected. The supplier claims that
the proportion defective is 0.05. A challenge was made
by one of the clients who purchased the mats, so an
experiment was conducted in which 400 mats were tested
and 17 were found defective.
(a) Compute a 95% two-sided confidence interval on
the proportion defective.
(b) Compute an appropriate 95% one-sided confidence
interval on the proportion defective.
(c) Interpret both intervals from (a) and (b) and com-
ment on the claim made by the supplier.
9.15 Potential Misconceptions and Hazards;
Relationship to Material in Other Chapters
The concept of a large-sample confidence interval on a population is often confusing
to the beginning student. It is based on the notion that even when σ is unknown
and one is not convinced that the distribution being sampled is normal, a confidence
interval on μ can be computed from
\[
\bar{x} \pm z_{\alpha/2}\,\frac{s}{\sqrt{n}}.
\]
In practice, this formula is often used when the sample is too small. The genesis of
this large sample interval is, of course, the Central Limit Theorem (CLT), under
which normality is not necessary. Here the CLT requires a known σ, of which s
is only an estimate. Thus, n must be at least as large as 30 and the underly-
ing distribution must be close to symmetric, in which case the interval is still an
approximation.
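As a concrete illustration of the large-sample formula, the sketch below computes a 95% interval for hypothetical summary statistics (the numbers x̄ = 72, s = 4, n = 36 are invented for illustration), with z0.025 = 1.96.

```python
import math

# Hypothetical summary statistics (illustrative only).
xbar, s, n = 72.0, 4.0, 36
z = 1.96  # z_{alpha/2} for a 95% confidence interval

margin = z * s / math.sqrt(n)
lower, upper = xbar - margin, xbar + margin
print(lower, upper)  # approximately 70.69 and 73.31
```

Note that n = 36 satisfies the n ≥ 30 guideline mentioned above; for smaller samples the t-based interval of this chapter is the appropriate tool.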
There are instances in which the appropriateness of the practical application
of material in this chapter depends very much on the specific context. One very
important illustration is the use of the t-distribution for the confidence interval
on μ when σ is unknown. Strictly speaking, the use of the t-distribution requires
that the distribution sampled from be normal. However, it is well known that
any application of the t-distribution is reasonably insensitive (i.e., robust) to the
normality assumption. This represents one of those fortunate situations which
occur often in the field of statistics in which a basic assumption does not hold
and yet “everything turns out all right!” However, the population from which
the sample is drawn cannot deviate substantially from normal. Thus, the normal
probability plots discussed in Chapter 8 and the goodness-of-fit tests introduced
in Chapter 10 often need to be called upon to ascertain some sense of “nearness to
normality.” This idea of “robustness to normality” will reappear in Chapter 10.
It is our experience that one of the most serious “misuses of statistics” in prac-
tice evolves from confusion about distinctions in the interpretation of the types of
statistical intervals. Thus, the subsection in this chapter where differences among
the three types of intervals are discussed is important. It is very likely that in
practice the confidence interval is heavily overused. That is, it is used when
there is really no interest in the mean; rather, the question is “Where is the next
observation going to fall?” or often, more importantly, “Where is the large bulk of
the distribution?” These are crucial questions that are not answered by comput-
ing an interval on the mean. The interpretation of a confidence interval is often
misunderstood. It is tempting to conclude that the parameter falls inside the in-
terval with probability 0.95. While this is a correct interpretation of a Bayesian
posterior interval (readers are referred to Chapter 18 for more information on
Bayesian inference), it is not the proper frequency interpretation.
A confidence interval merely suggests that if the experiment is conducted and
data are observed again and again, about 95% of such intervals will contain the
true parameter. Any beginning student of practical statistics should be very clear
on the difference among these statistical intervals.
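The frequency interpretation just described can be demonstrated by simulation: sample repeatedly from a known normal population, build the large-sample 95% interval each time, and count how often the interval covers the true mean. In the sketch below (sample size, repetition count, and seed are arbitrary choices) the observed proportion should be near 0.95.

```python
import math
import random

random.seed(42)  # fixed seed so the demonstration is reproducible

TRUE_MU, TRUE_SIGMA = 0.0, 1.0
n, reps = 30, 2000
covered = 0

for _ in range(reps):
    sample = [random.gauss(TRUE_MU, TRUE_SIGMA) for _ in range(n)]
    xbar = sum(sample) / n
    s2 = sum((x - xbar)**2 for x in sample) / (n - 1)
    margin = 1.96 * math.sqrt(s2 / n)
    if xbar - margin < TRUE_MU < xbar + margin:
        covered += 1

coverage = covered / reps
print(coverage)  # close to 0.95
```

No single interval "contains μ with probability 0.95"; it is the long-run proportion of intervals, as computed here, that carries the 95% guarantee.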
Another potentially serious misuse of statistics centers around the use of the
χ²-distribution for a confidence interval on a single variance. Again, normality of
the distribution from which the sample is drawn is assumed. Unlike the use of the
t-distribution, the use of the χ² statistic for this application is not robust to the nor-
mality assumption (i.e., the sampling distribution of (n − 1)S²/σ² deviates far from
χ² if the underlying distribution is not normal). Thus, strict use of goodness-of-fit
tests (Chapter 10) and/or normal probability plotting can be extremely important
in such contexts. More information about this general issue will be given in future
chapters.
Chapter 10
One- and Two-Sample Tests of
Hypotheses
10.1 Statistical Hypotheses: General Concepts
Often, the problem confronting the scientist or engineer is not so much the es-
timation of a population parameter, as discussed in Chapter 9, but rather the
formation of a data-based decision procedure that can produce a conclusion about
some scientific system. For example, a medical researcher may decide on the basis
of experimental evidence whether coffee drinking increases the risk of cancer in
humans; an engineer might have to decide on the basis of sample data whether
there is a difference between the accuracy of two kinds of gauges; or a sociologist
might wish to collect appropriate data to enable him or her to decide whether
a person’s blood type and eye color are independent variables. In each of these
cases, the scientist or engineer postulates or conjectures something about a system.
In addition, each must make use of experimental data and make a decision based
on the data. In each case, the conjecture can be put in the form of a statistical
hypothesis. Procedures that lead to the acceptance or rejection of statistical hy-
potheses such as these comprise a major area of statistical inference. First, let us
define precisely what we mean by a statistical hypothesis.
Definition 10.1: A statistical hypothesis is an assertion or conjecture concerning one or more
populations.
The truth or falsity of a statistical hypothesis is never known with absolute
certainty unless we examine the entire population. This, of course, would be im-
practical in most situations. Instead, we take a random sample from the population
of interest and use the data contained in this sample to provide evidence that either
supports or does not support the hypothesis. Evidence from the sample that is
inconsistent with the stated hypothesis leads to a rejection of the hypothesis.
319
320 Chapter 10 One- and Two-Sample Tests of Hypotheses
The Role of Probability in Hypothesis Testing
It should be made clear to the reader that the decision procedure must include an
awareness of the probability of a wrong conclusion. For example, suppose that the
hypothesis postulated by the engineer is that the fraction defective p in a certain
process is 0.10. The experiment is to observe a random sample of the product
in question. Suppose that 100 items are tested and 12 items are found defective.
It is reasonable to conclude that this evidence does not refute the condition that
the binomial parameter p = 0.10, and thus it may lead one not to reject the
hypothesis. However, it also does not refute p = 0.12 or perhaps even p = 0.15.
As a result, the reader must be accustomed to understanding that rejection of a
hypothesis implies that the sample evidence refutes it. Put another way,
rejection means that there is a small probability of obtaining the sample
information observed when, in fact, the hypothesis is true. For example,
for our proportion-defective hypothesis, a sample of 100 revealing 20 defective items
is certainly evidence for rejection. Why? If, indeed, p = 0.10, the probability of
obtaining 20 or more defectives is approximately 0.002. With the resulting small
risk of a wrong conclusion, it would seem safe to reject the hypothesis that
p = 0.10. In other words, rejection of a hypothesis tends to all but “rule out” the
hypothesis. On the other hand, it is very important to emphasize that acceptance
or, rather, failure to reject does not rule out other possibilities. As a result, the
firm conclusion is established by the data analyst when a hypothesis is rejected.
The formal statement of a hypothesis is often influenced by the structure of the
probability of a wrong conclusion. If the scientist is interested in strongly supporting
a contention, he or she hopes to arrive at the contention in the form of rejection of a
hypothesis. If the medical researcher wishes to show strong evidence in favor of the
contention that coffee drinking increases the risk of cancer, the hypothesis tested
should be of the form “there is no increase in cancer risk produced by drinking
coffee.” As a result, the contention is reached via a rejection. Similarly, to support
the claim that one kind of gauge is more accurate than another, the engineer tests
the hypothesis that there is no difference in the accuracy of the two kinds of gauges.
The foregoing implies that when the data analyst formalizes experimental evi-
dence on the basis of hypothesis testing, the formal statement of the hypothesis
is very important.
The Null and Alternative Hypotheses
The structure of hypothesis testing will be formulated with the use of the term
null hypothesis, which refers to any hypothesis we wish to test and is denoted
by H0. The rejection of H0 leads to the acceptance of an alternative hypoth-
esis, denoted by H1. An understanding of the different roles played by the null
hypothesis (H0) and the alternative hypothesis (H1) is crucial to one’s understand-
ing of the rudiments of hypothesis testing. The alternative hypothesis H1 usually
represents the question to be answered or the theory to be tested, and thus its spec-
ification is crucial. The null hypothesis H0 nullifies or opposes H1 and is often the
logical complement to H1. As the reader gains more understanding of hypothesis
testing, he or she should note that the analyst arrives at one of the two following
conclusions:
reject H0 in favor of H1 because of sufficient evidence in the data or
fail to reject H0 because of insufficient evidence in the data.
Note that the conclusions do not involve a formal and literal “accept H0.” The
statement of H0 often represents the “status quo” in opposition to the new idea,
conjecture, and so on, stated in H1, while failure to reject H0 represents the proper
conclusion. In our binomial example, the practical issue may be a concern that
the historical defective probability of 0.10 no longer is true. Indeed, the conjecture
may be that p exceeds 0.10. We may then state
H0: p = 0.10,
H1: p > 0.10.
Now 12 defective items out of 100 does not refute p = 0.10, so the conclusion is
“fail to reject H0.” However, if the data produce 20 out of 100 defective items,
then the conclusion is “reject H0” in favor of H1: p > 0.10.
Though the applications of hypothesis testing are quite abundant in scientific
and engineering work, perhaps the best illustration for a novice lies in the predica-
ment encountered in a jury trial. The null and alternative hypotheses are
H0: defendant is innocent,
H1: defendant is guilty.
The indictment comes because of suspicion of guilt. The hypothesis H0 (the status
quo) stands in opposition to H1 and is maintained unless H1 is supported by
evidence “beyond a reasonable doubt.” However, “failure to reject H0” in this case
does not imply innocence, but merely that the evidence was insufficient to convict.
So the jury does not necessarily accept H0 but fails to reject H0.
10.2 Testing a Statistical Hypothesis
To illustrate the concepts used in testing a statistical hypothesis about a popula-
tion, we present the following example. A certain type of cold vaccine is known to
be only 25% effective after a period of 2 years. To determine if a new and some-
what more expensive vaccine is superior in providing protection against the same
virus for a longer period of time, suppose that 20 people are chosen at random and
inoculated. (In an actual study of this type, the participants receiving the new
vaccine might number several thousand. The number 20 is being used here only
to demonstrate the basic steps in carrying out a statistical test.) If more than 8 of
those receiving the new vaccine surpass the 2-year period without contracting the
virus, the new vaccine will be considered superior to the one presently in use. The
requirement that the number exceed 8 is somewhat arbitrary but appears reason-
able in that it represents a modest gain over the 5 people who could be expected to
receive protection if the 20 people had been inoculated with the vaccine already in
use. We are essentially testing the null hypothesis that the new vaccine is equally
effective after a period of 2 years as the one now commonly used. The alternative
hypothesis is that the new vaccine is in fact superior. This is equivalent to testing
the hypothesis that the binomial parameter for the probability of a success on a
given trial is p = 1/4 against the alternative that p > 1/4. This is usually written
as follows:
H0: p = 0.25,
H1: p > 0.25.
The Test Statistic
The test statistic on which we base our decision is X, the number of individuals
in our test group who receive protection from the new vaccine for a period of at
least 2 years. The possible values of X, from 0 to 20, are divided into two groups:
those numbers less than or equal to 8 and those greater than 8. All possible scores
greater than 8 constitute the critical region. The last number that we observe
in passing into the critical region is called the critical value. In our illustration,
the critical value is the number 8. Therefore, if x > 8, we reject H0 in favor of the
alternative hypothesis H1. If x ≤ 8, we fail to reject H0. This decision criterion is
illustrated in Figure 10.1.
[Figure: number line for x = 0, 1, . . . , 20; values x ≤ 8 labeled “Do not reject H0 (p = 0.25)” and values x > 8 labeled “Reject H0 (p > 0.25)”]
Figure 10.1: Decision criterion for testing p = 0.25 versus p > 0.25.
The Probability of a Type I Error
The decision procedure just described could lead to either of two wrong conclusions.
For instance, the new vaccine may be no better than the one now in use (H0 true)
and yet, in this particular randomly selected group of individuals, more than 8
surpass the 2-year period without contracting the virus. We would be committing
an error by rejecting H0 in favor of H1 when, in fact, H0 is true. Such an error is
called a type I error.
Definition 10.2: Rejection of the null hypothesis when it is true is called a type I error.
A second kind of error is committed if 8 or fewer of the group surpass the 2-year
period successfully and we are unable to conclude that the vaccine is better when
it actually is better (H1 true). Thus, in this case, we fail to reject H0 when in fact
H0 is false. This is called a type II error.
Definition 10.3: Nonrejection of the null hypothesis when it is false is called a type II error.
In testing any statistical hypothesis, there are four possible situations that
determine whether our decision is correct or in error. These four situations are
summarized in Table 10.1.
Table 10.1: Possible Situations for Testing a Statistical Hypothesis
H0 is true H0 is false
Do not reject H0 Correct decision Type II error
Reject H0 Type I error Correct decision
The probability of committing a type I error, also called the level of signif-
icance, is denoted by the Greek letter α. In our illustration, a type I error will
occur when more than 8 individuals inoculated with the new vaccine surpass the
2-year period without contracting the virus and researchers conclude that the new
vaccine is better when it is actually equivalent to the one in use. Hence, if X is
the number of individuals who remain free of the virus for at least 2 years,
α = P(type I error) = P(X > 8 when p = 1/4) = Σ_{x=9}^{20} b(x; 20, 1/4)
  = 1 − Σ_{x=0}^{8} b(x; 20, 1/4) = 1 − 0.9591 = 0.0409.
We say that the null hypothesis, p = 1/4, is being tested at the α = 0.0409 level of
significance. Sometimes the level of significance is called the size of the test. A
critical region of size 0.0409 is very small, and therefore it is unlikely that a type
I error will be committed. Consequently, it would be most unusual for more than
8 individuals to remain immune to a virus for a 2-year period using a new vaccine
that is essentially equivalent to the one now on the market.
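The tail probability above can be checked numerically. A minimal sketch in Python using only the standard library (the function name `binom_pmf` is ours):

```python
from math import comb

def binom_pmf(x, n, p):
    """Binomial probability b(x; n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# alpha = P(X > 8 when p = 1/4) = 1 - P(X <= 8)
alpha = 1 - sum(binom_pmf(x, 20, 0.25) for x in range(9))
# agrees with the text: approximately 0.0409
```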
The Probability of a Type II Error
The probability of committing a type II error, denoted by β, is impossible to com-
pute unless we have a specific alternative hypothesis. If we test the null hypothesis
that p = 1/4 against the alternative hypothesis that p = 1/2, then we are able
to compute the probability of not rejecting H0 when it is false. We simply find
the probability of obtaining 8 or fewer in the group that surpass the 2-year period
when p = 1/2. In this case,
β = P(type II error) = P(X ≤ 8 when p = 1/2) = Σ_{x=0}^{8} b(x; 20, 1/2) = 0.2517.
This is a rather high probability, indicating a test procedure in which it is quite
likely that we shall reject the new vaccine when, in fact, it is superior to what is
now in use. Ideally, we like to use a test procedure for which the type I and type
II error probabilities are both small.
It is possible that the director of the testing program is willing to make a type
II error if the more expensive vaccine is not significantly superior. In fact, the only
time he wishes to guard against the type II error is when the true value of p is at
least 0.7. If p = 0.7, this test procedure gives
β = P(type II error) = P(X ≤ 8 when p = 0.7)
  = Σ_{x=0}^{8} b(x; 20, 0.7) = 0.0051.
With such a small probability of committing a type II error, it is extremely unlikely
that the new vaccine would be rejected when it was 70% effective after a period of
2 years. As the alternative hypothesis approaches unity, the value of β diminishes
to zero.
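Both type II error probabilities can be verified with the same binomial sum, again as a standard-library sketch (helper names are ours):

```python
from math import comb

def binom_pmf(x, n, p):
    """Binomial probability b(x; n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ binomial(n, p)."""
    return sum(binom_pmf(x, n, p) for x in range(k + 1))

beta_half = binom_cdf(8, 20, 0.5)  # P(X <= 8 when p = 1/2), about 0.2517
beta_07 = binom_cdf(8, 20, 0.7)    # P(X <= 8 when p = 0.7), about 0.0051
```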
The Role of α, β, and Sample Size
Let us assume that the director of the testing program is unwilling to commit a
type II error when the alternative hypothesis p = 1/2 is true, even though we have
found the probability of such an error to be β = 0.2517. It is always possible to
reduce β by increasing the size of the critical region. For example, consider what
happens to the values of α and β when we change our critical value to 7 so that
all scores greater than 7 fall in the critical region and those less than or equal to
7 fall in the nonrejection region. Now, in testing p = 1/4 against the alternative
hypothesis that p = 1/2, we find that
α = Σ_{x=8}^{20} b(x; 20, 1/4) = 1 − Σ_{x=0}^{7} b(x; 20, 1/4) = 1 − 0.8982 = 0.1018
and
β = Σ_{x=0}^{7} b(x; 20, 1/2) = 0.1316.
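The trade-off from moving the critical value to 7 is easy to reproduce; the sketch below (standard library only, helper names ours) recovers both probabilities:

```python
from math import comb

def binom_pmf(x, n, p):
    """Binomial probability b(x; n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ binomial(n, p)."""
    return sum(binom_pmf(x, n, p) for x in range(k + 1))

# Critical value 7: reject H0 when X > 7.
alpha = 1 - binom_cdf(7, 20, 0.25)  # about 0.1018 (was 0.0409 with critical value 8)
beta = binom_cdf(7, 20, 0.5)        # about 0.1316 (was 0.2517 with critical value 8)
```

Enlarging the critical region lowers β at the cost of a larger α, exactly as the text describes.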
By adopting a new decision procedure, we have reduced the probability of com-
mitting a type II error at the expense of increasing the probability of committing
a type I error. For a fixed sample size, a decrease in the probability of one error
will usually result in an increase in the probability of the other error. Fortunately,
the probability of committing both types of error can be reduced by
increasing the sample size. Consider the same problem using a random sample
of 100 individuals. If more than 36 of the group surpass the 2-year period, we
reject the null hypothesis that p = 1/4 and accept the alternative hypothesis that
p > 1/4. The critical value is now 36. All possible scores above 36 constitute the
critical region, and all possible scores less than or equal to 36 fall in the acceptance
region.
To determine the probability of committing a type I error, we shall use the
normal curve approximation with
μ = np = (100)(1/4) = 25 and σ = √(npq) = √((100)(1/4)(3/4)) = 4.33.
Referring to Figure 10.2, we need the area under the normal curve to the right of
x = 36.5. The corresponding z-value is
z = (36.5 − 25)/4.33 = 2.66.
Figure 10.2: Probability of a type I error.
From Table A.3 we find that
α = P(type I error) = P(X > 36 when p = 1/4) ≈ P(Z > 2.66)
  = 1 − P(Z < 2.66) = 1 − 0.9961 = 0.0039.
If H0 is false and the true value of H1 is p = 1/2, we can determine the
probability of a type II error using the normal curve approximation with
μ = np = (100)(1/2) = 50 and σ = √(npq) = √((100)(1/2)(1/2)) = 5.
The probability of a value falling in the nonrejection region when H1 is true is
given by the area of the shaded region to the left of x = 36.5 in Figure 10.3. The
z-value corresponding to x = 36.5 is
z = (36.5 − 50)/5 = −2.7.
Figure 10.3: Probability of a type II error.
Therefore,
β = P(type II error) = P(X ≤ 36 when p = 1/2) ≈ P(Z < −2.7) = 0.0035.
Obviously, the type I and type II errors will rarely occur if the experiment consists
of 100 individuals.
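The normal approximation with continuity correction can be sketched in a few lines of standard-library Python (the helper `phi` is ours, built from the error function):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Under H0 (p = 1/4): mu = 25, sigma = 4.33; reject H0 when X > 36.
mu0, sd0 = 100 * 0.25, sqrt(100 * 0.25 * 0.75)
alpha = 1 - phi((36.5 - mu0) / sd0)  # continuity correction at 36.5; about 0.0039

# Under the alternative p = 1/2: mu = 50, sigma = 5.
mu1, sd1 = 100 * 0.5, sqrt(100 * 0.5 * 0.5)
beta = phi((36.5 - mu1) / sd1)       # about 0.0035
```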
The illustration above underscores the strategy of the scientist in hypothesis
testing. After the null and alternative hypotheses are stated, it is important to
consider the sensitivity of the test procedure. By this we mean that there should
be a determination, for a fixed α, of a reasonable value for the probability of
wrongly accepting H0 (i.e., the value of β) when the true situation represents some
important deviation from H0. A value for the sample size can usually be determined
for which there is a reasonable balance between the values of α and β computed
in this fashion. The vaccine problem provides an illustration.
Illustration with a Continuous Random Variable
The concepts discussed here for a discrete population can be applied equally well
to continuous random variables. Consider the null hypothesis that the average
weight of male students in a certain college is 68 kilograms against the alternative
hypothesis that it is unequal to 68. That is, we wish to test
H0: μ = 68,
H1: μ ≠ 68.
The alternative hypothesis allows for the possibility that μ < 68 or μ > 68.
A sample mean that falls close to the hypothesized value of 68 would be consid-
ered evidence in favor of H0. On the other hand, a sample mean that is considerably
less than or more than 68 would be evidence inconsistent with H0 and therefore
favoring H1. The sample mean is the test statistic in this case. A critical region
for the test statistic might arbitrarily be chosen to be the two intervals x̄ < 67
and x̄ > 69. The nonrejection region will then be the interval 67 ≤ x̄ ≤ 69. This
decision criterion is illustrated in Figure 10.4.
[Figure: x̄ axis showing critical regions x̄ < 67 (“Reject H0, μ < 68”) and x̄ > 69 (“Reject H0, μ > 68”), with nonrejection region 67 ≤ x̄ ≤ 69 (“Do not reject H0, μ = 68”)]
Figure 10.4: Critical region (in blue).
Let us now use the decision criterion of Figure 10.4 to calculate the probabilities
of committing type I and type II errors when testing the null hypothesis that μ = 68
kilograms against the alternative that μ ≠ 68 kilograms.
Assume the standard deviation of the population of weights to be σ = 3.6. For
large samples, we may substitute s for σ if no other estimate of σ is available.
Our decision statistic, based on a random sample of size n = 36, will be X̄, the
most efficient estimator of μ. From the Central Limit Theorem, we know that
the sampling distribution of X̄ is approximately normal with standard deviation
σ_X̄ = σ/√n = 3.6/6 = 0.6.
The probability of committing a type I error, or the level of significance of our
test, is equal to the sum of the areas that have been shaded in each tail of the
distribution in Figure 10.5. Therefore,
α = P(X̄ < 67 when μ = 68) + P(X̄ > 69 when μ = 68).
Figure 10.5: Critical region for testing μ = 68 versus μ ≠ 68.
The z-values corresponding to x̄1 = 67 and x̄2 = 69 when H0 is true are
z1 = (67 − 68)/0.6 = −1.67 and z2 = (69 − 68)/0.6 = 1.67.
Therefore,
α = P(Z < −1.67) + P(Z > 1.67) = 2P(Z < −1.67) = 0.0950.
Thus, 9.5% of all samples of size 36 would lead us to reject μ = 68 kilograms when,
in fact, it is true. To reduce α, we have a choice of increasing the sample size
or widening the fail-to-reject region. Suppose that we increase the sample size to
n = 64. Then σX̄ = 3.6/8 = 0.45. Now
z1 = (67 − 68)/0.45 = −2.22 and z2 = (69 − 68)/0.45 = 2.22.
Hence,
α = P(Z < −2.22) + P(Z > 2.22) = 2P(Z < −2.22) = 0.0264.
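The effect of sample size on α is easy to reproduce; a minimal standard-library sketch (function and parameter names are ours):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def two_tailed_alpha(n, mu0=68.0, sigma=3.6, lo=67.0, hi=69.0):
    """P(Xbar < lo) + P(Xbar > hi) when the true mean is mu0."""
    se = sigma / sqrt(n)
    return phi((lo - mu0) / se) + (1 - phi((hi - mu0) / se))

alpha_36 = two_tailed_alpha(36)  # about 0.095 (text: 0.0950, using z = 1.67)
alpha_64 = two_tailed_alpha(64)  # about 0.026 (text: 0.0264, using z = 2.22)
```

The small discrepancies with the text come from rounding z to two decimal places before using Table A.3.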
The reduction in α is not sufficient by itself to guarantee a good testing proce-
dure. We must also evaluate β for various alternative hypotheses. If it is important
to reject H0 when the true mean is some value μ ≥ 70 or μ ≤ 66, then the prob-
ability of committing a type II error should be computed and examined for the
alternatives μ = 66 and μ = 70. Because of symmetry, it is only necessary to
consider the probability of not rejecting the null hypothesis that μ = 68 when the
alternative μ = 70 is true. A type II error will result when the sample mean x̄ falls
between 67 and 69 when H1 is true. Therefore, referring to Figure 10.6, we find
that
β = P(67 ≤ X̄ ≤ 69 when μ = 70).
Figure 10.6: Probability of type II error for testing μ = 68 versus μ = 70.
The z-values corresponding to x̄1 = 67 and x̄2 = 69 when H1 is true are
z1 = (67 − 70)/0.45 = −6.67 and z2 = (69 − 70)/0.45 = −2.22.
Therefore,
β = P(−6.67 < Z < −2.22) = P(Z < −2.22) − P(Z < −6.67) = 0.0132 − 0.0000 = 0.0132.
If the true value of μ is the alternative μ = 66, the value of β will again be
0.0132. For all possible values of μ < 66 or μ > 70, the value of β will be even
smaller when n = 64, and consequently there would be little chance of not rejecting
H0 when it is false.
The probability of committing a type II error increases rapidly when the true
value of μ approaches, but is not equal to, the hypothesized value. Of course, this
is usually the situation where we do not mind making a type II error. For example,
if the alternative hypothesis μ = 68.5 is true, we do not mind committing a type
II error by concluding that the true answer is μ = 68. The probability of making
such an error will be high when n = 64. Referring to Figure 10.7, we have
β = P(67 ≤ X̄ ≤ 69 when μ = 68.5).
The z-values corresponding to x̄1 = 67 and x̄2 = 69 when μ = 68.5 are
z1 = (67 − 68.5)/0.45 = −3.33 and z2 = (69 − 68.5)/0.45 = 1.11.
Therefore,
β = P(−3.33 < Z < 1.11) = P(Z < 1.11) − P(Z < −3.33) = 0.8665 − 0.0004 = 0.8661.
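Both type II error probabilities follow from the same normal calculation, sketched below (standard library only; names are ours):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def beta_at(mu_true, n=64, sigma=3.6, lo=67.0, hi=69.0):
    """P(lo <= Xbar <= hi) when the true mean is mu_true."""
    se = sigma / sqrt(n)  # 0.45 for n = 64
    return phi((hi - mu_true) / se) - phi((lo - mu_true) / se)

beta_70 = beta_at(70.0)   # about 0.013 (text: 0.0132)
beta_685 = beta_at(68.5)  # about 0.866 (text: 0.8661)
```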
The preceding examples illustrate the following important properties:
Figure 10.7: Type II error for testing μ = 68 versus μ = 68.5.
Important Properties of a Test of Hypothesis
1. The type I error and type II error are related. A decrease in the probability
of one generally results in an increase in the probability of the other.
2. The size of the critical region, and therefore the probability of committing
a type I error, can always be reduced by adjusting the critical value(s).
3. An increase in the sample size n will reduce α and β simultaneously.
4. If the null hypothesis is false, β is a maximum when the true value of a
parameter approaches the hypothesized value. The greater the distance
between the true value and the hypothesized value, the smaller β will be.
One very important concept that relates to error probabilities is the notion of
the power of a test.
Definition 10.4: The power of a test is the probability of rejecting H0 given that a specific alter-
native is true.
The power of a test can be computed as 1 − β. Often different types of
tests are compared by contrasting power properties. Consider the previous
illustration, in which we were testing H0: μ = 68 and H1: μ ≠ 68. As before,
suppose we are interested in assessing the sensitivity of the test. The test is gov-
erned by the rule that we do not reject H0 if 67 ≤ x̄ ≤ 69. We seek the capability
of the test to properly reject H0 when indeed μ = 68.5. We have seen that the
probability of a type II error is given by β = 0.8661. Thus, the power of the test
is 1 − 0.8661 = 0.1339. In a sense, the power is a more succinct measure of how
sensitive the test is for detecting differences between a mean of 68 and a mean
of 68.5. In this case, if μ is truly 68.5, the test as described will properly reject
H0 only 13.39% of the time. As a result, the test would not be a good one if it
was important that the analyst have a reasonable chance of truly distinguishing
between a mean of 68.0 (specified by H0) and a mean of 68.5. From the foregoing,
it is clear that to produce a desirable power (say, greater than 0.8), one must either
increase α or increase the sample size.
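The power computation is the complement of the type II error probability; a minimal sketch (standard library only, names ours) that recovers the 13.39% figure:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power(mu_true, n=64, sigma=3.6, lo=67.0, hi=69.0):
    """Probability of rejecting H0: mu = 68 when the true mean is mu_true."""
    se = sigma / sqrt(n)
    beta = phi((hi - mu_true) / se) - phi((lo - mu_true) / se)
    return 1 - beta

p_685 = power(68.5)  # about 0.134 (text: 0.1339)
p_70 = power(70.0)   # close to 0.99: distant alternatives are detected reliably
```

Evaluating `power` over a grid of `mu_true` values traces out the test's power curve, which is how different tests are typically compared.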
So far in this chapter, much of the discussion of hypothesis testing has focused
on foundations and definitions. In the sections that follow, we get more specific
and put hypotheses in categories as well as discuss tests of hypotheses on various
parameters of interest. We begin by drawing the distinction between a one-sided
and a two-sided hypothesis.
One- and Two-Tailed Tests
A test of any statistical hypothesis where the alternative is one sided, such as
H0: θ = θ0,
H1: θ > θ0
or perhaps
H0: θ = θ0,
H1: θ < θ0,
is called a one-tailed test. Earlier in this section, we referred to the test statistic
for a hypothesis. Generally, the critical region for the alternative hypothesis θ > θ0
lies in the right tail of the distribution of the test statistic, while the critical region
for the alternative hypothesis θ < θ0 lies entirely in the left tail. (In a sense,
the inequality symbol points in the direction of the critical region.) A one-tailed
test was used in the vaccine experiment to test the hypothesis p = 1/4 against
the one-sided alternative p > 1/4 for the binomial distribution. The one-tailed
critical region is usually obvious; the reader should visualize the behavior of the
test statistic and notice the obvious signal that would produce evidence supporting
the alternative hypothesis.
A test of any statistical hypothesis where the alternative is two sided, such as
H0: θ = θ0,
H1: θ ≠ θ0,
is called a two-tailed test, since the critical region is split into two parts, often
having equal probabilities, in each tail of the distribution of the test statistic. The
alternative hypothesis θ ≠ θ0 states that either θ < θ0 or θ > θ0. A two-tailed
test was used to test the null hypothesis that μ = 68 kilograms against the two-
sided alternative μ ≠ 68 kilograms in the example of the continuous population of
student weights.
How Are the Null and Alternative Hypotheses Chosen?
The null hypothesis H0 will often be stated using the equality sign. With this
approach, it is clear how the probability of type I error is controlled. However,
there are situations in which “do not reject H0” implies that the parameter θ might
be any value defined by the natural complement to the alternative hypothesis. For
example, in the vaccine example, where the alternative hypothesis is H1: p > 1/4,
it is quite possible that nonrejection of H0 cannot rule out a value of p less than
1/4. Clearly though, in the case of one-tailed tests, the statement of the alternative
is the most important consideration.
Whether one sets up a one-tailed or a two-tailed test will depend on the con-
clusion to be drawn if H0 is rejected. The location of the critical region can be
determined only after H1 has been stated. For example, in testing a new drug, one
sets up the hypothesis that it is no better than similar drugs now on the market and
tests this against the alternative hypothesis that the new drug is superior. Such
an alternative hypothesis will result in a one-tailed test with the critical region
in the right tail. However, if we wish to compare a new teaching technique with
the conventional classroom procedure, the alternative hypothesis should allow for
the new approach to be either inferior or superior to the conventional procedure.
Hence, the test is two-tailed with the critical region divided equally so as to fall in
the extreme left and right tails of the distribution of our statistic.
Example 10.1: A manufacturer of a certain brand of rice cereal claims that the average saturated
fat content does not exceed 1.5 grams per serving. State the null and alternative
hypotheses to be used in testing this claim and determine where the critical region
is located.
Solution: The manufacturer’s claim should be rejected only if μ is greater than 1.5 grams
and should not be rejected if μ is less than or equal to 1.5 grams. We test
H0: μ = 1.5,
H1: μ > 1.5.
Nonrejection of H0 does not rule out values less than 1.5 grams. Since we
have a one-tailed test, the greater than symbol indicates that the critical region
lies entirely in the right tail of the distribution of our test statistic X̄.
Example 10.2: A real estate agent claims that 60% of all private residences being built today are
3-bedroom homes. To test this claim, a large sample of new residences is inspected;
the proportion of these homes with 3 bedrooms is recorded and used as the test
statistic. State the null and alternative hypotheses to be used in this test and
determine the location of the critical region.
Solution: If the test statistic were substantially higher or lower than p = 0.6, we would reject
the agent’s claim. Hence, we should make the hypothesis
H0: p = 0.6,
H1: p ≠ 0.6.
The alternative hypothesis implies a two-tailed test with the critical region divided
equally in both tails of the distribution of P̂, our test statistic.
10.3 The Use of P -Values for Decision Making in Testing
Hypotheses
In testing hypotheses in which the test statistic is discrete, the critical region may
be chosen arbitrarily and its size determined. If α is too large, it can be reduced
by making an adjustment in the critical value. It may be necessary to increase the
sample size to offset the decrease that occurs automatically in the power of the
test.
Over a number of generations of statistical analysis, it had become customary
to choose an α of 0.05 or 0.01 and select the critical region accordingly. Then, of
course, strict rejection or nonrejection of H0 would depend on that critical region.
For example, if the test is two tailed and α is set at the 0.05 level of significance
and the test statistic involves, say, the standard normal distribution, then a z-value
is observed from the data and the critical region is
z > 1.96 or z < −1.96,
where the value 1.96 is found as z0.025 in Table A.3. A value of z in the critical
region prompts the statement “The value of the test statistic is significant,” which
we can then translate into the user’s language. For example, if the hypothesis is
given by
H0: μ = 10,
H1: μ ≠ 10,
one might say, “The mean differs significantly from the value 10.”
Preselection of a Significance Level
This preselection of a significance level α has its roots in the philosophy that
the maximum risk of making a type I error should be controlled. However, this
approach does not account for values of test statistics that are “close” to the
critical region. Suppose, for example, in the illustration with H0 : μ = 10 versus
H1: μ ≠ 10, a value of z = 1.87 is observed; strictly speaking, with α = 0.05, the
value is not significant. But the risk of committing a type I error if one rejects H0
in this case could hardly be considered severe. In fact, in a two-tailed scenario, one
can quantify this risk as
P = 2P(Z > 1.87 when μ = 10) = 2(0.0307) = 0.0614.
As a result, 0.0614 is the probability of obtaining a value of z as large as or larger
(in magnitude) than 1.87 when in fact μ = 10. Although this evidence against H0
is not as strong as that which would result from rejection at an α = 0.05 level, it
is important information to the user. Indeed, continued use of α = 0.05 or 0.01 is
only a result of what standards have been passed down through the generations.
The P-value approach has been adopted extensively by users of applied
statistics. The approach is designed to give the user an alternative (in terms
of a probability) to a mere “reject” or “do not reject” conclusion. The P-value
computation also gives the user important information when the z-value falls well
into the ordinary critical region. For example, if z is 2.73, it is informative for the
user to observe that
P = 2(0.0032) = 0.0064,
and thus the z-value is significant at a level considerably less than 0.05. It is
important to know that under the condition of H0, a value of z = 2.73 is an
extremely rare event. That is, a value at least that large in magnitude would only
occur 64 times in 10,000 experiments.
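Both two-sided P-values can be reproduced with the standard normal CDF; a minimal standard-library sketch (function names are ours):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def two_sided_p(z):
    """Two-tailed P-value for an observed standard normal z."""
    return 2 * (1 - phi(abs(z)))

p_187 = two_sided_p(1.87)  # about 0.0614: not significant at alpha = 0.05, but close
p_273 = two_sided_p(2.73)  # about 0.0064: significant well beyond alpha = 0.05
```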
A Graphical Demonstration of a P-Value
One very simple way of explaining a P-value graphically is to consider two distinct
samples. Suppose that two materials are being considered for coating a particular
type of metal in order to inhibit corrosion. Specimens are obtained, and one
collection is coated with material 1 and one collection coated with material 2. The
sample sizes are n1 = n2 = 10, and corrosion is measured in percent of surface
area affected. The hypothesis is that the samples came from common distributions
with mean μ = 10. Let us assume that the population variance is 1.0. Then we
are testing
H0: μ1 = μ2 = 10.
Let Figure 10.8 represent a point plot of the data; the data are placed on the
distribution stated by the null hypothesis. Let us assume that the “×” data refer to
material 1 and the “◦” data refer to material 2. Now it seems clear that the data do
refute the null hypothesis. But how can this be summarized in one number? The
P-value can be viewed as simply the probability of obtaining these data
given that both samples come from the same distribution. Clearly, this
probability is quite small, say 0.00000001! Thus, the small P-value clearly refutes
H0, and the conclusion is that the population means are significantly different.
Figure 10.8: Data that are likely generated from populations having two different
means.
Use of the P-value approach as an aid in decision-making is quite natural, and
nearly all computer packages that provide hypothesis-testing computation print
out P-values along with values of the appropriate test statistic. The following is a
formal definition of a P-value.
Definition 10.5: A P-value is the lowest level (of significance) at which the observed value of the
test statistic is significant.
How Does the Use of P-Values Differ from Classic Hypothesis Testing?
It is tempting at this point to summarize the procedures associated with testing,
say, H0 : θ = θ0. However, the student who is a novice in this area should un-
derstand that there are differences in approach and philosophy between the classic
fixed α approach that is climaxed with either a “reject H0” or a “do not reject H0”
conclusion and the P-value approach. In the latter, no fixed α is determined and
conclusions are drawn on the basis of the size of the P-value in harmony with the
subjective judgment of the engineer or scientist. While modern computer software
will output P-values, nevertheless it is important that readers understand both
approaches in order to appreciate the totality of the concepts. Thus, we offer a
brief list of procedural steps for both the classical and the P-value approach.
Approach to
Hypothesis
Testing with
Fixed Probability
of Type I Error
1. State the null and alternative hypotheses.
2. Choose a fixed significance level α.
3. Choose an appropriate test statistic and establish the critical region based
on α.
4. Reject H0 if the computed test statistic is in the critical region. Otherwise,
do not reject.
5. Draw scientific or engineering conclusions.
Significance
Testing (P-Value
Approach)
1. State null and alternative hypotheses.
2. Choose an appropriate test statistic.
3. Compute the P-value based on the computed value of the test statistic.
4. Use judgment based on the P-value and knowledge of the scientific system.
In later sections of this chapter and chapters that follow, many examples and
exercises emphasize the P-value approach to drawing scientific conclusions.
Exercises
10.1 Suppose that an allergist wishes to test the hy-
pothesis that at least 30% of the public is allergic to
some cheese products. Explain how the allergist could
commit
(a) a type I error;
(b) a type II error.
10.2 A sociologist is concerned about the effective-
ness of a training course designed to get more drivers
to use seat belts in automobiles.
(a) What hypothesis is she testing if she commits a
type I error by erroneously concluding that the
training course is ineffective?
(b) What hypothesis is she testing if she commits a
type II error by erroneously concluding that the
training course is effective?
10.3 A large manufacturing firm is being charged
with discrimination in its hiring practices.
(a) What hypothesis is being tested if a jury commits
a type I error by finding the firm guilty?
(b) What hypothesis is being tested if a jury commits
a type II error by finding the firm guilty?
10.4 A fabric manufacturer believes that the propor-
tion of orders for raw material arriving late is p = 0.6.
If a random sample of 10 orders shows that 3 or fewer
arrived late, the hypothesis that p = 0.6 should be
rejected in favor of the alternative p < 0.6. Use the
binomial distribution.
(a) Find the probability of committing a type I error
if the true proportion is p = 0.6.
(b) Find the probability of committing a type II error
for the alternatives p = 0.3, p = 0.4, and p = 0.5.
10.5 Repeat Exercise 10.4 but assume that 50 orders
are selected and the critical region is defined to be
x ≤ 24, where x is the number of orders in the sample
that arrived late. Use the normal approximation.
10.6 The proportion of adults living in a small town
who are college graduates is estimated to be p = 0.6.
To test this hypothesis, a random sample of 15 adults
is selected. If the number of college graduates in the
sample is anywhere from 6 to 12, we shall not reject
the null hypothesis that p = 0.6; otherwise, we shall
conclude that p ≠ 0.6.
(a) Evaluate α assuming that p = 0.6. Use the bino-
mial distribution.
(b) Evaluate β for the alternatives p = 0.5 and p = 0.7.
(c) Is this a good test procedure?
10.7 Repeat Exercise 10.6 but assume that 200 adults
are selected and the fail-to-reject region is defined to
be 110 ≤ x ≤ 130, where x is the number of college
graduates in our sample. Use the normal approxima-
tion.
10.8 In Relief from Arthritis published by Thorsons
Publishers, Ltd., John E. Croft claims that over 40%
of those who suffer from osteoarthritis receive measur-
able relief from an ingredient produced by a particular
species of mussel found off the coast of New Zealand.
To test this claim, the mussel extract is to be given to
a group of 7 osteoarthritic patients. If 3 or more of
the patients receive relief, we shall not reject the null
hypothesis that p = 0.4; otherwise, we conclude that
p < 0.4.
(a) Evaluate α, assuming that p = 0.4.
(b) Evaluate β for the alternative p = 0.3.
10.9 A dry cleaning establishment claims that a new
spot remover will remove more than 70% of the spots
to which it is applied. To check this claim, the spot
remover will be used on 12 spots chosen at random. If
fewer than 11 of the spots are removed, we shall not
reject the null hypothesis that p = 0.7; otherwise, we
conclude that p > 0.7.
(a) Evaluate α, assuming that p = 0.7.
(b) Evaluate β for the alternative p = 0.9.
10.10 Repeat Exercise 10.9 but assume that 100
spots are treated and the critical region is defined to
be x > 82, where x is the number of spots removed.
10.11 Repeat Exercise 10.8 but assume that 70 pa-
tients are given the mussel extract and the critical re-
gion is defined to be x < 24, where x is the number of
osteoarthritic patients who receive relief.
10.12 A random sample of 400 voters in a certain city
are asked if they favor an additional 4% gasoline sales
tax to provide badly needed revenues for street repairs.
If more than 220 but fewer than 260 favor the sales tax,
we shall conclude that 60% of the voters are for it.
(a) Find the probability of committing a type I error
if 60% of the voters favor the increased tax.
(b) What is the probability of committing a type II er-
ror using this test procedure if actually only 48%
of the voters are in favor of the additional gasoline
tax?
10.13 Suppose, in Exercise 10.12, we conclude that
60% of the voters favor the gasoline sales tax if more
than 214 but fewer than 266 voters in our sample fa-
vor it. Show that this new critical region results in a
smaller value for α at the expense of increasing β.
10.14 A manufacturer has developed a new fishing
line, which the company claims has a mean breaking
strength of 15 kilograms with a standard deviation of
0.5 kilogram. To test the hypothesis that μ = 15 kilo-
grams against the alternative that μ < 15 kilograms, a
random sample of 50 lines will be tested. The critical
region is defined to be x̄ < 14.9.
(a) Find the probability of committing a type I error
when H0 is true.
(b) Evaluate β for the alternatives μ = 14.8 and μ =
14.9 kilograms.
10.15 A soft-drink machine at a steak house is reg-
ulated so that the amount of drink dispensed is ap-
proximately normally distributed with a mean of 200
milliliters and a standard deviation of 15 milliliters.
The machine is checked periodically by taking a sam-
ple of 9 drinks and computing the average content. If
x̄ falls in the interval 191 < x̄ < 209, the machine is
thought to be operating satisfactorily; otherwise, we
conclude that μ ≠ 200 milliliters.
(a) Find the probability of committing a type I error
when μ = 200 milliliters.
(b) Find the probability of committing a type II error
when μ = 215 milliliters.
10.16 Repeat Exercise 10.15 for samples of size n =
25. Use the same critical region.
10.17 A new curing process developed for a certain
type of cement results in a mean compressive strength
of 5000 kilograms per square centimeter with a stan-
dard deviation of 120 kilograms. To test the hypothesis
that μ = 5000 against the alternative that μ < 5000,
a random sample of 50 pieces of cement is tested. The
critical region is defined to be x̄ < 4970.
(a) Find the probability of committing a type I error
when H0 is true.
(b) Evaluate β for the alternatives μ = 4970 and
μ = 4960.
10.18 If we plot the probabilities of failing to reject
H0 corresponding to various alternatives for μ (includ-
ing the value specified by H0) and connect all the
points by a smooth curve, we obtain the operating
characteristic curve of the test criterion, or simply
the OC curve. Note that the probability of failing to
reject H0 when it is true is simply 1 − α. Operating
characteristic curves are widely used in industrial ap-
plications to provide a visual display of the merits of
the test criterion. With reference to Exercise 10.15,
find the probabilities of failing to reject H0 for the fol-
lowing 9 values of μ and plot the OC curve: 184, 188,
192, 196, 200, 204, 208, 212, and 216.
10.4 Single Sample: Tests Concerning a Single Mean
In this section, we formally consider tests of hypotheses on a single population
mean. Many of the illustrations from previous sections involved tests on the mean,
so the reader should already have insight into some of the details that are outlined
here.
Tests on a Single Mean (Variance Known)
We should first describe the assumptions on which the experiment is based. The
model for the underlying situation centers around an experiment with X1, X2, . . . ,
Xn representing a random sample from a distribution with mean μ and variance
σ² > 0. Consider first the hypothesis
H0: μ = μ0,
H1: μ ≠ μ0.
The appropriate test statistic should be based on the random variable X̄. In
Chapter 8, the Central Limit Theorem was introduced, which essentially states
that regardless of the distribution of X, the random variable X̄ has approximately a
normal distribution with mean μ and variance σ²/n for reasonably large sample
sizes. So, μX̄ = μ and σX̄² = σ²/n. We can then determine a critical region based
on the computed sample average, x̄. It should be clear to the reader by now that
there will be a two-tailed critical region for the test.
Standardization of X̄
It is convenient to standardize X̄ and formally involve the standard normal
random variable Z, where
Z = (X̄ − μ)/(σ/√n).
We know that under H0, that is, if μ = μ0,
√n(X̄ − μ0)/σ follows an n(x; 0, 1)
distribution, and hence the expression
P(−zα/2 < (X̄ − μ0)/(σ/√n) < zα/2) = 1 − α
can be used to write an appropriate nonrejection region. The reader should keep
in mind that, formally, the critical region is designed to control α, the probability
of type I error. It should be obvious that a two-tailed signal of evidence is needed
to support H1. Thus, given a computed value x̄, the formal test involves rejecting
H0 if the computed test statistic z falls in the critical region described next.
Test Procedure
for a Single Mean
(Variance
Known)
z = (x̄ − μ0)/(σ/√n) > zα/2  or  z = (x̄ − μ0)/(σ/√n) < −zα/2
If −zα/2 < z < zα/2, do not reject H0. Rejection of H0, of course, implies
acceptance of the alternative hypothesis μ ≠ μ0. With this definition of the
critical region, it should be clear that there will be probability α of rejecting H0
(falling into the critical region) when, indeed, μ = μ0.
Although it is easier to understand the critical region written in terms of z,
we can write the same critical region in terms of the computed average x̄. The
following can be written as an identical decision procedure:
reject H0 if x̄ < a or x̄ > b,
where
a = μ0 − zα/2 σ/√n,  b = μ0 + zα/2 σ/√n.
Hence, for a significance level α, the critical values of the random variable z and x̄
are both depicted in Figure 10.9.
Figure 10.9: Critical region for the alternative hypothesis μ ≠ μ0.
Tests of one-sided hypotheses on the mean involve the same statistic described
in the two-sided case. The difference, of course, is that the critical region is only
in one tail of the standard normal distribution. For example, suppose that we seek
to test
H0: μ = μ0,
H1: μ > μ0.
The signal that favors H1 comes from large values of z. Thus, rejection of H0 results
when the computed z > zα. Obviously, if the alternative is H1: μ < μ0, the critical
region is entirely in the lower tail and thus rejection results from z < −zα. Although
in a one-sided testing case the null hypothesis can be written as H0 : μ ≤ μ0 or
H0: μ ≥ μ0, it is usually written as H0: μ = μ0.
The following two examples illustrate tests on means for the case in which σ is
known.
Example 10.3: A random sample of 100 recorded deaths in the United States during the past
year showed an average life span of 71.8 years. Assuming a population standard
deviation of 8.9 years, does this seem to indicate that the mean life span today is
greater than 70 years? Use a 0.05 level of significance.
Solution: 1. H0: μ = 70 years.
2. H1: μ > 70 years.
3. α = 0.05.
4. Critical region: z > 1.645, where z = (x̄ − μ0)/(σ/√n).
5. Computations: x̄ = 71.8 years, σ = 8.9 years, and hence
z = (71.8 − 70)/(8.9/√100) = 2.02.
6. Decision: Reject H0 and conclude that the mean life span today is greater
than 70 years.
The P-value corresponding to z = 2.02 is given by the area of the shaded region
in Figure 10.10.
Using Table A.3, we have
P = P(Z > 2.02) = 0.0217.
As a result, the evidence in favor of H1 is even stronger than that suggested by a
0.05 level of significance.
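Example 10.3 can be reproduced with a short standard-library Python sketch (the normal CDF is built from math.erf; the variable names are our own):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Data from Example 10.3: n = 100, x̄ = 71.8, σ = 8.9, H0: μ = 70 vs H1: μ > 70
n, xbar, sigma, mu0 = 100, 71.8, 8.9, 70.0
z = (xbar - mu0) / (sigma / sqrt(n))  # test statistic
p = 1.0 - phi(z)                      # upper-tail P-value

print(round(z, 2))   # 2.02
print(p)             # near the 0.0217 read from Table A.3
assert z > 1.645     # z falls in the critical region: reject H0
```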
Example 10.4: A manufacturer of sports equipment has developed a new synthetic fishing line that
the company claims has a mean breaking strength of 8 kilograms with a standard
deviation of 0.5 kilogram. Test the hypothesis that μ = 8 kilograms against the
alternative that μ ≠ 8 kilograms if a random sample of 50 lines is tested and found
to have a mean breaking strength of 7.8 kilograms. Use a 0.01 level of significance.
Solution: 1. H0: μ = 8 kilograms.
2. H1: μ ≠ 8 kilograms.
3. α = 0.01.
4. Critical region: z < −2.575 and z > 2.575, where z = (x̄ − μ0)/(σ/√n).
5. Computations: x̄ = 7.8 kilograms, n = 50, and hence
z = (7.8 − 8)/(0.5/√50) = −2.83.
6. Decision: Reject H0 and conclude that the average breaking strength is not
equal to 8 but is, in fact, less than 8 kilograms.
Since the test in this example is two tailed, the desired P-value is twice the
area of the shaded region in Figure 10.11 to the left of z = −2.83. Therefore, using
Table A.3, we have
P = P(|Z| > 2.83) = 2P(Z < −2.83) = 0.0046,
which allows us to reject the null hypothesis that μ = 8 kilograms at a level of
significance smaller than 0.01.
Figure 10.10: P-value for Example 10.3.
Figure 10.11: P-value for Example 10.4.
Relationship to Confidence Interval Estimation
The reader should realize by now that the hypothesis-testing approach to statistical
inference in this chapter is very closely related to the confidence interval approach in
Chapter 9. Confidence interval estimation involves computation of bounds within
which it is “reasonable” for the parameter in question to lie. For the case of a
single population mean μ with σ² known, the structure of both hypothesis testing
and confidence interval estimation is based on the random variable
Z = (X̄ − μ)/(σ/√n).
It turns out that the testing of H0: μ = μ0 against H1: μ ≠ μ0 at a significance level
α is equivalent to computing a 100(1 − α)% confidence interval on μ and rejecting
H0 if μ0 is outside the confidence interval. If μ0 is inside the confidence interval,
the hypothesis is not rejected. The equivalence is very intuitive and quite simple to
illustrate. Recall that with an observed value x̄, failure to reject H0 at significance
level α implies that
−zα/2 ≤ (x̄ − μ0)/(σ/√n) ≤ zα/2,
which is equivalent to
x̄ − zα/2 σ/√n ≤ μ0 ≤ x̄ + zα/2 σ/√n.
The equivalence of confidence interval estimation to hypothesis testing extends
to differences between two means, variances, ratios of variances, and so on. As a
result, the student of statistics should not consider confidence interval estimation
and hypothesis testing as separate forms of statistical inference. For example,
consider Example 9.2 on page 271. The 95% confidence interval on the mean is
given by the bounds (2.50, 2.70). Thus, with the same sample information, a two-
sided hypothesis on μ involving any hypothesized value between 2.50 and 2.70 will
not be rejected. As we turn to different areas of hypothesis testing, the equivalence
to the confidence interval estimation will continue to be exploited.
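The equivalence is easy to demonstrate numerically. The sketch below (standard-library Python, reusing the numbers of Example 10.4 with its α = 0.01 critical value zα/2 = 2.575) builds the confidence interval and checks whether μ0 lies inside it:

```python
from math import sqrt

# Numbers from Example 10.4: x̄ = 7.8, σ = 0.5, n = 50, H0: μ = 8, α = 0.01
xbar, sigma, n, mu0 = 7.8, 0.5, 50, 8.0
z_crit = 2.575  # zα/2 for α = 0.01

half_width = z_crit * sigma / sqrt(n)
lower, upper = xbar - half_width, xbar + half_width

# Rejecting H0 at level α is equivalent to μ0 falling outside the
# 100(1 − α)% confidence interval.
reject = not (lower <= mu0 <= upper)
print((round(lower, 3), round(upper, 3)))  # interval excludes 8
print(reject)                              # True, matching Example 10.4
```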
Tests on a Single Sample (Variance Unknown)
One would certainly suspect that tests on a population mean μ with σ² unknown,
like confidence interval estimation, should involve the use of the Student
t-distribution. Strictly speaking, the application of the Student t for both
confidence intervals and hypothesis testing is developed under the following
assumptions. The random variables X1, X2, . . . , Xn represent a random sample
from a normal distribution with unknown μ and σ². Then the random variable
√n(X̄ − μ)/S has a Student
t-distribution with n−1 degrees of freedom. The structure of the test is identical to
that for the case of σ known, with the exception that the value σ in the test statistic
is replaced by the computed estimate S and the standard normal distribution is
replaced by a t-distribution.
The t-Statistic
for a Test on a
Single Mean
(Variance
Unknown)
For the two-sided hypothesis
H0: μ = μ0,
H1: μ ≠ μ0,
we reject H0 at significance level α when the computed t-statistic
t = (x̄ − μ0)/(s/√n)
exceeds tα/2,n−1 or is less than −tα/2,n−1.
The reader should recall from Chapters 8 and 9 that the t-distribution is symmetric
around the value zero. Thus, this two-tailed critical region applies in a fashion
similar to that for the case of known σ. For the two-sided hypothesis at significance
level α, the two-tailed critical regions apply. For H1: μ > μ0, rejection results when
t > tα,n−1. For H1: μ < μ0, the critical region is given by t < −tα,n−1.
Example 10.5: The Edison Electric Institute has published figures on the number of kilowatt hours
used annually by various home appliances. It is claimed that a vacuum cleaner uses
an average of 46 kilowatt hours per year. If a random sample of 12 homes included
in a planned study indicates that vacuum cleaners use an average of 42 kilowatt
hours per year with a standard deviation of 11.9 kilowatt hours, does this suggest
at the 0.05 level of significance that vacuum cleaners use, on average, less than 46
kilowatt hours annually? Assume the population of kilowatt hours to be normal.
Solution: 1. H0: μ = 46 kilowatt hours.
2. H1: μ < 46 kilowatt hours.
3. α = 0.05.
4. Critical region: t < −1.796, where t = (x̄ − μ0)/(s/√n) with 11 degrees of freedom.
5. Computations: x̄ = 42 kilowatt hours, s = 11.9 kilowatt hours, and n = 12.
Hence,
t = (42 − 46)/(11.9/√12) = −1.16,  P = P(T < −1.16) ≈ 0.135.
6. Decision: Do not reject H0 and conclude that the average number of kilowatt
hours used annually by home vacuum cleaners is not significantly less than
46.
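Example 10.5 can likewise be checked with a short sketch (standard-library Python; only the t statistic is computed and compared with the tabled critical value −1.796, since the t CDF is not available in the standard library):

```python
from math import sqrt

# Summary statistics from Example 10.5: n = 12, x̄ = 42, s = 11.9, H0: μ = 46
n, xbar, s, mu0 = 12, 42.0, 11.9, 46.0
t = (xbar - mu0) / (s / sqrt(n))  # t statistic with n − 1 = 11 df

print(round(t, 2))                # -1.16
# Critical region for H1: μ < 46 at α = 0.05 is t < -1.796
print(t < -1.796)                 # False: do not reject H0
```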
Comment on the Single-Sample t-Test
The reader has probably noticed that the equivalence of the two-tailed t-test for
a single mean and the computation of a confidence interval on μ with σ replaced
by s is maintained. For example, consider Example 9.5 on page 275. Essentially,
we can view that computation as one in which we have found all values of μ0, the
hypothesized mean volume of containers of sulfuric acid, for which the hypothesis
H0: μ = μ0 will not be rejected at α = 0.05. Again, this is consistent with the
statement “Based on the sample information, values of the population mean volume
between 9.74 and 10.26 liters are not unreasonable.”
Comments regarding the normality assumption are worth emphasizing at this
point. We have indicated that when σ is known, the Central Limit Theorem
allows for the use of a test statistic or a confidence interval which is based on Z,
the standard normal random variable. Strictly speaking, of course, the Central
Limit Theorem, and thus the use of the standard normal distribution, does not
apply unless σ is known. In Chapter 8, the development of the t-distribution was
given. There we pointed out that normality on X1, X2, . . . , Xn was an underlying
assumption. Thus, strictly speaking, the Student’s t-tables of percentage points for
tests or confidence intervals should not be used unless it is known that the sample
comes from a normal population. In practice, σ can rarely be assumed to be known.
However, a very good estimate may be available from previous experiments. Many
statistics textbooks suggest that one can safely replace σ by s in the test statistic
z = (x̄ − μ0)/(σ/√n)
when n ≥ 30 with a bell-shaped population and still use the Z-tables for the
appropriate critical region. The implication here is that the Central Limit Theorem
is indeed being invoked and one is relying on the fact that s ≈ σ. Obviously, when
this is done, the results must be viewed as approximate. Thus, a computed P-
value (from the Z-distribution) of 0.15 may be 0.12 or perhaps 0.17, or a computed
confidence interval may be a 93% confidence interval rather than a 95% interval
as desired. Now what about situations where n ≤ 30? The user cannot rely on s
being close to σ, and in order to take into account the inaccuracy of the estimate,
the confidence interval should be wider or the critical value larger in magnitude.
The t-distribution percentage points accomplish this but are correct only when the
sample is from a normal distribution. Of course, normal probability plots can be
used to ascertain some sense of the deviation from normality in a data set.
For small samples, it is often difficult to detect deviations from a normal dis-
tribution. (Goodness-of-fit tests are discussed in a later section of this chapter.)
For bell-shaped distributions of the random variables X1, X2, . . . , Xn, the use of
the t-distribution for tests or confidence intervals is likely to produce quite good
results. When in doubt, the user should resort to nonparametric procedures, which
are presented in Chapter 16.
Annotated Computer Printout for Single-Sample t-Test
It should be of interest for the reader to see an annotated computer printout
showing the result of a single-sample t-test. Suppose that an engineer is interested
in testing the bias in a pH meter. Data are collected on a neutral substance (pH
= 7.0). A sample of measurements was taken, with the data as follows:
7.07 7.00 7.10 6.97 7.00 7.03 7.01 7.01 6.98 7.08
It is, then, of interest to test
H0: μ = 7.0,
H1: μ ≠ 7.0.
In this illustration, we use the computer package MINITAB to illustrate the anal-
ysis of the data set above. Notice the key components of the printout shown in
Figure 10.12. Of course, the mean ȳ is 7.0250, StDev is simply the sample standard
deviation s = 0.044, and SE Mean is the estimated standard error of the mean and
is computed as s/√n = 0.0139. The t-value is the ratio
(7.0250 − 7)/0.0139 = 1.80.
pH-meter
7.07 7.00 7.10 6.97 7.00 7.03 7.01 7.01 6.98 7.08
MTB > Onet 'pH-meter';
SUBC> Test 7.
One-Sample T: pH-meter
Test of mu = 7 vs not = 7
Variable N Mean StDev SE Mean 95% CI T P
pH-meter 10 7.02500 0.04403 0.01392 (6.99350, 7.05650) 1.80 0.106
Figure 10.12: MINITAB printout for one sample t-test for pH meter.
The P-value of 0.106 suggests results that are inconclusive. There is no evi-
dence suggesting a strong rejection of H0 (based on an α of 0.05 or 0.10), yet one
certainly cannot truly conclude that the pH meter is unbiased. Notice
that the sample size of 10 is rather small. An increase in sample size (perhaps an-
other experiment) may sort things out. A discussion regarding appropriate sample
size appears in Section 10.6.
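The quantities in the MINITAB printout of Figure 10.12 can be reproduced directly from the raw pH data with the Python standard library (a sketch; statistics.stdev computes the n − 1 sample standard deviation):

```python
from math import sqrt
from statistics import mean, stdev

ph = [7.07, 7.00, 7.10, 6.97, 7.00, 7.03, 7.01, 7.01, 6.98, 7.08]
mu0 = 7.0

n = len(ph)
xbar = mean(ph)        # 7.0250, the Mean column
s = stdev(ph)          # 0.04403, the StDev column
se = s / sqrt(n)       # 0.01392, the SE Mean column
t = (xbar - mu0) / se  # 1.80, the T column

print(round(xbar, 4), round(s, 5), round(se, 5), round(t, 2))
```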
10.5 Two Samples: Tests on Two Means
The reader should now understand the relationship between tests and confidence
intervals, and can rely heavily on details supplied by the confidence interval
material in Chapter 9. Tests concerning two means represent a set of very impor-
tant analytical tools for the scientist or engineer. The experimental setting is very
much like that described in Section 9.8. Two independent random samples of sizes
n1 and n2, respectively, are drawn from two populations with means μ1 and μ2
and variances σ1² and σ2². We know that the random variable
Z = [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2)
has a standard normal distribution. Here we are assuming that n1 and n2 are
sufficiently large that the Central Limit Theorem applies. Of course, if the two
populations are normal, the statistic above has a standard normal distribution
even for small n1 and n2. Obviously, if we can assume that σ1 = σ2 = σ, the
statistic above reduces to
Z = [(X̄1 − X̄2) − (μ1 − μ2)] / (σ√(1/n1 + 1/n2)).
The two statistics above serve as a basis for the development of the test procedures
involving two means. The equivalence between tests and confidence intervals, along
with the technical detail involving tests on one mean, allow a simple transition to
tests on two means.
The two-sided hypothesis on two means can be written generally as
H0: μ1 − μ2 = d0.
Obviously, the alternative can be two sided or one sided. Again, the distribu-
tion used is the distribution of the test statistic under H0. Values x̄1 and x̄2 are
computed and, for σ1 and σ2 known, the test statistic is given by
z = [(x̄1 − x̄2) − d0] / √(σ1²/n1 + σ2²/n2),
with a two-tailed critical region in the case of a two-sided alternative. That is,
reject H0 in favor of H1: μ1 − μ2 ≠ d0 if z > zα/2 or z < −zα/2. One-tailed critical
regions are used in the case of the one-sided alternatives. The reader should, as
before, study the test statistic and be satisfied that for, say, H1: μ1 − μ2 > d0, the
signal favoring H1 comes from large values of z. Thus, the upper-tailed critical
region applies.
Unknown But Equal Variances
The more prevalent situations involving tests on two means are those in which
variances are unknown. If the scientist involved is willing to assume that both
distributions are normal and that σ1 = σ2 = σ, the pooled t-test (often called the
two-sample t-test) may be used. The test statistic (see Section 9.8) is given by the
following test procedure.
Two-Sample
Pooled t-Test
For the two-sided hypothesis
H0: μ1 = μ2,
H1: μ1 ≠ μ2,
we reject H0 at significance level α when the computed t-statistic
t = [(x̄1 − x̄2) − d0] / (sp√(1/n1 + 1/n2)),
where
sp² = [s1²(n1 − 1) + s2²(n2 − 1)] / (n1 + n2 − 2),
exceeds tα/2,n1+n2−2 or is less than −tα/2,n1+n2−2.
Recall from Chapter 9 that the degrees of freedom for the t-distribution are a
result of pooling of information from the two samples to estimate σ². One-sided
alternatives suggest one-sided critical regions, as one might expect. For example,
for H1: μ1 − μ2 > d0, reject H0: μ1 − μ2 = d0 when t > tα,n1+n2−2.
Example 10.6: An experiment was performed to compare the abrasive wear of two different lami-
nated materials. Twelve pieces of material 1 were tested by exposing each piece to
a machine measuring wear. Ten pieces of material 2 were similarly tested. In each
case, the depth of wear was observed. The samples of material 1 gave an average
(coded) wear of 85 units with a sample standard deviation of 4, while the samples
of material 2 gave an average of 81 with a sample standard deviation of 5. Can
we conclude at the 0.05 level of significance that the abrasive wear of material 1
exceeds that of material 2 by more than 2 units? Assume the populations to be
approximately normal with equal variances.
Solution: Let μ1 and μ2 represent the population means of the abrasive wear for material 1
and material 2, respectively.
1. H0: μ1 − μ2 = 2.
2. H1: μ1 − μ2 > 2.
3. α = 0.05.
4. Critical region: t > 1.725, where t = [(x̄1 − x̄2) − d0]/(sp√(1/n1 + 1/n2)) with
v = 20 degrees of freedom.
5. Computations:
x̄1 = 85, s1 = 4, n1 = 12,
x̄2 = 81, s2 = 5, n2 = 10.
Hence
sp = √[((11)(16) + (9)(25))/(12 + 10 − 2)] = 4.478,
t = [(85 − 81) − 2] / (4.478√(1/12 + 1/10)) = 1.04,
P = P(T > 1.04) ≈ 0.16. (See Table A.4.)
6. Decision: Do not reject H0. We are unable to conclude that the abrasive wear
of material 1 exceeds that of material 2 by more than 2 units.
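The pooled computation in Example 10.6 can be verified from the summary statistics alone (a standard-library Python sketch; variable names are our own):

```python
from math import sqrt

# Summary statistics from Example 10.6
x1bar, s1, n1 = 85.0, 4.0, 12  # material 1
x2bar, s2, n2 = 81.0, 5.0, 10  # material 2
d0 = 2.0                       # H0: μ1 − μ2 = 2

# Pooled standard deviation and t statistic with n1 + n2 − 2 = 20 df
sp = sqrt((s1**2 * (n1 - 1) + s2**2 * (n2 - 1)) / (n1 + n2 - 2))
t = ((x1bar - x2bar) - d0) / (sp * sqrt(1/n1 + 1/n2))

print(round(sp, 3))  # 4.478
print(round(t, 2))   # 1.04
# Critical region at α = 0.05 with 20 df is t > 1.725
print(t > 1.725)     # False: do not reject H0
```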
Unknown But Unequal Variances
There are situations where the analyst is not able to assume that σ1 = σ2. Recall
from Section 9.8 that, if the populations are normal, the statistic
T′ = [(X̄1 − X̄2) − d0] / √(s1²/n1 + s2²/n2)
has an approximate t-distribution with approximate degrees of freedom
v = (s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1)].
As a result, the test procedure is to not reject H0 when
−tα/2,v < t′ < tα/2,v,
with v given as above. Again, as in the case of the pooled t-test, one-sided alter-
natives suggest one-sided critical regions.
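The degrees-of-freedom formula above can be sketched as a small function (standard-library Python; the summary statistics of Example 10.6 are reused purely for illustration, since that example actually assumed equal variances):

```python
def approx_df(s1, n1, s2, n2):
    """Approximate degrees of freedom v for the unequal-variance t-test."""
    a, b = s1**2 / n1, s2**2 / n2
    return (a + b)**2 / (a**2 / (n1 - 1) + b**2 / (n2 - 1))

# Illustration with s1 = 4, n1 = 12, s2 = 5, n2 = 10
v = approx_df(4.0, 12, 5.0, 10)
print(round(v, 1))  # a non-integer value; in practice v is rounded down
```

Note that v is generally not an integer; it is usually rounded down to the nearest integer before consulting the t-table.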
Paired Observations
A study of the two-sample t-test or confidence interval on the difference between
means should suggest the need for experimental design. Recall the discussion of
experimental units in Chapter 9, where it was suggested that the conditions of
the two populations (often referred to as the two treatments) should be assigned
randomly to the experimental units. This is done to avoid biased results due to
systematic differences between experimental units. In other words, in hypothesis-
testing jargon, it is important that any significant difference found between means
be due to the different conditions of the populations and not due to the exper-
imental units in the study. For example, consider Exercise 9.40 in Section 9.9.
The 20 seedlings play the role of the experimental units. Ten of them are to be
treated with nitrogen and 10 with no nitrogen. It may be very important that
this assignment to the “nitrogen” and “no-nitrogen” treatments be random to en-
sure that systematic differences between the seedlings do not interfere with a valid
comparison between the means.
In Example 10.6, time of measurement is the most likely choice for the experi-
mental unit. The 22 pieces of material should be measured in random order. We
need to guard against the possibility that wear measurements made close together
in time might tend to give similar results. Systematic (nonrandom) differences
in experimental units are not expected. However, random assignments guard
against the problem.
References to planning of experiments, randomization, choice of sample size,
and so on, will continue to influence much of the development in Chapters 13, 14,
and 15. Any scientist or engineer whose interest lies in analysis of real data should
study this material. The pooled t-test is extended in Chapter 13 to cover more
than two means.
Testing of two means can be accomplished when data are in the form of paired
observations, as discussed in Chapter 9. In this pairing structure, the conditions
of the two populations (treatments) are assigned randomly within homogeneous
units. Computation of the confidence interval for μ1 − μ2 in the situation with
paired observations is based on the random variable
T = (D̄ − μD) / (Sd/√n),
where D̄ and Sd are random variables representing the sample mean and standard
deviation of the differences of the observations in the experimental units. As in the
case of the pooled t-test, the assumption is that the observations from each popu-
lation are normal. This two-sample problem is essentially reduced to a one-sample
problem by using the computed differences d1, d2, . . . , dn. Thus, the hypothesis
reduces to
H0: μD = d0.
The computed test statistic is then given by
t = (d̄ − d0) / (sd/√n).
Critical regions are constructed using the t-distribution with n − 1 degrees of free-
dom.
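Since the paired procedure is just a one-sample t-test on the differences, the computation can be sketched in a few lines (helper name ours):

```python
import math

def paired_t(d, d0=0.0):
    """One-sample t on the differences d_i: returns (t, degrees of freedom)."""
    n = len(d)
    dbar = sum(d) / n
    sd = math.sqrt(sum((di - dbar) ** 2 for di in d) / (n - 1))
    return (dbar - d0) / (sd / math.sqrt(n)), n - 1

t, v = paired_t([1.0, 2.0, 3.0, 4.0, 5.0])
print(round(t, 3), v)  # 4.243 4
```

The computed t is then compared against critical values of the t-distribution with the returned v = n − 1 degrees of freedom.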
Problem of Interaction in a Paired t-Test
Not only will the case study that follows illustrate the use of the paired t-test but
the discussion will shed considerable light on the difficulties that arise when there
is an interaction between the treatments and the experimental units in the paired
t structure. Recall that interaction between factors was introduced in Section 1.7
in a discussion of general types of statistical studies. The concept of interaction
will be an important issue from Chapter 13 through Chapter 15.
There are some types of statistical tests in which the existence of interaction
results in difficulty. The paired t-test is one such example. In Section 9.9, the paired
structure was used in the computation of a confidence interval on the difference
between two means, and the advantage in pairing was revealed for situations in
which the experimental units are homogeneous. The pairing results in a reduction
in σD, the standard deviation of a difference Di = X1i − X2i, as discussed in
Section 9.9.
10.5 Two Samples: Tests on Two Means 347
If interaction exists between treatments and experimental units, the
advantage gained in pairing may be substantially reduced. Thus, in Example 9.13
on page 293, the no interaction assumption allowed the difference in mean TCDD
levels (plasma vs. fat tissue) to be the same across veterans. A quick glance at the
data would suggest that there is no significant violation of the assumption of no
interaction.
In order to demonstrate how interaction influences Var(D) and hence the quality
of the paired t-test, it is instructive to revisit the ith difference given by Di = X1i −
X2i = (μ1 − μ2) + (ε1i − ε2i), where X1i and X2i are taken on the ith experimental
unit. If the pairing unit is homogeneous, the errors in X1i and in X2i should be
similar and not independent. We noted in Chapter 9 that the positive covariance
between the errors results in a reduced Var(D). Thus, the size of the difference in
the treatments and the relationship between the errors in X1i and X2i contributed
by the experimental unit will tend to allow a significant difference to be detected.
What Conditions Result in Interaction?
Let us consider a situation in which the experimental units are not homogeneous.
Rather, consider the ith experimental unit with random variables X1i and X2i that
are not similar. Let ε1i and ε2i be random variables representing the errors in the
values X1i and X2i, respectively, at the ith unit. Thus, we may write
X1i = μ1 + ε1i and X2i = μ2 + ε2i.
The errors with expectation zero may tend to cause the response values X1i and
X2i to move in opposite directions, resulting in a negative value for Cov(ε1i, ε2i)
and hence negative Cov(X1i, X2i). In fact, the model may be complicated even more
by the fact that σε1² = Var(ε1i) ≠ σε2² = Var(ε2i). The variance and covariance
parameters may vary among the n experimental units. Thus, unlike in the
homogeneous case, Di will tend to be quite different across experimental units due
to the heterogeneous nature of the difference in ε1 − ε2 among the units. This
produces the interaction between treatments and units. In addition, for a specific
experimental unit (see Theorem 4.9),
σD² = Var(D) = Var(ε1) + Var(ε2) − 2 Cov(ε1, ε2)
is inflated by the negative covariance term, and thus the advantage gained in pairing
in the homogeneous unit case is lost in the case described here. While the inflation
in Var(D) will vary from case to case, there is a danger in some cases that the
increase in variance may neutralize any difference that exists between μ1 and μ2.
Of course, a large value of d̄ in the t-statistic may reflect a treatment difference
that overcomes the inflated variance estimate, sd².
Case Study 10.1: Blood Sample Data: In a study conducted in the Forestry and Wildlife De-
partment at Virginia Tech, J. A. Wesson examined the influence of the drug suc-
cinylcholine on the circulation levels of androgens in the blood. Blood samples
were taken from wild, free-ranging deer immediately after they had received an
intramuscular injection of succinylcholine administered using darts and a capture
gun. A second blood sample was obtained from each deer 30 minutes after the
first sample, after which the deer was released. The levels of androgens at time of
capture and 30 minutes later, measured in nanograms per milliliter (ng/mL), for
15 deer are given in Table 10.2.
Assuming that the populations of androgen levels at time of injection and 30
minutes later are normally distributed, test at the 0.05 level of significance whether
the androgen concentrations are altered after 30 minutes.
Table 10.2: Data for Case Study 10.1

                      Androgen (ng/mL)
Deer   At Time of Injection   30 Minutes after Injection      di
  1            2.76                      7.02                 4.26
  2            5.18                      3.10                −2.08
  3            2.68                      5.44                 2.76
  4            3.05                      3.99                 0.94
  5            4.10                      5.21                 1.11
  6            7.05                     10.26                 3.21
  7            6.60                     13.91                 7.31
  8            4.79                     18.53                13.74
  9            7.39                      7.91                 0.52
 10            7.30                      4.85                −2.45
 11           11.78                     11.10                −0.68
 12            3.90                      3.74                −0.16
 13           26.00                     94.03                68.03
 14           67.48                     94.03                26.55
 15           17.04                     41.70                24.66
Solution: Let μ1 and μ2 be the average androgen concentration at the time of injection and
30 minutes later, respectively. We proceed as follows:
1. H0: μ1 = μ2 or μD = μ1 − μ2 = 0.
2. H1: μ1 ≠ μ2 or μD = μ1 − μ2 ≠ 0.
3. α = 0.05.
4. Critical region: t < −2.145 and t > 2.145, where t = (d̄ − d0)/(sd/√n) with
v = 14 degrees of freedom.
5. Computations: The sample mean and standard deviation for the di are
d̄ = 9.848 and sd = 18.474.
Therefore,
t = (9.848 − 0)/(18.474/√15) = 2.06.
6. Though the t-statistic is not significant at the 0.05 level, from Table A.4,
P = P(|T| > 2.06) ≈ 0.06.
As a result, there is some evidence that there is a difference in mean circulating
levels of androgen.
The assumption of no interaction would imply that the effect on androgen
levels of the deer is roughly the same in the data for both treatments, i.e., at the
time of injection of succinylcholine and 30 minutes following injection. This can
be expressed with the two factors switching roles; for example, the difference in
treatments is roughly the same across the units (i.e., the deer). There certainly are
some deer/treatment combinations for which the no interaction assumption seems
to hold, but there is hardly any strong evidence that the experimental units are
homogeneous. However, the nature of the interaction and the resulting increase in
Var(D̄) appear to be dominated by a substantial difference in the treatments. This
is further demonstrated by the fact that 11 of the 15 deer exhibited positive signs
for the computed di and the negative di (for deer 2, 10, 11, and 12) are small in
magnitude compared to the 11 positive ones. Thus, it appears that the mean level
of androgen is significantly higher 30 minutes following injection than at injection,
and the conclusions may be stronger than p = 0.06 would suggest.
Annotated Computer Printout for Paired t-Test
Figure 10.13 displays a SAS computer printout for a paired t-test using the data
of Case Study 10.1. Notice that the printout looks like that for a single sample
t-test and, of course, that is exactly what is accomplished, since the test seeks to
determine if d is significantly different from zero.
Analysis Variable : Diff
N Mean Std Error t Value Pr > |t|
---------------------------------------------------------
15 9.8480000 4.7698699 2.06 0.0580
---------------------------------------------------------
Figure 10.13: SAS printout of paired t-test for data of Case Study 10.1.
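The arithmetic of Case Study 10.1 can be re-run in a few lines; only the differences di from Table 10.2 are needed (a sketch using the standard library; variable names are ours):

```python
import math
import statistics

# Differences d_i (30 minutes after injection minus at injection), Table 10.2.
d = [4.26, -2.08, 2.76, 0.94, 1.11, 3.21, 7.31, 13.74,
     0.52, -2.45, -0.68, -0.16, 68.03, 26.55, 24.66]

n = len(d)
dbar = statistics.mean(d)                 # sample mean of the differences
sd = statistics.stdev(d)                  # sample standard deviation
t = (dbar - 0.0) / (sd / math.sqrt(n))    # one-sample t against d0 = 0
print(round(dbar, 3), round(sd, 2), round(t, 2))  # 9.848 18.47 2.06
```

Comparing |t| = 2.06 against the t-distribution with 14 degrees of freedom reproduces the P ≈ 0.06 quoted in the solution.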
Summary of Test Procedures
As we complete the formal development of tests on population means, we offer
Table 10.3, which summarizes the test procedure for the cases of a single mean and
two means. Notice the approximate procedure when distributions are normal and
variances are unknown but not assumed to be equal. This statistic was introduced
in Chapter 9.
10.6 Choice of Sample Size for Testing Means
In Section 10.2, we demonstrated how the analyst can exploit relationships among
the sample size, the significance level α, and the power of the test to achieve
a certain standard of quality. In most practical circumstances, the experiment
should be planned, with a choice of sample size made prior to the data-taking
process if possible. The sample size is usually determined to achieve good power
for a fixed α and fixed specific alternative. This fixed alternative may be in the
form of μ − μ0 in the case of a hypothesis involving a single mean or μ1 − μ2 in
the case of a problem involving two means. Specific cases will provide illustrations.

Table 10.3: Tests Concerning Means

H0: μ = μ0 (σ known)
    Test statistic: z = (x̄ − μ0)/(σ/√n)
    H1: μ < μ0     Critical region: z < −zα
    H1: μ > μ0     Critical region: z > zα
    H1: μ ≠ μ0     Critical region: z < −zα/2 or z > zα/2

H0: μ = μ0 (σ unknown), v = n − 1
    Test statistic: t = (x̄ − μ0)/(s/√n)
    H1: μ < μ0     Critical region: t < −tα
    H1: μ > μ0     Critical region: t > tα
    H1: μ ≠ μ0     Critical region: t < −tα/2 or t > tα/2

H0: μ1 − μ2 = d0 (σ1 and σ2 known)
    Test statistic: z = [(x̄1 − x̄2) − d0]/√(σ1²/n1 + σ2²/n2)
    H1: μ1 − μ2 < d0     Critical region: z < −zα
    H1: μ1 − μ2 > d0     Critical region: z > zα
    H1: μ1 − μ2 ≠ d0     Critical region: z < −zα/2 or z > zα/2

H0: μ1 − μ2 = d0 (σ1 = σ2 but unknown), v = n1 + n2 − 2,
    sp² = [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2)
    Test statistic: t = [(x̄1 − x̄2) − d0]/[sp√(1/n1 + 1/n2)]
    H1: μ1 − μ2 < d0     Critical region: t < −tα
    H1: μ1 − μ2 > d0     Critical region: t > tα
    H1: μ1 − μ2 ≠ d0     Critical region: t < −tα/2 or t > tα/2

H0: μ1 − μ2 = d0 (σ1 ≠ σ2 and unknown),
    v = (s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1)]
    Test statistic: t′ = [(x̄1 − x̄2) − d0]/√(s1²/n1 + s2²/n2)
    H1: μ1 − μ2 < d0     Critical region: t′ < −tα
    H1: μ1 − μ2 > d0     Critical region: t′ > tα
    H1: μ1 − μ2 ≠ d0     Critical region: t′ < −tα/2 or t′ > tα/2

H0: μD = d0 (paired observations), v = n − 1
    Test statistic: t = (d̄ − d0)/(sd/√n)
    H1: μD < d0     Critical region: t < −tα
    H1: μD > d0     Critical region: t > tα
    H1: μD ≠ d0     Critical region: t < −tα/2 or t > tα/2
Suppose that we wish to test the hypothesis
H0 : μ = μ0,
H1 : μ > μ0,
with a significance level α, when the variance σ² is known. For a specific alternative,
say μ = μ0 + δ, the power of our test is shown in Figure 10.14 to be
1 − β = P(X̄ > a when μ = μ0 + δ).
Therefore,
β = P(X̄ < a when μ = μ0 + δ)
  = P[ (X̄ − (μ0 + δ))/(σ/√n) < (a − (μ0 + δ))/(σ/√n) when μ = μ0 + δ ].
10.6 Choice of Sample Size for Testing Means 351
Figure 10.14: Testing μ = μ0 versus μ = μ0 + δ.
Under the alternative hypothesis μ = μ0 + δ, the statistic
(X̄ − (μ0 + δ))/(σ/√n)
is the standard normal variable Z. So
β = P[ Z < (a − μ0)/(σ/√n) − δ/(σ/√n) ] = P( Z < zα − δ√n/σ ),
from which we conclude that
−zβ = zα − δ√n/σ,
and hence
Choice of sample size:   n = (zα + zβ)² σ²/δ²,
a result that is also true when the alternative hypothesis is μ < μ0.
In the case of a two-tailed test, we obtain the power 1 − β for a specified
alternative when
n ≈ (zα/2 + zβ)² σ²/δ².
Example 10.7: Suppose that we wish to test the hypothesis
H0: μ = 68 kilograms,
H1: μ > 68 kilograms
for the weights of male students at a certain college, using an α = 0.05 level of
significance, when it is known that σ = 5. Find the sample size required if the
power of our test is to be 0.95 when the true mean is 69 kilograms.
Solution: Since α = β = 0.05, we have zα = zβ = 1.645. For the alternative μ = 69, we take
δ = 1 and then
n = (1.645 + 1.645)²(25)/1 = 270.6.
Therefore, 271 observations are required if the test is to reject the null hypothesis
95% of the time when, in fact, μ is as large as 69 kilograms.
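The sample-size formula is simple enough to wrap as a helper; the sketch below (function name ours) takes the required quantiles from Python's statistics.NormalDist and rounds up:

```python
import math
from statistics import NormalDist

def sample_size_one_mean(alpha, beta, sigma, delta, two_tailed=False):
    """n = (z_alpha + z_beta)^2 * sigma^2 / delta^2, rounded up.
    For a two-tailed test, z_alpha is replaced by z_{alpha/2}."""
    z_a = NormalDist().inv_cdf(1 - (alpha / 2 if two_tailed else alpha))
    z_b = NormalDist().inv_cdf(1 - beta)
    return math.ceil((z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Example 10.7: alpha = 0.05 one-tailed, power 0.95, sigma = 5, delta = 1.
print(sample_size_one_mean(0.05, 0.05, 5, 1))  # 271
```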
Two-Sample Case
A similar procedure can be used to determine the sample size n = n1 = n2 required
for a specific power of the test in which two population means are being compared.
For example, suppose that we wish to test the hypothesis
H0: μ1 − μ2 = d0,
H1: μ1 − μ2 ≠ d0,
when σ1 and σ2 are known. For a specific alternative, say μ1 − μ2 = d0 + δ, the
power of our test is shown in Figure 10.15 to be
1 − β = P(|X̄1 − X̄2| > a when μ1 − μ2 = d0 + δ).
Figure 10.15: Testing μ1 − μ2 = d0 versus μ1 − μ2 = d0 + δ.
Therefore,
β = P(−a < X̄1 − X̄2 < a when μ1 − μ2 = d0 + δ)
  = P[ (−a − (d0 + δ))/√((σ1² + σ2²)/n) < ((X̄1 − X̄2) − (d0 + δ))/√((σ1² + σ2²)/n)
      < (a − (d0 + δ))/√((σ1² + σ2²)/n) when μ1 − μ2 = d0 + δ ].
Under the alternative hypothesis μ1 − μ2 = d0 + δ, the statistic
(X̄1 − X̄2 − (d0 + δ))/√((σ1² + σ2²)/n)
is the standard normal variable Z. Now, writing
−zα/2 = (−a − d0)/√((σ1² + σ2²)/n)  and  zα/2 = (a − d0)/√((σ1² + σ2²)/n),
we have
β = P[ −zα/2 − δ/√((σ1² + σ2²)/n) < Z < zα/2 − δ/√((σ1² + σ2²)/n) ],
from which we conclude that
−zβ ≈ zα/2 − δ/√((σ1² + σ2²)/n),
and hence
n ≈ (zα/2 + zβ)² (σ1² + σ2²)/δ².
For the one-tailed test, the expression for the required sample size when n = n1 =
n2 is
Choice of sample size:   n = (zα + zβ)² (σ1² + σ2²)/δ².
When the population variance (or variances, in the two-sample situation) is un-
known, the choice of sample size is not straightforward. In testing the hypothesis
μ = μ0 when the true value is μ = μ0 + δ, the statistic
(X̄ − (μ0 + δ))/(S/√n)
does not follow the t-distribution, as one might expect, but instead follows the
noncentral t-distribution. However, tables or charts based on the noncentral
t-distribution do exist for determining the appropriate sample size if some estimate
of σ is available or if δ is a multiple of σ. Table A.8 gives the sample sizes needed
to control the values of α and β for various values of
Δ = |δ|/σ = |μ − μ0|/σ
for both one- and two-tailed tests. In the case of the two-sample t-test in which the
variances are unknown but assumed equal, we obtain the sample sizes n = n1 = n2
needed to control the values of α and β for various values of
Δ = |δ|/σ = |μ1 − μ2 − d0|/σ
from Table A.9.
Example 10.8: In comparing the performance of two catalysts on the effect of a reaction yield, a
two-sample t-test is to be conducted with α = 0.05. The variances in the yields
are considered to be the same for the two catalysts. How large a sample for each
catalyst is needed to test the hypothesis
H0: μ1 = μ2,
H1: μ1 ≠ μ2
if it is essential to detect a difference of 0.8σ between the catalysts with probability
0.9?
Solution: From Table A.9, with α = 0.05 for a two-tailed test, β = 0.1, and
Δ = |0.8σ|/σ = 0.8,
we find the required sample size to be n = 34.
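For comparison, the normal-theory formula from earlier in this section can be sketched as a helper (names ours). For the setup of Example 10.8 (taking σ = 1 in both groups so that δ = 0.8σ = 0.8) it gives n = 33, slightly below the n = 34 from Table A.9, which is based on the noncentral t-distribution rather than the normal approximation:

```python
import math
from statistics import NormalDist

def sample_size_two_means(alpha, beta, var1, var2, delta, two_tailed=True):
    """Normal-theory approximation n = (z + z_beta)^2 (sigma1^2 + sigma2^2)/delta^2
    per group, rounded up (z = z_{alpha/2} for a two-tailed test)."""
    z_a = NormalDist().inv_cdf(1 - (alpha / 2 if two_tailed else alpha))
    z_b = NormalDist().inv_cdf(1 - beta)
    return math.ceil((z_a + z_b) ** 2 * (var1 + var2) / delta ** 2)

# alpha = 0.05 two-tailed, power 0.9, sigma1 = sigma2 = 1, delta = 0.8.
print(sample_size_two_means(0.05, 0.10, 1.0, 1.0, 0.8))  # 33
```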
In practical situations, it might be difficult to force a scientist or engineer
to make a commitment on information from which a value of Δ can be found.
The reader is reminded that the Δ-value quantifies the kind of difference between
the means that the scientist considers important, that is, a difference considered
significant from a scientific, not a statistical, point of view. Example 10.8 illustrates
how this choice is often made, namely, by selecting a fraction of σ. Obviously, if
the sample size is based on a choice of |δ| that is a small fraction of σ, the resulting
sample size may be quite large compared to what the study allows.
10.7 Graphical Methods for Comparing Means
In Chapter 1, considerable attention was directed to displaying data in graphical
form, such as stem-and-leaf plots and box-and-whisker plots. In Section 8.8, quan-
tile plots and quantile-quantile normal plots were used to provide a “picture” to
summarize a set of experimental data. Many computer software packages produce
graphical displays. As we proceed to other forms of data analysis (e.g., regression
analysis and analysis of variance), graphical methods become even more informa-
tive.
Graphical aids cannot be used as a replacement for the test procedure itself.
Certainly, the value of the test statistic indicates the proper type of evidence in
support of H0 or H1. However, a pictorial display provides a good illustration and
is often a better communicator of evidence to the beneficiary of the analysis. Also,
a picture will often clarify why a significant difference was found. Failure of an
important assumption may be exposed by a summary type of graphical tool.
For the comparison of means, side-by-side box-and-whisker plots provide a
telling display. The reader should recall that these plots display the 25th per-
centile, 75th percentile, and the median in a data set. In addition, the whiskers
display the extremes in a data set. Consider Exercise 10.40 at the end of this sec-
tion. Plasma ascorbic acid levels were measured in two groups of pregnant women,
smokers and nonsmokers. Figure 10.16 shows the box-and-whisker plots for both
groups of women. Two things are very apparent. Taking into account variability,
there appears to be a negligible difference in the sample means. In addition, the
variability in the two groups appears to be somewhat different. Of course, the
analyst must keep in mind the rather sizable differences between the sample sizes
in this case.
Figure 10.16: Two box-and-whisker plots of plasma ascorbic acid in smokers and nonsmokers.

Figure 10.17: Two box-and-whisker plots of seedling data.
Consider Exercise 9.40 in Section 9.9. Figure 10.17 shows the multiple box-
and-whisker plot for the data on 10 seedlings, half given nitrogen and half given
no nitrogen. The display reveals a smaller variability for the group containing no
nitrogen. In addition, the lack of overlap of the box plots suggests a significant
difference between the mean stem weights for the two groups. It would appear
that the presence of nitrogen increases the stem weights and perhaps increases the
variability in the weights.
There are no certain rules of thumb regarding when two box-and-whisker plots
give evidence of significant difference between the means. However, a rough guide-
line is that if the 25th percentile line for one sample exceeds the median line for
the other sample, there is strong evidence of a difference between means.
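That rough guideline is easy to automate as a quick screen (a sketch with a hypothetical helper and made-up data; it is not a substitute for the formal test):

```python
import statistics

def boxplot_separation(sample_a, sample_b):
    """Rough screen from the guideline above: flag 'strong evidence' when the
    25th percentile of the higher sample lies above the median of the lower one."""
    lo, hi = sorted([sample_a, sample_b], key=statistics.median)
    q1_hi = statistics.quantiles(hi, n=4)[0]  # 25th percentile of higher sample
    return q1_hi > statistics.median(lo)

# Well-separated samples trigger the flag; heavily overlapping ones do not.
print(boxplot_separation([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]))  # True
print(boxplot_separation([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))   # False
```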
More emphasis is placed on graphical methods in a real-life case study presented
later in this chapter.
Annotated Computer Printout for Two-Sample t-Test
Consider once again Exercise 9.40 on page 294, where seedling data under condi-
tions of nitrogen and no nitrogen were collected. Test
H0: μNIT = μNON,
H1: μNIT > μNON,
where the population means indicate mean weights. Figure 10.18 is an annotated
computer printout generated using the SAS package. Notice that sample standard
deviation and standard error are shown for both samples. The t-statistics under the
assumption of equal variance and unequal variance are both given. From the box-
and-whisker plot of Figure 10.17 it would certainly appear that the equal variance
assumption is violated. A P-value of 0.0229 suggests a conclusion of unequal means.
This concurs with the diagnostic information given in Figure 10.18. Incidentally,
notice that t and t′ are equal in this case, since n1 = n2.
TTEST Procedure
Variable Weight
Mineral N Mean Std Dev Std Err
No nitrogen 10 0.3990 0.0728 0.0230
Nitrogen 10 0.5650 0.1867 0.0591
Variances DF t Value Pr > |t|
Equal 18 2.62 0.0174
Unequal 11.7 2.62 0.0229
Test the Equality of Variances
Variable Num DF Den DF F Value Pr > F
Weight 9 9 6.58 0.0098
Figure 10.18: SAS printout for two-sample t-test.
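The equal- and unequal-variance t values in Figure 10.18 can be reproduced from the printed summary statistics alone; the sketch below uses only the standard library (variable names are ours):

```python
import math

# Summary statistics from Figure 10.18 (weight, n = 10 per group).
n1, xbar1, s1 = 10, 0.5650, 0.1867   # nitrogen
n2, xbar2, s2 = 10, 0.3990, 0.0728   # no nitrogen

# Pooled (equal-variance) t with v = n1 + n2 - 2 = 18 degrees of freedom.
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
t_pooled = (xbar1 - xbar2) / (sp * math.sqrt(1/n1 + 1/n2))

# Welch t' with Satterthwaite degrees of freedom.
se1, se2 = s1**2 / n1, s2**2 / n2
t_welch = (xbar1 - xbar2) / math.sqrt(se1 + se2)
v = (se1 + se2)**2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))

print(round(t_pooled, 2), round(t_welch, 2), round(v, 1))  # 2.62 2.62 11.7
```

As the text notes, the two t values coincide here because the sample sizes are equal; only the degrees of freedom (18 versus 11.7) differ.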
Exercises
10.19 In a research report, Richard H. Weindruch of
the UCLA Medical School claims that mice with an
average life span of 32 months will live to be about 40
months old when 40% of the calories in their diet are
replaced by vitamins and protein. Is there any reason
to believe that μ < 40 if 64 mice that are placed on
this diet have an average life of 38 months with a stan-
dard deviation of 5.8 months? Use a P-value in your
conclusion.
10.20 A random sample of 64 bags of white ched-
dar popcorn weighed, on average, 5.23 ounces with a
standard deviation of 0.24 ounce. Test the hypothesis
that μ = 5.5 ounces against the alternative hypothesis,
μ < 5.5 ounces, at the 0.05 level of significance.
10.21 An electrical firm manufactures light bulbs
that have a lifetime that is approximately normally
distributed with a mean of 800 hours and a standard
deviation of 40 hours. Test the hypothesis that μ = 800
hours against the alternative, μ ≠ 800 hours, if a ran-
dom sample of 30 bulbs has an average life of 788 hours.
Use a P-value in your answer.
10.22 In the American Heart Association journal Hy-
pertension, researchers report that individuals who
practice Transcendental Meditation (TM) lower their
blood pressure significantly. If a random sample of 225
male TM practitioners meditate for 8.5 hours per week
with a standard deviation of 2.25 hours, does that sug-
gest that, on average, men who use TM meditate more
than 8 hours per week? Quote a P-value in your con-
clusion.
10.23 Test the hypothesis that the average content
of containers of a particular lubricant is 10 liters if the
contents of a random sample of 10 containers are 10.2,
9.7, 10.1, 10.3, 10.1, 9.8, 9.9, 10.4, 10.3, and 9.8 liters.
Use a 0.01 level of significance and assume that the
distribution of contents is normal.
10.24 The average height of females in the freshman
class of a certain college has historically been 162.5 cen-
timeters with a standard deviation of 6.9 centimeters.
Is there reason to believe that there has been a change
in the average height if a random sample of 50 females
in the present freshman class has an average height of
165.2 centimeters? Use a P-value in your conclusion.
Assume the standard deviation remains the same.
10.25 It is claimed that automobiles are driven on
average more than 20,000 kilometers per year. To test
this claim, 100 randomly selected automobile owners
are asked to keep a record of the kilometers they travel.
Would you agree with this claim if the random sample
showed an average of 23,500 kilometers and a standard
deviation of 3900 kilometers? Use a P-value in your
conclusion.
10.26 According to a dietary study, high sodium in-
take may be related to ulcers, stomach cancer, and
migraine headaches. The human requirement for salt
is only 220 milligrams per day, which is surpassed in
most single servings of ready-to-eat cereals. If a ran-
dom sample of 20 similar servings of a certain cereal
has a mean sodium content of 244 milligrams and a
standard deviation of 24.5 milligrams, does this sug-
gest at the 0.05 level of significance that the average
sodium content for a single serving of such cereal is
greater than 220 milligrams? Assume the distribution
of sodium contents to be normal.
10.27 A study at the University of Colorado at Boul-
der shows that running increases the percent resting
metabolic rate (RMR) in older women. The average
RMR of 30 elderly women runners was 34.0% higher
than the average RMR of 30 sedentary elderly women,
and the standard deviations were reported to be 10.5
and 10.2%, respectively. Was there a significant in-
crease in RMR of the women runners over the seden-
tary women? Assume the populations to be approxi-
mately normally distributed with equal variances. Use
a P-value in your conclusions.
10.28 According to Chemical Engineering, an impor-
tant property of fiber is its water absorbency. The aver-
age percent absorbency of 25 randomly selected pieces
of cotton fiber was found to be 20 with a standard de-
viation of 1.5. A random sample of 25 pieces of acetate
yielded an average percent of 12 with a standard devi-
ation of 1.25. Is there strong evidence that the popula-
tion mean percent absorbency is significantly higher for
cotton fiber than for acetate? Assume that the percent
absorbency is approximately normally distributed and
that the population variances in percent absorbency
for the two fibers are the same. Use a significance level
of 0.05.
10.29 Past experience indicates that the time re-
quired for high school seniors to complete a standard-
ized test is a normal random variable with a mean of 35
minutes. If a random sample of 20 high school seniors
took an average of 33.1 minutes to complete this test
with a standard deviation of 4.3 minutes, test the hy-
pothesis, at the 0.05 level of significance, that μ = 35
minutes against the alternative that μ < 35 minutes.
10.30 A random sample of size n1 = 25, taken from a
normal population with a standard deviation σ1 = 5.2,
has a mean x̄1 = 81. A second random sample of size
n2 = 36, taken from a different normal population with
a standard deviation σ2 = 3.4, has a mean x̄2 = 76.
Test the hypothesis that μ1 = μ2 against the alterna-
tive, μ1 ≠ μ2. Quote a P-value in your conclusion.
10.31 A manufacturer claims that the average ten-
sile strength of thread A exceeds the average tensile
strength of thread B by at least 12 kilograms. To test
this claim, 50 pieces of each type of thread were tested
under similar conditions. Type A thread had an aver-
age tensile strength of 86.7 kilograms with a standard
deviation of 6.28 kilograms, while type B thread had
an average tensile strength of 77.8 kilograms with a
standard deviation of 5.61 kilograms. Test the manu-
facturer’s claim using a 0.05 level of significance.
10.32 Amstat News (December 2004) lists median
salaries for associate professors of statistics at research
institutions and at liberal arts and other institutions
in the United States. Assume that a sample of 200
associate professors from research institutions has an
average salary of $70,750 per year with a standard de-
viation of $6000. Assume also that a sample of 200 as-
sociate professors from other types of institutions has
an average salary of $65,200 with a standard deviation
of $5000. Test the hypothesis that the mean salary
for associate professors in research institutions is $2000
higher than for those in other institutions. Use a 0.01
level of significance.
10.33 A study was conducted to see if increasing the
substrate concentration has an appreciable effect on
the velocity of a chemical reaction. With a substrate
concentration of 1.5 moles per liter, the reaction was
run 15 times, with an average velocity of 7.5 micro-
moles per 30 minutes and a standard deviation of 1.5.
With a substrate concentration of 2.0 moles per liter,
12 runs were made, yielding an average velocity of 8.8
micromoles per 30 minutes and a sample standard de-
viation of 1.2. Is there any reason to believe that this
increase in substrate concentration causes an increase
in the mean velocity of the reaction of more than 0.5
micromole per 30 minutes? Use a 0.01 level of signifi-
cance and assume the populations to be approximately
normally distributed with equal variances.
10.34 A study was made to determine if the subject
matter in a physics course is better understood when a
lab constitutes part of the course. Students were ran-
domly selected to participate in either a 3-semester-
hour course without labs or a 4-semester-hour course
with labs. In the section with labs, 11 students made
an average grade of 85 with a standard deviation of 4.7,
and in the section without labs, 17 students made an
average grade of 79 with a standard deviation of 6.1.
Would you say that the laboratory course increases the
average grade by as much as 8 points? Use a P-value in
your conclusion and assume the populations to be ap-
proximately normally distributed with equal variances.
10.35 To find out whether a new serum will arrest
leukemia, 9 mice, all with an advanced stage of the
disease, are selected. Five mice receive the treatment
and 4 do not. Survival times, in years, from the time
the experiment commenced are as follows:
Treatment 2.1 5.3 1.4 4.6 0.9
No Treatment 1.9 0.5 2.8 3.1
At the 0.05 level of significance, can the serum be said
to be effective? Assume the two populations to be nor-
mally distributed with equal variances.
10.36 Engineers at a large automobile manufactur-
ing company are trying to decide whether to purchase
brand A or brand B tires for the company’s new mod-
els. To help them arrive at a decision, an experiment
is conducted using 12 of each brand. The tires are run
until they wear out. The results are as follows:
Brand A : x̄1 = 37,900 kilometers,
s1 = 5100 kilometers.
Brand B : x̄2 = 39,800 kilometers,
s2 = 5900 kilometers.
Test the hypothesis that there is no difference in the
average wear of the two brands of tires. Assume the
populations to be approximately normally distributed
with equal variances. Use a P-value.
10.37 In Exercise 9.42 on page 295, test the hypoth-
esis that the fuel economy of Volkswagen mini-trucks,
on average, exceeds that of similarly equipped Toyota
mini-trucks by 4 kilometers per liter. Use a 0.10 level
of significance.
10.38 A UCLA researcher claims that the average life
span of mice can be extended by as much as 8 months
when the calories in their diet are reduced by approx-
imately 40% from the time they are weaned. The re-
stricted diets are enriched to normal levels by vitamins
and protein. Suppose that a random sample of 10 mice
is fed a normal diet and has an average life span of 32.1
months with a standard deviation of 3.2 months, while
a random sample of 15 mice is fed the restricted diet
and has an average life span of 37.6 months with a
standard deviation of 2.8 months. Test the hypothesis,
at the 0.05 level of significance, that the average life
span of mice on this restricted diet is increased by 8
months against the alternative that the increase is less
than 8 months. Assume the distributions of life spans
for the regular and restricted diets are approximately
normal with equal variances.
10.39 The following data represent the running times
of films produced by two motion-picture companies:
Company Time (minutes)
1 102 86 98 109 92
2 81 165 97 134 92 87 114
Test the hypothesis that the average running time of
films produced by company 2 exceeds the average run-
ning time of films produced by company 1 by 10 min-
utes against the one-sided alternative that the differ-
ence is less than 10 minutes. Use a 0.1 level of sig-
nificance and assume the distributions of times to be
approximately normal with unequal variances.
10.40 In a study conducted at Virginia Tech, the
plasma ascorbic acid levels of pregnant women were
compared for smokers versus nonsmokers. Thirty-two
women in the last three months of pregnancy, free of
major health disorders and ranging in age from 15 to
32 years, were selected for the study. Prior to the col-
lection of 20 ml of blood, the participants were told to
avoid breakfast, forgo their vitamin supplements, and
avoid foods high in ascorbic acid content. From the
blood samples, the following plasma ascorbic acid val-
ues were determined, in milligrams per 100 milliliters:
Plasma Ascorbic Acid Values
    Nonsmokers        Smokers
  0.97    1.16         0.48
  0.72    0.86         0.71
  1.00    0.85         0.98
  0.81    0.58         0.68
  0.62    0.57         1.18
  1.32    0.64         1.36
  1.24    0.98         0.78
  0.99    1.09         1.64
  0.90    0.92
  0.74    0.78
  0.88    1.24
  0.94    1.18
Is there sufficient evidence to conclude that there is a
difference between plasma ascorbic acid levels of smok-
ers and nonsmokers? Assume that the two sets of data
came from normal populations with unequal variances.
Use a P-value.
10.41 A study was conducted by the Department of
Zoology at Virginia Tech to determine if there is a
significant difference in the density of organisms at
two different stations located on Cedar Run, a sec-
ondary stream in the Roanoke River drainage basin.
Sewage from a sewage treatment plant and overflow
from the Federal Mogul Corporation settling pond en-
ter the stream near its headwaters. The following data
give the density measurements, in number of organisms
per square meter, at the two collecting stations:
Number of Organisms per Square Meter
Station 1 Station 2
5030 4980 2800 2810
13,700 11,910 4670 1330
10,730 8130 6890 3320
11,400 26,850 7720 1230
860 17,660 7030 2130
2200 22,800 7330 2190
4250 1130
15,040 1690
Can we conclude, at the 0.05 level of significance, that
the average densities at the two stations are equal?
Assume that the observations come from normal pop-
ulations with different variances.
10.42 Five samples of a ferrous-type substance were
used to determine if there is a difference between a
laboratory chemical analysis and an X-ray fluorescence
analysis of the iron content. Each sample was split into
two subsamples and the two types of analysis were ap-
plied. Following are the coded data showing the iron
content analysis:
Exercises 359
Sample
Analysis 1 2 3 4 5
X-ray 2.0 2.0 2.3 2.1 2.4
Chemical 2.2 1.9 2.5 2.3 2.4
Assuming that the populations are normal, test at the
0.05 level of significance whether the two methods of
analysis give, on the average, the same result.
10.43 According to published reports, practice un-
der fatigued conditions distorts mechanisms that gov-
ern performance. An experiment was conducted using
15 college males, who were trained to make a continu-
ous horizontal right-to-left arm movement from a mi-
croswitch to a barrier, knocking over the barrier co-
incident with the arrival of a clock sweephand to the
6 o’clock position. The absolute value of the differ-
ence between the time, in milliseconds, that it took to
knock over the barrier and the time for the sweephand
to reach the 6 o’clock position (500 msec) was recorded.
Each participant performed the task five times under
prefatigue and postfatigue conditions, and the sums of
the absolute differences for the five performances were
recorded.
Absolute Time Differences
Subject   Prefatigue   Postfatigue
1         158          91
2         92           59
3         65           215
4         98           226
5         33           223
6         89           91
7         148          92
8         58           177
9         142          134
10        117          116
11        74           153
12        66           219
13        109          143
14        57           164
15        85           100
An increase in the mean absolute time difference when
the task is performed under postfatigue conditions
would support the claim that practice under fatigued
conditions distorts mechanisms that govern perfor-
mance. Assuming the populations to be normally dis-
tributed, test this claim.
10.44 In a study conducted by the Department of
Human Nutrition and Foods at Virginia Tech, the fol-
lowing data were recorded on sorbic acid residuals, in
parts per million, in ham immediately after dipping in
a sorbate solution and after 60 days of storage:
Sorbic Acid Residuals in Ham
Slice   Before Storage   After Storage
1       224              116
2       270              96
3       400              239
4       444              329
5       590              437
6       660              597
7       1400             689
8       680              576
Assuming the populations to be normally distributed,
is there sufficient evidence, at the 0.05 level of signifi-
cance, to say that the length of storage influences sorbic
acid residual concentrations?
10.45 A taxi company manager is trying to decide
whether the use of radial tires instead of regular
belted tires improves fuel economy. Twelve cars were
equipped with radial tires and driven over a prescribed
test course. Without changing drivers, the same cars
were then equipped with regular belted tires and driven
once again over the test course. The gasoline consump-
tion, in kilometers per liter, was recorded as follows:
Kilometers per Liter
Car Radial Tires Belted Tires
1 4.2 4.1
2 4.7 4.9
3 6.6 6.2
4 7.0 6.9
5 6.7 6.8
6 4.5 4.4
7 5.7 5.7
8 6.0 5.8
9 7.4 6.9
10 4.9 4.7
11 6.1 6.0
12 5.2 4.9
Can we conclude that cars equipped with radial tires
give better fuel economy than those equipped with
belted tires? Assume the populations to be normally
distributed. Use a P-value in your conclusion.
10.46 In Review Exercise 9.91 on page 313, use the t-
distribution to test the hypothesis that the diet reduces
a woman’s weight by 4.5 kilograms on average against
the alternative hypothesis that the mean difference in
weight is less than 4.5 kilograms. Use a P-value.
10.47 How large a sample is required in Exercise
10.20 if the power of the test is to be 0.90 when the
true mean is 5.20? Assume that σ = 0.24.
10.48 If the distribution of life spans in Exercise 10.19
is approximately normal, how large a sample is re-
quired in order that the probability of committing a
type II error be 0.1 when the true mean is 35.9 months?
Assume that σ = 5.8 months.
10.49 How large a sample is required in Exercise
10.24 if the power of the test is to be 0.95 when the
true average height differs from 162.5 by 3.1 centime-
ters? Use α = 0.02.
10.50 How large should the samples be in Exercise
10.31 if the power of the test is to be 0.95 when the
true difference between thread types A and B is 8 kilo-
grams?
10.51 How large a sample is required in Exercise
10.22 if the power of the test is to be 0.8 when the
true mean meditation time exceeds the hypothesized
value by 1.2σ? Use α = 0.05.
10.52 For testing
H0: μ = 14,
H1: μ ≠ 14,
an α = 0.05 level t-test is being considered. What sam-
ple size is necessary in order for the probability to be
0.1 of falsely failing to reject H0 when the true popula-
tion mean differs from 14 by 0.5? From a preliminary
sample we estimate σ to be 1.25.
10.53 A study was conducted at the Department of
Veterinary Medicine at Virginia Tech to determine if
the “strength” of a wound from surgical incision is af-
fected by the temperature of the knife. Eight dogs
were used in the experiment. “Hot” and “cold” in-
cisions were made on the abdomen of each dog, and
the strength was measured. The resulting data appear
below.
Dog   Knife   Strength        Dog   Knife   Strength
1     Hot     5120            5     Hot     10,000
1     Cold    8200            5     Cold    10,000
2     Hot     10,000          6     Hot     7900
2     Cold    8600            6     Cold    5200
3     Hot     10,000          7     Hot     510
3     Cold    9200            7     Cold    885
4     Hot     10,000          8     Hot     1020
4     Cold    6200            8     Cold    460
(a) Write an appropriate hypothesis to determine if
there is a significant difference in strength between
the hot and cold incisions.
(b) Test the hypothesis using a paired t-test. Use a
P-value in your conclusion.
10.54 Nine subjects were used in an experiment to
determine if exposure to carbon monoxide has an im-
pact on breathing capability. The data were collected
by personnel in the Health and Physical Education De-
partment at Virginia Tech and were analyzed in the
Statistics Consulting Center at Hokie Land. The sub-
jects were exposed to breathing chambers, one of which
contained a high concentration of CO. Breathing fre-
quency measures were made for each subject for each
chamber. The subjects were exposed to the breath-
ing chambers in random sequence. The data give the
breathing frequency, in number of breaths taken per
minute. Make a one-sided test of the hypothesis that
mean breathing frequency is the same for the two en-
vironments. Use α = 0.05. Assume that breathing
frequency is approximately normal.
Subject With CO Without CO
1 30 30
2 45 40
3 26 25
4 25 23
5 34 30
6 51 49
7 46 41
8 32 35
9 30 28
10.8 One Sample: Test on a Single Proportion
Tests of hypotheses concerning proportions are required in many areas. Politicians
are certainly interested in knowing what fraction of the voters will favor them in
the next election. All manufacturing firms are concerned about the proportion of
defective items when a shipment is made. Gamblers depend on a knowledge of the
proportion of outcomes that they consider favorable.
We shall consider the problem of testing the hypothesis that the proportion
of successes in a binomial experiment equals some specified value. That is, we
are testing the null hypothesis H0 that p = p0, where p is the parameter of the
binomial distribution. The alternative hypothesis may be one of the usual one-sided
or two-sided alternatives:
p < p0, p > p0, or p ≠ p0.
The appropriate random variable on which we base our decision criterion is
the binomial random variable X, although we could just as well use the statistic
p̂ = X/n. Values of X that are far from the mean μ = np0 will lead to the rejection
of the null hypothesis. Because X is a discrete binomial variable, it is unlikely that
a critical region can be established whose size is exactly equal to a prespecified
value of α. For this reason it is preferable, in dealing with small samples, to base
our decisions on P-values. To test the hypothesis
H0: p = p0,
H1: p < p0,
we use the binomial distribution to compute the P-value
P = P(X ≤ x when p = p0).
The value x is the number of successes in our sample of size n. If this P-value is
less than or equal to α, our test is significant at the α level and we reject H0 in
favor of H1. Similarly, to test the hypothesis
H0: p = p0,
H1: p > p0,
at the α-level of significance, we compute
P = P(X ≥ x when p = p0)
and reject H0 in favor of H1 if this P-value is less than or equal to α. Finally, to
test the hypothesis
H0: p = p0,
H1: p ≠ p0,
at the α-level of significance, we compute
P = 2P(X ≤ x when p = p0) if x < np0
or
P = 2P(X ≥ x when p = p0) if x > np0
and reject H0 in favor of H1 if the computed P-value is less than or equal to α.
The steps for testing a null hypothesis about a proportion against various al-
ternatives using the binomial probabilities of Table A.1 are as follows:
Testing a Proportion (Small Samples)
1. H0: p = p0.
2. One of the alternatives H1: p < p0, p > p0, or p ≠ p0.
3. Choose a level of significance equal to α.
4. Test statistic: Binomial variable X with p = p0.
5. Computations: Find x, the number of successes, and compute the appropri-
ate P-value.
6. Decision: Draw appropriate conclusions based on the P-value.
Example 10.9: A builder claims that heat pumps are installed in 70% of all homes being con-
structed today in the city of Richmond, Virginia. Would you agree with this claim
if a random survey of new homes in this city showed that 8 out of 15 had heat
pumps installed? Use a 0.10 level of significance.
Solution: 1. H0: p = 0.7.
2. H1: p ≠ 0.7.
3. α = 0.10.
4. Test statistic: Binomial variable X with p = 0.7 and n = 15.
5. Computations: x = 8 and np0 = (15)(0.7) = 10.5. Therefore, from Table A.1,
the computed P-value is
P = 2P(X ≤ 8 when p = 0.7) = 2 Σ(x=0 to 8) b(x; 15, 0.7) = 0.2622 > 0.10.
6. Decision: Do not reject H0. Conclude that there is insufficient reason to
doubt the builder’s claim.
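The small-sample binomial P-value calculation of Example 10.9 can be sketched in a few lines of Python, using only the standard library. This is an illustrative sketch (the helper name is mine, not from the text):

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for a binomial random variable with parameters n and p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

# Example 10.9: H0: p = 0.7 vs. H1: p != 0.7, with x = 8 successes in n = 15.
n, p0, x = 15, 0.7, 8
# Since x = 8 < np0 = 10.5, the two-sided P-value doubles the lower tail.
p_value = 2 * binom_cdf(x, n, p0)
# p_value ≈ 0.2622 > 0.10, so H0 is not rejected.
```

Table A.1 gives the same result from its rounded entries; summing the exact probabilities here agrees to three decimal places.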
In Section 5.2, we saw that binomial probabilities can be obtained from the
actual binomial formula or from Table A.1 when n is small. For large n, approxi-
mation procedures are required. When the hypothesized value p0 is very close to 0
or 1, the Poisson distribution, with parameter μ = np0, may be used. However, the
normal curve approximation, with parameters μ = np0 and σ² = np0q0, is usually
preferred for large n and is very accurate as long as p0 is not extremely close to 0
or to 1. If we use the normal approximation, the z-value for testing p = p0 is
given by
z = (x − np0)/√(np0q0) = (p̂ − p0)/√(p0q0/n),
which is a value of the standard normal variable Z. Hence, for a two-tailed test at the α-level of significance, the critical region is z < −zα/2 or z > zα/2. For the one-sided alternative p < p0, the critical region is z < −zα, and for the alternative p > p0, the critical region is z > zα.
Example 10.10: A commonly prescribed drug for relieving nervous tension is believed to be only
60% effective. Experimental results with a new drug administered to a random
sample of 100 adults who were suffering from nervous tension show that 70 received
relief. Is this sufficient evidence to conclude that the new drug is superior to the
one commonly prescribed? Use a 0.05 level of significance.
Solution: 1. H0: p = 0.6.
2. H1: p > 0.6.
3. α = 0.05.
4. Critical region: z > 1.645.
5. Computations: x = 70, n = 100, p̂ = 70/100 = 0.7, and
z = (0.7 − 0.6)/√((0.6)(0.4)/100) = 2.04, P = P(Z > 2.04) ≈ 0.0207.
6. Decision: Reject H0 and conclude that the new drug is superior.
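The large-sample z-test of Example 10.10 can be reproduced with the standard library alone; a rough sketch (helper and variable names are mine):

```python
from math import sqrt, erfc

def normal_sf(z):
    """Upper-tail probability P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

# Example 10.10: H0: p = 0.6 vs. H1: p > 0.6, with x = 70 successes in n = 100.
x, n, p0 = 70, 100, 0.6
q0 = 1 - p0
p_hat = x / n
z = (p_hat - p0) / sqrt(p0 * q0 / n)   # ≈ 2.04, beyond the critical value 1.645
p_value = normal_sf(z)                 # ≈ 0.021, so H0 is rejected at α = 0.05
```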
10.9 Two Samples: Tests on Two Proportions
Situations often arise where we wish to test the hypothesis that two proportions
are equal. For example, we might want to show evidence that the proportion of
doctors who are pediatricians in one state is equal to the proportion in another
state. A person may decide to give up smoking only if he or she is convinced that
the proportion of smokers with lung cancer exceeds the proportion of nonsmokers
with lung cancer.
In general, we wish to test the null hypothesis that two proportions, or bino-
mial parameters, are equal. That is, we are testing p1 = p2 against one of the
alternatives p1 < p2, p1 > p2, or p1 ≠ p2. Of course, this is equivalent to testing the null hypothesis that p1 − p2 = 0 against one of the alternatives p1 − p2 < 0, p1 − p2 > 0, or p1 − p2 ≠ 0. The statistic on which we base our decision is the random variable P̂1 − P̂2. Independent samples of sizes n1 and n2 are selected at random from two binomial populations, and the proportions of successes P̂1 and P̂2 for the two samples are computed.
In our construction of confidence intervals for p1 and p2 we noted, for n1 and n2 sufficiently large, that the point estimator P̂1 − P̂2 was approximately normally distributed with mean

μP̂1−P̂2 = p1 − p2

and variance

σ²P̂1−P̂2 = p1q1/n1 + p2q2/n2.
Therefore, our critical region(s) can be established by using the standard normal variable

Z = [(P̂1 − P̂2) − (p1 − p2)] / √(p1q1/n1 + p2q2/n2).

When H0 is true, we can substitute p1 = p2 = p and q1 = q2 = q (where p and q are the common values) in the preceding formula for Z to give the form

Z = (P̂1 − P̂2) / √(pq(1/n1 + 1/n2)).
To compute a value of Z, however, we must estimate the parameters p and q that
appear in the radical. Upon pooling the data from both samples, the pooled
estimate of the proportion p is
p̂ = (x1 + x2)/(n1 + n2),
where x1 and x2 are the numbers of successes in each of the two samples. Substituting p̂ for p and q̂ = 1 − p̂ for q, the z-value for testing p1 = p2 is determined from the formula

z = (p̂1 − p̂2)/√(p̂q̂(1/n1 + 1/n2)).
The critical regions for the appropriate alternative hypotheses are set up as before, using critical points of the standard normal curve. Hence, for the alternative p1 ≠ p2 at the α-level of significance, the critical region is z < −zα/2 or z > zα/2. For a test where the alternative is p1 < p2, the critical region is z < −zα, and when the alternative is p1 > p2, the critical region is z > zα.
Example 10.11: A vote is to be taken among the residents of a town and the surrounding county
to determine whether a proposed chemical plant should be constructed. The con-
struction site is within the town limits, and for this reason many voters in the
county believe that the proposal will pass because of the large proportion of town
voters who favor the construction. To determine if there is a significant difference
in the proportions of town voters and county voters favoring the proposal, a poll is
taken. If 120 of 200 town voters favor the proposal and 240 of 500 county residents
favor it, would you agree that the proportion of town voters favoring the proposal is
higher than the proportion of county voters? Use an α = 0.05 level of significance.
Solution: Let p1 and p2 be the true proportions of voters in the town and county, respectively,
favoring the proposal.
1. H0: p1 = p2.
2. H1: p1 > p2.
3. α = 0.05.
4. Critical region: z > 1.645.
5. Computations:

p̂1 = x1/n1 = 120/200 = 0.60, p̂2 = x2/n2 = 240/500 = 0.48, and
p̂ = (x1 + x2)/(n1 + n2) = (120 + 240)/(200 + 500) = 0.51.

Therefore,

z = (0.60 − 0.48)/√((0.51)(0.49)(1/200 + 1/500)) = 2.9,
P = P(Z > 2.9) = 0.0019.
6. Decision: Reject H0 and agree that the proportion of town voters favoring
the proposal is higher than the proportion of county voters.
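The pooled two-proportion test of Example 10.11 can be sketched similarly; again an illustrative sketch (variable names are mine), using only the standard library:

```python
from math import sqrt, erfc

def normal_sf(z):
    """Upper-tail probability P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

# Example 10.11: H0: p1 = p2 vs. H1: p1 > p2.
x1, n1 = 120, 200   # town voters favoring the proposal
x2, n2 = 240, 500   # county voters favoring the proposal
p1_hat, p2_hat = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)      # 360/700 ≈ 0.51
q_pooled = 1 - p_pooled
z = (p1_hat - p2_hat) / sqrt(p_pooled * q_pooled * (1/n1 + 1/n2))  # ≈ 2.87
p_value = normal_sf(z)                # ≈ 0.002, so H0 is rejected at α = 0.05
```

The text rounds the pooled proportion to 0.51 before computing z = 2.9; carrying full precision gives z ≈ 2.87 and essentially the same P-value.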
Exercises
10.55 A marketing expert for a pasta-making com-
pany believes that 40% of pasta lovers prefer lasagna.
If 9 out of 20 pasta lovers choose lasagna over other pas-
tas, what can be concluded about the expert’s claim?
Use a 0.05 level of significance.
10.56 Suppose that, in the past, 40% of all adults
favored capital punishment. Do we have reason to
believe that the proportion of adults favoring capital
punishment has increased if, in a random sample of 15
adults, 8 favor capital punishment? Use a 0.05 level of
significance.
10.57 A new radar device is being considered for a
certain missile defense system. The system is checked
by experimenting with aircraft in which a kill or a no
kill is simulated. If, in 300 trials, 250 kills occur, accept
or reject, at the 0.04 level of significance, the claim that
the probability of a kill with the new system does not
exceed the 0.8 probability of the existing device.
10.58 It is believed that at least 60% of the residents
in a certain area favor an annexation suit by a neigh-
boring city. What conclusion would you draw if only
110 in a sample of 200 voters favored the suit? Use a
0.05 level of significance.
10.59 A fuel oil company claims that one-fifth of the
homes in a certain city are heated by oil. Do we have
reason to believe that fewer than one-fifth are heated
by oil if, in a random sample of 1000 homes in this city,
136 are heated by oil? Use a P-value in your conclu-
sion.
10.60 At a certain college, it is estimated that at most
25% of the students ride bicycles to class. Does this
seem to be a valid estimate if, in a random sample of
90 college students, 28 are found to ride bicycles to
class? Use a 0.05 level of significance.
10.61 In a winter of an epidemic flu, the parents of
2000 babies were surveyed by researchers at a well-
known pharmaceutical company to determine if the
company’s new medicine was effective after two days.
Among 120 babies who had the flu and were given the
medicine, 29 were cured within two days. Among 280
babies who had the flu but were not given the medicine,
56 recovered within two days. Is there any significant
indication that supports the company’s claim of the
effectiveness of the medicine?
10.62 In a controlled laboratory experiment, scien-
tists at the University of Minnesota discovered that
25% of a certain strain of rats subjected to a 20% coffee
bean diet and then force-fed a powerful cancer-causing
chemical later developed cancerous tumors. Would we
have reason to believe that the proportion of rats devel-
oping tumors when subjected to this diet has increased
if the experiment were repeated and 16 of 48 rats de-
veloped tumors? Use a 0.05 level of significance.
10.63 In a study to estimate the proportion of resi-
dents in a certain city and its suburbs who favor the
construction of a nuclear power plant, it is found that
63 of 100 urban residents favor the construction while
only 59 of 125 suburban residents are in favor. Is there
a significant difference between the proportions of ur-
ban and suburban residents who favor construction of
the nuclear plant? Make use of a P-value.
10.64 In a study on the fertility of married women
conducted by Martin O’Connell and Carolyn C. Rogers
for the Census Bureau in 1979, two groups of childless
wives aged 25 to 29 were selected at random, and each
was asked if she eventually planned to have a child.
One group was selected from among wives married
less than two years and the other from among wives
married five years. Suppose that 240 of the 300 wives
married less than two years planned to have children
some day compared to 288 of the 400 wives married
five years. Can we conclude that the proportion of
wives married less than two years who planned to have
children is significantly higher than the proportion of
wives married five years? Make use of a P-value.
10.65 An urban community would like to show that
the incidence of breast cancer is higher in their area
than in a nearby rural area. (PCB levels were found to
be higher in the soil of the urban community.) If it is
found that 20 of 200 adult women in the urban com-
munity have breast cancer and 10 of 150 adult women
in the rural community have breast cancer, can we con-
clude at the 0.05 level of significance that breast cancer
is more prevalent in the urban community?
10.66 Group Project: The class should be divided
into pairs of students for this project. Suppose it is
conjectured that at least 25% of students at your uni-
versity exercise for more than two hours a week. Col-
lect data from a random sample of 50 students. Ask
each student if he or she works out for at least two
hours per week. Then do the computations that allow
either rejection or nonrejection of the above conjecture.
Show all work and quote a P-value in your conclusion.
10.10 One- and Two-Sample Tests Concerning Variances
In this section, we are concerned with testing hypotheses concerning population
variances or standard deviations. Applications of one- and two-sample tests on
variances are certainly not difficult to motivate. Engineers and scientists are con-
fronted with studies in which they are required to demonstrate that measurements
involving products or processes adhere to specifications set by consumers. The
specifications are often met if the process variance is sufficiently small. Attention
is also focused on comparative experiments between methods or processes, where
inherent reproducibility or variability must formally be compared. In addition,
to determine if the equal variance assumption is violated, a test comparing two
variances is often applied prior to conducting a t-test on two means.
Let us first consider the problem of testing the null hypothesis H0 that the population variance σ² equals a specified value σ0² against one of the usual alternatives σ² < σ0², σ² > σ0², or σ² ≠ σ0². The appropriate statistic on which to base our decision is the chi-squared statistic of Theorem 8.4, which was used in Chapter 9 to construct a confidence interval for σ². Therefore, if we assume that the distribution of the population being sampled is normal, the chi-squared value for testing σ² = σ0² is given by

χ² = (n − 1)s²/σ0²,

where n is the sample size, s² is the sample variance, and σ0² is the value of σ² given by the null hypothesis. If H0 is true, χ² is a value of the chi-squared distribution with v = n − 1 degrees of freedom. Hence, for a two-tailed test at the α-level of significance, the critical region is χ² < χ²1−α/2 or χ² > χ²α/2. For the one-sided alternative σ² < σ0², the critical region is χ² < χ²1−α, and for the one-sided alternative σ² > σ0², the critical region is χ² > χ²α.
Robustness of the χ²-Test to the Assumption of Normality
The reader may have discerned that various tests depend, at least theoretically, on the assumption of normality. In general, many procedures in applied statistics have theoretical underpinnings that depend on the normal distribution. These procedures vary in the degree of their dependency on the assumption of normality. A procedure that is reasonably insensitive to the assumption is called a robust procedure (i.e., robust to normality). The χ²-test on a single variance is very nonrobust to normality (i.e., the practical success of the procedure depends on normality). As a result, the P-value computed may be appreciably different from the actual P-value if the population sampled is not normal. Indeed, it is quite feasible that a statistically significant P-value may not truly signal H1: σ ≠ σ0; rather, a significant value may be a result of the violation of the normality assumptions. Therefore, the analyst should approach the use of this particular χ²-test with caution.
Example 10.12: A manufacturer of car batteries claims that the life of the company’s batteries is
approximately normally distributed with a standard deviation equal to 0.9 year.
If a random sample of 10 of these batteries has a standard deviation of 1.2 years,
do you think that σ > 0.9 year? Use a 0.05 level of significance.
Solution: 1. H0: σ² = 0.81.
2. H1: σ² > 0.81.
3. α = 0.05.
4. Critical region: From Figure 10.19 we see that the null hypothesis is rejected when χ² > 16.919, where χ² = (n − 1)s²/σ0², with v = 9 degrees of freedom.
Figure 10.19: Critical region for the alternative hypothesis σ > 0.9.
5. Computations: s² = 1.44, n = 10, and

χ² = (9)(1.44)/0.81 = 16.0, P ≈ 0.07.

6. Decision: The χ²-statistic is not significant at the 0.05 level. However, based on the P-value of 0.07, there is evidence that σ > 0.9.
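The computation in Example 10.12 can be sketched with the standard library alone. The survival function below uses the standard recurrence for the regularized upper incomplete gamma function at half-integer arguments; the helper names are mine, not from the text:

```python
from math import erfc, exp, gamma, sqrt

def chi2_sf(x, df):
    """P(X > x) for a chi-squared variable with df degrees of freedom."""
    t = x / 2.0
    if df % 2 == 0:
        a, q = 1.0, exp(-t)          # Q(1, t) = e^(-t)
    else:
        a, q = 0.5, erfc(sqrt(t))    # Q(1/2, t) = erfc(sqrt(t))
    # Recurrence: Q(a + 1, t) = Q(a, t) + t**a * e^(-t) / gamma(a + 1)
    while a < df / 2.0:
        q += t**a * exp(-t) / gamma(a + 1.0)
        a += 1.0
    return q

# Example 10.12: n = 10, s = 1.2, H0: sigma^2 = 0.81 vs. H1: sigma^2 > 0.81.
n, s, sigma0_sq = 10, 1.2, 0.81
chi2 = (n - 1) * s**2 / sigma0_sq   # = 16.0
p_value = chi2_sf(chi2, n - 1)      # ≈ 0.067, consistent with the text's P ≈ 0.07
```

As a sanity check, evaluating the function at the tabulated 0.05 critical value, chi2_sf(16.919, 9), returns approximately 0.05.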
Now let us consider the problem of testing the equality of the variances σ1² and σ2² of two populations. That is, we shall test the null hypothesis H0 that σ1² = σ2² against one of the usual alternatives

σ1² < σ2², σ1² > σ2², or σ1² ≠ σ2².

For independent random samples of sizes n1 and n2, respectively, from the two populations, the f-value for testing σ1² = σ2² is the ratio

f = s1²/s2²,

where s1² and s2² are the variances computed from the two samples. If the two populations are approximately normally distributed and the null hypothesis is true, according to Theorem 8.8 the ratio f = s1²/s2² is a value of the F-distribution with v1 = n1 − 1 and v2 = n2 − 1 degrees of freedom. Therefore, the critical regions
of size α corresponding to the one-sided alternatives σ1² < σ2² and σ1² > σ2² are, respectively, f < f1−α(v1, v2) and f > fα(v1, v2). For the two-sided alternative σ1² ≠ σ2², the critical region is f < f1−α/2(v1, v2) or f > fα/2(v1, v2).
Example 10.13: In testing for the difference in the abrasive wear of the two materials in Example
10.6, we assumed that the two unknown population variances were equal. Were we
justified in making this assumption? Use a 0.10 level of significance.
Solution: Let σ1² and σ2² be the population variances for the abrasive wear of material 1 and material 2, respectively.
1. H0: σ1² = σ2².
2. H1: σ1² ≠ σ2².
3. α = 0.10.
4. Critical region: From Figure 10.20, we see that f0.05(11, 9) = 3.11, and, by using Theorem 8.7, we find

f0.95(11, 9) = 1/f0.05(9, 11) = 0.34.

Therefore, the null hypothesis is rejected when f < 0.34 or f > 3.11, where f = s1²/s2² with v1 = 11 and v2 = 9 degrees of freedom.
5. Computations: s1² = 16, s2² = 25, and hence f = 16/25 = 0.64.
6. Decision: Do not reject H0. Conclude that there is insufficient evidence that the variances differ.
Figure 10.20: Critical region for the alternative hypothesis σ1² ≠ σ2².
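The decision in Example 10.13 reduces to comparing the observed variance ratio with two F-table cutoffs; a minimal sketch of that comparison (the critical values 0.34 and 3.11 are the table lookups quoted in the text, not computed here):

```python
# Example 10.13: H0: sigma1^2 = sigma2^2 vs. H1: sigma1^2 != sigma2^2, alpha = 0.10.
s1_sq, s2_sq = 16.0, 25.0            # sample variances from Example 10.6
f = s1_sq / s2_sq                    # observed ratio, 0.64
# Two-sided critical values for v1 = 11, v2 = 9, as quoted in the text:
f_lower = 0.34                       # f0.95(11, 9) = 1 / f0.05(9, 11)
f_upper = 3.11                       # f0.05(11, 9)
reject = f < f_lower or f > f_upper  # False: do not reject H0
```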
F-Test for Testing Variances in SAS
Figure 10.18 on page 356 displays the printout of a two-sample t-test where two
means from the seedling data in Exercise 9.40 were compared. Box-and-whisker
plots in Figure 10.17 on page 355 suggest that variances are not homogeneous,
and thus the t′-statistic and its corresponding P-value are relevant. Note also that
the printout displays the F-statistic for H0: σ1 = σ2 with a P-value of 0.0098,
additional evidence that more variability is to be expected when nitrogen is used
than under the no-nitrogen condition.
Exercises
10.67 The content of containers of a particular lubricant is known to be normally distributed with a variance of 0.03 liter. Test the hypothesis that σ² = 0.03 against the alternative that σ² ≠ 0.03 for the random
sample of 10 containers in Exercise 10.23 on page 356.
Use a P-value in your conclusion.
10.68 Past experience indicates that the time re-
quired for high school seniors to complete a standard-
ized test is a normal random variable with a standard
deviation of 6 minutes. Test the hypothesis that σ = 6
against the alternative that σ < 6 if a random sample of
the test times of 20 high school seniors has a standard
deviation s = 4.51. Use a 0.05 level of significance.
10.69 Aflatoxins produced by mold on peanut crops
in Virginia must be monitored. A sample of 64 batches
of peanuts reveals levels of 24.17 ppm, on average,
with a variance of 4.25 ppm. Test the hypothesis that
σ² = 4.2 ppm against the alternative that σ² ≠ 4.2
ppm. Use a P-value in your conclusion.
10.70 Past data indicate that the amount of money
contributed by the working residents of a large city to
a volunteer rescue squad is a normal random variable
with a standard deviation of $1.40. It has been sug-
gested that the contributions to the rescue squad from
just the employees of the sanitation department are
much more variable. If the contributions of a random
sample of 12 employees from the sanitation department
have a standard deviation of $1.75, can we conclude at
the 0.01 level of significance that the standard devi-
ation of the contributions of all sanitation workers is
greater than that of all workers living in the city?
10.71 A soft-drink dispensing machine is said to be
out of control if the variance of the contents exceeds
1.15 deciliters. If a random sample of 25 drinks from
this machine has a variance of 2.03 deciliters, does this
indicate at the 0.05 level of significance that the ma-
chine is out of control? Assume that the contents are
approximately normally distributed.
10.72 Large-Sample Test of σ² = σ0²: When n ≥ 30, we can test the null hypothesis that σ² = σ0², or σ = σ0, by computing

z = (s − σ0)/(σ0/√(2n)),
which is a value of a random variable whose sampling
distribution is approximately the standard normal dis-
tribution.
(a) With reference to Example 10.4, test at the 0.05
level of significance whether σ = 10.0 years against
the alternative that σ ≠ 10.0 years.
(b) It is suspected that the variance of the distribution
of distances in kilometers traveled on 5 liters of fuel
by a new automobile model equipped with a diesel
engine is less than the variance of the distribution
of distances traveled by the same model equipped
with a six-cylinder gasoline engine, which is known
to be σ² = 6.25. If 72 test runs of the diesel model
have a variance of 4.41, can we conclude at the
0.05 level of significance that the variance of the
distances traveled by the diesel model is less than
that of the gasoline model?
10.73 A study is conducted to compare the lengths of
time required by men and women to assemble a certain
product. Past experience indicates that the distribu-
tion of times for both men and women is approximately
normal but the variance of the times for women is less
than that for men. A random sample of times for 11
men and 14 women produced the following data:
Men Women
n1 = 11 n2 = 14
s1 = 6.1 s2 = 5.3
Test the hypothesis that σ1² = σ2² against the alternative that σ1² > σ2². Use a P-value in your conclusion.
10.74 For Exercise 10.41 on page 358, test the hypothesis at the 0.05 level of significance that σ1² = σ2² against the alternative that σ1² ≠ σ2², where σ1² and σ2² are the variances of the number of organisms per square meter of water at the two different locations on Cedar Run.
10.75 With reference to Exercise 10.39 on page 358,
test the hypothesis that σ1² = σ2² against the alternative that σ1² ≠ σ2², where σ1² and σ2² are the variances
for the running times of films produced by company 1
and company 2, respectively. Use a P-value.
10.76 Two types of instruments for measuring the
amount of sulfur monoxide in the atmosphere are being
compared in an air-pollution experiment. Researchers
370 Chapter 10 One- and Two-Sample Tests of Hypotheses
wish to determine whether the two types of instruments
yield measurements having the same variability. The
readings in the following table were recorded for the
two instruments.
Sulfur Monoxide
Instrument A Instrument B
0.86 0.87
0.82 0.74
0.75 0.63
0.61 0.55
0.89 0.76
0.64 0.70
0.81 0.69
0.68 0.57
0.65 0.53
Assuming the populations of measurements to be ap-
proximately normally distributed, test the hypothesis
that σA = σB against the alternative that σA ≠ σB.
Use a P-value.
10.77 An experiment was conducted to compare the
alcohol content of soy sauce on two different produc-
tion lines. Production was monitored eight times a day.
The data are shown here.
Production line 1:
0.48 0.39 0.42 0.52 0.40 0.48 0.52 0.52
Production line 2:
0.38 0.37 0.39 0.41 0.38 0.39 0.40 0.39
Assume both populations are normal. It is suspected
that production line 1 is not producing as consistently
as production line 2 in terms of alcohol content. Test
the hypothesis that σ1 = σ2 against the alternative
that σ1 > σ2. Use a P-value.
10.78 Hydrocarbon emissions from cars are known to
have decreased dramatically during the 1980s. A study
was conducted to compare the hydrocarbon emissions
at idling speed, in parts per million (ppm), for automo-
biles from 1980 and 1990. Twenty cars of each model
year were randomly selected, and their hydrocarbon
emission levels were recorded. The data are as follows:
1980 models:
141 359 247 940 882 494 306 210 105 880
200 223 188 940 241 190 300 435 241 380
1990 models:
140 160 20 20 223 60 20 95 360 70
220 400 217 58 235 380 200 175 85 65
Test the hypothesis that σ1 = σ2 against the alternative
that σ1 ≠ σ2. Assume both populations are
normal. Use a P-value.
10.11 Goodness-of-Fit Test
Throughout this chapter, we have been concerned with the testing of statistical
hypotheses about single population parameters such as μ, σ², and p. Now we shall
consider a test to determine if a population has a specified theoretical distribution.
The test is based on how good a fit we have between the frequency of occurrence
of observations in an observed sample and the expected frequencies obtained from
the hypothesized distribution.
To illustrate, we consider the tossing of a die. We hypothesize that the die
is honest, which is equivalent to testing the hypothesis that the distribution of
outcomes is the discrete uniform distribution
f(x) = 1/6,  x = 1, 2, . . . , 6.
Suppose that the die is tossed 120 times and each outcome is recorded. Theoret-
ically, if the die is balanced, we would expect each face to occur 20 times. The
results are given in Table 10.4.
Table 10.4: Observed and Expected Frequencies of 120 Tosses of a Die
Face: 1 2 3 4 5 6
Observed 20 22 17 18 19 24
Expected 20 20 20 20 20 20
By comparing the observed frequencies with the corresponding expected fre-
quencies, we must decide whether these discrepancies are likely to occur as a result
of sampling fluctuations and the die is balanced or whether the die is not honest
and the distribution of outcomes is not uniform. It is common practice to refer
to each possible outcome of an experiment as a cell. In our illustration, we have
6 cells. The appropriate statistic on which we base our decision criterion for an
experiment involving k cells is defined by the following.
Goodness-of-Fit Test
A goodness-of-fit test between observed and expected frequencies is based
on the quantity

χ² = Σ_{i=1}^{k} (o_i − e_i)²/e_i,

where χ² is a value of a random variable whose sampling distribution is
approximated very closely by the chi-squared distribution with v = k − 1 degrees of
freedom. The symbols o_i and e_i represent the observed and expected frequencies,
respectively, for the ith cell.
The number of degrees of freedom associated with the chi-squared distribution
used here is equal to k − 1, since there are only k − 1 freely determined cell fre-
quencies. That is, once k − 1 cell frequencies are determined, so is the frequency
for the kth cell.
If the observed frequencies are close to the corresponding expected frequencies,
the χ²-value will be small, indicating a good fit. If the observed frequencies differ
considerably from the expected frequencies, the χ²-value will be large and the fit
is poor. A good fit leads to the acceptance of H0, whereas a poor fit leads to its
rejection. The critical region will, therefore, fall in the right tail of the chi-squared
distribution. For a level of significance equal to α, we find the critical value χ²_α
from Table A.5, and then χ² > χ²_α constitutes the critical region. The decision
criterion described here should not be used unless each of the expected
frequencies is at least equal to 5. This restriction may require the combining
of adjacent cells, resulting in a reduction in the number of degrees of freedom.
From Table 10.4, we find the χ²-value to be

χ² = (20 − 20)²/20 + (22 − 20)²/20 + (17 − 20)²/20
   + (18 − 20)²/20 + (19 − 20)²/20 + (24 − 20)²/20 = 1.7.
Using Table A.5, we find χ²_0.05 = 11.070 for v = 5 degrees of freedom. Since 1.7
is less than the critical value, we fail to reject H0. We conclude that there is
insufficient evidence that the die is not balanced.
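The die calculation above is easy to reproduce. The short Python sketch below (an illustration, not part of the original text) recomputes the χ²-value and compares it with the tabled critical value:

```python
# Goodness-of-fit statistic for the die data of Table 10.4
observed = [20, 22, 17, 18, 19, 24]
expected = [20.0] * 6          # 120 tosses / 6 faces under the uniform hypothesis

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
critical = 11.070              # chi-squared 0.05 critical value, v = 5 (Table A.5)
print(round(chi2, 2))          # 1.7
print(chi2 > critical)         # False, so we fail to reject H0
```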
As a second illustration, let us test the hypothesis that the frequency distri-
bution of battery lives given in Table 1.7 on page 23 may be approximated by
a normal distribution with mean μ = 3.5 and standard deviation σ = 0.7. The
expected frequencies for the 7 classes (cells), listed in Table 10.5, are obtained by
computing the areas under the hypothesized normal curve that fall between the
various class boundaries.
Table 10.5: Observed and Expected Frequencies of Battery Lives, Assuming Normality

Class Boundaries     o_i          e_i
1.45−1.95             2 }          0.5 }
1.95−2.45             1 } 7        2.1 } 8.5
2.45−2.95             4 }          5.9 }
2.95−3.45            15           10.3
3.45−3.95            10           10.7
3.95−4.45             5 }          7.0 }
4.45−4.95             3 } 8        3.5 } 10.5
For example, the z-values corresponding to the boundaries of the fourth class
are

z1 = (2.95 − 3.5)/0.7 = −0.79  and  z2 = (3.45 − 3.5)/0.7 = −0.07.
From Table A.3 we find the area between z1 = −0.79 and z2 = −0.07 to be

area = P(−0.79 < Z < −0.07) = P(Z < −0.07) − P(Z < −0.79)
     = 0.4721 − 0.2148 = 0.2573.
Hence, the expected frequency for the fourth class is
e4 = (0.2573)(40) = 10.3.
It is customary to round these frequencies to one decimal.
The expected frequency for the first class interval is obtained by using the total
area under the normal curve to the left of the boundary 1.95. For the last class
interval, we use the total area to the right of the boundary 4.45. All other expected
frequencies are determined by the method described for the fourth class. Note that
we have combined adjacent classes in Table 10.5 where the expected frequencies
are less than 5 (a rule of thumb in the goodness-of-fit test). Consequently, the total
number of intervals is reduced from 7 to 4, resulting in v = 3 degrees of freedom.
The χ²-value is then given by

χ² = (7 − 8.5)²/8.5 + (15 − 10.3)²/10.3 + (10 − 10.7)²/10.7 + (8 − 10.5)²/10.5 = 3.05.
Since the computed χ²-value is less than χ²_0.05 = 7.815 for 3 degrees of freedom,
we have no reason to reject the null hypothesis and conclude that the normal
we have no reason to reject the null hypothesis and conclude that the normal
distribution with μ = 3.5 and σ = 0.7 provides a good fit for the distribution of
battery lives.
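For readers who want to verify the normal-curve areas without Table A.3, here is a small Python sketch (ours, not from the text) that recomputes the combined-cell expected frequencies and the χ²-value. Because it does not round the intermediate z-scores to two decimals, it gives about 3.15 rather than the 3.05 obtained above:

```python
from math import erf, sqrt

def norm_cdf(x, mu=3.5, sigma=0.7):
    """CDF of the hypothesized N(3.5, 0.7) battery-life distribution."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

n = 40  # total number of battery lives
# Cells after combining, as in Table 10.5:
# (-inf, 2.95], (2.95, 3.45], (3.45, 3.95], (3.95, inf)
observed = [7, 15, 10, 8]
cuts = [2.95, 3.45, 3.95]
cdfs = [norm_cdf(c) for c in cuts]
probs = [cdfs[0], cdfs[1] - cdfs[0], cdfs[2] - cdfs[1], 1.0 - cdfs[2]]
expected = [n * p for p in probs]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))  # about 3.15; the text's 3.05 reflects rounded z-scores
```

Either value is well below χ²_0.05 = 7.815, so the conclusion is unchanged.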
The chi-squared goodness-of-fit test is an important resource, particularly since
so many statistical procedures in practice depend, in a theoretical sense, on the
assumption that the data gathered come from a specific type of distribution. As
we have already seen, the normality assumption is often made. In the chapters
that follow, we shall continue to make normality assumptions in order to provide
a theoretical basis for certain tests and confidence intervals.
There are tests in the literature that are more powerful than the chi-squared test
for testing normality. One such test is called Geary’s test. This test is based on a
very simple statistic which is a ratio of two estimators of the population standard
deviation σ. Suppose that a random sample X1, X2, . . . , Xn is taken from a normal
distribution, N(μ, σ). Consider the ratio

U = √(π/2) [Σ_{i=1}^{n} |X_i − X̄|/n] / √[Σ_{i=1}^{n} (X_i − X̄)²/n].
The reader should recognize that the denominator is a reasonable estimator of σ
whether the distribution is normal or not. The numerator is a good estimator of σ
if the distribution is normal but may overestimate or underestimate σ when there
are departures from normality. Thus, values of U differing considerably from 1.0
represent the signal that the hypothesis of normality should be rejected.
For large samples, a reasonable test is based on approximate normality of U.
The test statistic is then a standardization of U, given by

Z = (U − 1)/(0.2661/√n).
Of course, the test procedure involves the two-sided critical region. We compute
a value of z from the data and do not reject the hypothesis of normality when

−z_{α/2} < Z < z_{α/2}.
A paper dealing with Geary’s test is cited in the Bibliography (Geary, 1947).
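Geary's statistic is simple enough to compute directly. The following Python function is a sketch based on the definitions above (the name geary_z is ours, not a standard API):

```python
from math import pi, sqrt

def geary_z(x):
    """Standardized Geary statistic Z = (U - 1) / (0.2661 / sqrt(n))."""
    n = len(x)
    xbar = sum(x) / n
    mean_abs_dev = sum(abs(xi - xbar) for xi in x) / n       # numerator estimator
    rms_dev = sqrt(sum((xi - xbar) ** 2 for xi in x) / n)    # denominator estimator
    u = sqrt(pi / 2.0) * mean_abs_dev / rms_dev
    return (u - 1.0) / (0.2661 / sqrt(n))
```

One would reject the hypothesis of normality at level α when the returned value exceeds z_{α/2} in absolute value; the large-sample approximation behind this test means it should be used with caution for small n.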
10.12 Test for Independence (Categorical Data)
The chi-squared test procedure discussed in Section 10.11 can also be used to test
the hypothesis of independence of two variables of classification. Suppose that
we wish to determine whether the opinions of the voting residents of the state
of Illinois concerning a new tax reform are independent of their levels of income.
Members of a random sample of 1000 registered voters from the state of Illinois
are classified as to whether they are in a low, medium, or high income bracket and
whether or not they favor the tax reform. The observed frequencies are presented
in Table 10.6, which is known as a contingency table.
Table 10.6: 2 × 3 Contingency Table
Income Level
Tax Reform Low Medium High Total
For 182 213 203 598
Against 154 138 110 402
Total 336 351 313 1000
A contingency table with r rows and c columns is referred to as an r × c table
(“r × c” is read “r by c”). The row and column totals in Table 10.6 are called
marginal frequencies. Our decision to accept or reject the null hypothesis, H0,
of independence between a voter’s opinion concerning the tax reform and his or
her level of income is based upon how good a fit we have between the observed
frequencies in each of the 6 cells of Table 10.6 and the frequencies that we would
expect for each cell under the assumption that H0 is true. To find these expected
frequencies, let us define the following events:
L: A person selected is in the low-income level.
M: A person selected is in the medium-income level.
H: A person selected is in the high-income level.
F: A person selected is for the tax reform.
A: A person selected is against the tax reform.
By using the marginal frequencies, we can list the following probability estimates:

P(L) = 336/1000,  P(M) = 351/1000,  P(H) = 313/1000,
P(F) = 598/1000,  P(A) = 402/1000.
Now, if H0 is true and the two variables are independent, we should have

P(L ∩ F) = P(L)P(F) = (336/1000)(598/1000),
P(L ∩ A) = P(L)P(A) = (336/1000)(402/1000),
P(M ∩ F) = P(M)P(F) = (351/1000)(598/1000),
P(M ∩ A) = P(M)P(A) = (351/1000)(402/1000),
P(H ∩ F) = P(H)P(F) = (313/1000)(598/1000),
P(H ∩ A) = P(H)P(A) = (313/1000)(402/1000).
The expected frequencies are obtained by multiplying each cell probability by
the total number of observations. As before, we round these frequencies to one
decimal. Thus, the expected number of low-income voters in our sample who favor
the tax reform is estimated to be

(336/1000)(598/1000)(1000) = (336)(598)/1000 = 200.9
when H0 is true. The general rule for obtaining the expected frequency of any cell
is given by the following formula:

expected frequency = [(column total) × (row total)] / (grand total).
The expected frequency for each cell is recorded in parentheses beside the actual
observed value in Table 10.7. Note that the expected frequencies in any row or
column add up to the appropriate marginal total. In our example, we need to
compute only two expected frequencies in the top row of Table 10.7 and then find
the others by subtraction. The number of degrees of freedom associated with the
chi-squared test used here is equal to the number of cell frequencies that may be
filled in freely when we are given the marginal totals and the grand total, and in
this illustration that number is 2. A simple formula providing the correct number
of degrees of freedom is
v = (r − 1)(c − 1).
Table 10.7: Observed and Expected Frequencies

                          Income Level
Tax Reform    Low            Medium         High           Total
For           182 (200.9)    213 (209.9)    203 (187.2)     598
Against       154 (135.1)    138 (141.1)    110 (125.8)     402
Total         336            351            313            1000
Hence, for our example, v = (2 − 1)(3 − 1) = 2 degrees of freedom. To test the
null hypothesis of independence, we use the following decision criterion.
Test for Independence
Calculate

χ² = Σ_i (o_i − e_i)²/e_i,

where the summation extends over all rc cells in the r × c contingency table.
If χ² > χ²_α with v = (r − 1)(c − 1) degrees of freedom, reject the null hypothesis
of independence at the α-level of significance; otherwise, fail to reject the null
hypothesis.
Applying this criterion to our example, we find that

χ² = (182 − 200.9)²/200.9 + (213 − 209.9)²/209.9 + (203 − 187.2)²/187.2
   + (154 − 135.1)²/135.1 + (138 − 141.1)²/141.1 + (110 − 125.8)²/125.8 = 7.85,

P ≈ 0.02.
From Table A.5 we find that χ²_0.05 = 5.991 for v = (2 − 1)(3 − 1) = 2 degrees of
freedom. The null hypothesis is rejected and we conclude that a voter’s opinion
concerning the tax reform and his or her level of income are not independent.
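The expected-count formula and the χ² sum translate directly into code. This Python sketch (ours, not the text's) reproduces the tax-reform test; it gives about 7.88 rather than 7.85 because it does not round the expected frequencies to one decimal as the hand calculation does:

```python
# Tax-reform contingency table (Table 10.6): rows = For/Against, cols = income level
table = [[182, 213, 203],
         [154, 138, 110]]

row_t = [sum(r) for r in table]
col_t = [sum(c) for c in zip(*table)]
grand = sum(row_t)

chi2 = 0.0
for i, row in enumerate(table):
    for j, o in enumerate(row):
        e = row_t[i] * col_t[j] / grand   # (column total)(row total)/grand total
        chi2 += (o - e) ** 2 / e
print(round(chi2, 2))  # about 7.88, well above the 5.991 critical value for v = 2
```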
It is important to remember that the statistic on which we base our decision
has a distribution that is only approximated by the chi-squared distribution. The
computed χ²-values depend on the cell frequencies and consequently are discrete.
The continuous chi-squared distribution seems to approximate the discrete
sampling distribution of χ² very well, provided that the number of degrees of freedom
is greater than 1. In a 2 × 2 contingency table, where we have only 1 degree of
freedom, a correction called Yates' correction for continuity is applied. The
corrected formula then becomes

χ² (corrected) = Σ_i (|o_i − e_i| − 0.5)²/e_i.
If the expected cell frequencies are large, the corrected and uncorrected results
are almost the same. When the expected frequencies are between 5 and 10, Yates’
correction should be applied. For expected frequencies less than 5, the Fisher-Irwin
exact test should be used. A discussion of this test may be found in Basic Concepts
of Probability and Statistics by Hodges and Lehmann (2005; see the Bibliography).
The Fisher-Irwin test may be avoided, however, by choosing a larger sample.
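To make the correction concrete, here is a Python sketch applying Yates' formula to a hypothetical 2 × 2 table (the counts are invented purely for illustration):

```python
# Hypothetical 2 x 2 table; Yates' correction subtracts 0.5 from each |o - e|
table = [[30, 20],
         [15, 35]]

row_t = [sum(r) for r in table]
col_t = [sum(c) for c in zip(*table)]
grand = sum(row_t)

chi2_corr = 0.0
for i in range(2):
    for j in range(2):
        e = row_t[i] * col_t[j] / grand
        chi2_corr += (abs(table[i][j] - e) - 0.5) ** 2 / e
print(round(chi2_corr, 2))  # 7.92 for these made-up counts
```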
10.13 Test for Homogeneity
When we tested for independence in Section 10.12, a random sample of 1000 vot-
ers was selected and the row and column totals for our contingency table were
determined by chance. Another type of problem for which the method of Section
10.12 applies is one in which either the row or column totals are predetermined.
Suppose, for example, that we decide in advance to select 200 Democrats, 150
Republicans, and 150 Independents from the voters of the state of North Carolina
and record whether they are for a proposed abortion law, against it, or undecided.
The observed responses are given in Table 10.8.
Table 10.8: Observed Frequencies

                      Political Affiliation
Abortion Law   Democrat   Republican   Independent   Total
For                82          70           62         214
Against            93          62           67         222
Undecided          25          18           21          64
Total             200         150          150         500
Now, rather than test for independence, we test the hypothesis that the popu-
lation proportions within each row are the same. That is, we test the hypothesis
that the proportions of Democrats, Republicans, and Independents favoring the
abortion law are the same; the proportions of each political affiliation against the
law are the same; and the proportions of each political affiliation that are unde-
cided are the same. We are basically interested in determining whether the three
categories of voters are homogeneous with respect to their opinions concerning
the proposed abortion law. Such a test is called a test for homogeneity.
Assuming homogeneity, we again find the expected cell frequencies by multi-
plying the corresponding row and column totals and then dividing by the grand
total. The analysis then proceeds using the same chi-squared statistic as before.
We illustrate this process for the data of Table 10.8 in the following example.
Example 10.14: Referring to the data of Table 10.8, test the hypothesis that opinions concerning
the proposed abortion law are the same within each political affiliation. Use a 0.05
level of significance.
Solution: 1. H0: For each opinion, the proportions of Democrats, Republicans, and Inde-
pendents are the same.
2. H1: For at least one opinion, the proportions of Democrats, Republicans, and
Independents are not the same.
3. α = 0.05.
4. Critical region: χ² > 9.488 with v = 4 degrees of freedom.
5. Computations: Using the expected cell frequency formula on page 375, we
need to compute 4 cell frequencies. All other frequencies are found by sub-
traction. The observed and expected cell frequencies are displayed in Table
10.9.
Table 10.9: Observed and Expected Frequencies

                        Political Affiliation
Abortion Law   Democrat      Republican    Independent   Total
For            82 (85.6)     70 (64.2)     62 (64.2)      214
Against        93 (88.8)     62 (66.6)     67 (66.6)      222
Undecided      25 (25.6)     18 (19.2)     21 (19.2)       64
Total          200           150           150            500
Now,

χ² = (82 − 85.6)²/85.6 + (70 − 64.2)²/64.2 + (62 − 64.2)²/64.2
   + (93 − 88.8)²/88.8 + (62 − 66.6)²/66.6 + (67 − 66.6)²/66.6
   + (25 − 25.6)²/25.6 + (18 − 19.2)²/19.2 + (21 − 19.2)²/19.2 = 1.53.
6. Decision: Do not reject H0. There is insufficient evidence to conclude that
the proportions of Democrats, Republicans, and Independents differ for each
stated opinion.
Testing for Several Proportions
The chi-squared statistic for testing for homogeneity is also applicable when testing
the hypothesis that k binomial parameters have the same value. This is, therefore,
an extension of the test presented in Section 10.9 for determining differences be-
tween two proportions to a test for determining differences among k proportions.
Hence, we are interested in testing the null hypothesis
H0 : p1 = p2 = · · · = pk
against the alternative hypothesis, H1, that the population proportions are not all
equal. To perform this test, we first observe independent random samples of size
n1, n2, . . . , nk from the k populations and arrange the data in a 2 × k contingency
table, Table 10.10.
Table 10.10: k Independent Binomial Samples
Sample: 1 2 · · · k
Successes x1 x2 · · · xk
Failures n1 − x1 n2 − x2 · · · nk − xk
Depending on whether the sizes of the random samples were predetermined or
occurred at random, the test procedure is identical to the test for homogeneity or
the test for independence. Therefore, the expected cell frequencies are calculated as
before and substituted, together with the observed frequencies, into the chi-squared
statistic

χ² = Σ_i (o_i − e_i)²/e_i,

with

v = (2 − 1)(k − 1) = k − 1

degrees of freedom.
By selecting the appropriate upper-tail critical region of the form χ² > χ²_α, we
can now reach a decision concerning H0.
Example 10.15: In a shop study, a set of data was collected to determine whether or not the
proportion of defectives produced was the same for workers on the day, evening,
and night shifts. The data collected are shown in Table 10.11.
Table 10.11: Data for Example 10.15
Shift: Day Evening Night
Defectives 45 55 70
Nondefectives 905 890 870
Use a 0.025 level of significance to determine if the proportion of defectives is the
same for all three shifts.
Solution: Let p1, p2, and p3 represent the true proportions of defectives for the day, evening,
and night shifts, respectively.
1. H0: p1 = p2 = p3.
2. H1: p1, p2, and p3 are not all equal.
3. α = 0.025.
4. Critical region: χ² > 7.378 for v = 2 degrees of freedom.
5. Computations: Corresponding to the observed frequencies o1 = 45 and o2 =
55, we find

e1 = (950)(170)/2835 = 57.0  and  e2 = (945)(170)/2835 = 56.7.
All other expected frequencies are found by subtraction and are displayed in
Table 10.12.
Table 10.12: Observed and Expected Frequencies

Shift:          Day           Evening       Night         Total
Defectives       45 (57.0)     55 (56.7)     70 (56.3)      170
Nondefectives   905 (893.0)   890 (888.3)   870 (883.7)    2665
Total           950           945           940            2835
Now

χ² = (45 − 57.0)²/57.0 + (55 − 56.7)²/56.7 + (70 − 56.3)²/56.3
   + (905 − 893.0)²/893.0 + (890 − 888.3)²/888.3 + (870 − 883.7)²/883.7 = 6.29,

P ≈ 0.04.
6. Decision: We do not reject H0 at α = 0.025. Nevertheless, with the above
P-value computed, it would certainly be dangerous to conclude that the pro-
portion of defectives produced is the same for all shifts.
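The same computation can be scripted. The Python sketch below (ours) recomputes the shift test and its P-value, using the fact that for v = 2 the chi-squared survival function reduces to exp(−x/2). Without the hand-rounding of expected counts it gives χ² ≈ 6.23 and P ≈ 0.044, in line with the values above:

```python
from math import exp

# Shift data of Table 10.11 laid out as a 2 x 3 table
table = [[45, 55, 70],       # defectives
         [905, 890, 870]]    # nondefectives

row_t = [sum(r) for r in table]
col_t = [sum(c) for c in zip(*table)]
grand = sum(row_t)

chi2 = 0.0
for i, row in enumerate(table):
    for j, o in enumerate(row):
        e = row_t[i] * col_t[j] / grand
        chi2 += (o - e) ** 2 / e

# For v = 2 degrees of freedom, P(chi-squared > x) = exp(-x/2)
p_value = exp(-chi2 / 2)
print(round(chi2, 2), round(p_value, 3))  # about 6.23 and 0.044
```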
Often a complete study involving the use of statistical methods in hypothesis
testing can be illustrated for the scientist or engineer using both test statistics,
complete with P-values and statistical graphics. The graphics supplement the
numerical diagnostics with pictures that show intuitively why the P-values appear
as they do, as well as how reasonable (or not) the operative assumptions are.
10.14 Two-Sample Case Study
In this section, we consider a study involving a thorough graphical and formal anal-
ysis, along with annotated computer printout and conclusions. In a data analysis
study conducted by personnel at the Statistics Consulting Center at Virginia Tech,
two different materials, alloy A and alloy B, were compared in terms of breaking
strength. Alloy B is more expensive, but it should certainly be adopted if it can
be shown to be stronger than alloy A. The consistency of performance of the two
alloys should also be taken into account.
Random samples of beams made from each alloy were selected, and strength
was measured in units of 0.001-inch deflection as a fixed force was applied at both
ends of the beam. Twenty specimens were used for each of the two alloys. The
data are given in Table 10.13.
It is important that the engineer compare the two alloys. Of concern is average
strength and reproducibility. It is of interest to determine if there is a severe
Table 10.13: Data for Two-Sample Case Study

     Alloy A              Alloy B
88   82   87         75   81   80
79   85   90         77   78   81
84   88   83         86   78   77
89   80   81         84   82   78
81   85              80   80
83   87              78   76
82   80              83   85
79   78              76   79
violation of the normality assumption required of both the t- and F-tests. Figures
10.21 and 10.22 are normal quantile-quantile plots of the samples of the two alloys.
There does not appear to be any serious violation of the normality assumption.
In addition, Figure 10.23 shows two box-and-whisker plots on the same graph. The
box-and-whisker plots suggest that there is no appreciable difference in the vari-
ability of deflection for the two alloys. However, it seems that the mean deflection
for alloy B is significantly smaller, suggesting, at least graphically, that alloy B is
stronger. The sample means and standard deviations are
ȳA = 83.55, sA = 3.663; ȳB = 79.70, sB = 3.097.
The SAS printout for the PROC TTEST is shown in Figure 10.24. The F-test
suggests no significant difference in variances (P = 0.4709), and the two-sample
t-statistic for testing
H0: μA = μB,
H1: μA > μB
(t = 3.59, P = 0.0009) rejects H0 in favor of H1 and thus confirms what the
graphical information suggests. Here we use the t-test that pools the two-sample
variances together in light of the results of the F-test. On the basis of this analysis,
the adoption of alloy B would seem to be in order.
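The SAS results are easy to confirm by hand. The following Python sketch (not part of the original study) recomputes the sample means, the F ratio for equality of variances, and the pooled t statistic from the data of Table 10.13:

```python
from math import sqrt

alloy_a = [88, 82, 87, 79, 85, 90, 84, 88, 83, 89,
           80, 81, 81, 85, 83, 87, 82, 80, 79, 78]
alloy_b = [75, 81, 80, 77, 78, 81, 86, 78, 77, 84,
           82, 78, 80, 80, 78, 76, 83, 85, 76, 79]

def mean_var(x):
    """Sample mean and (n-1)-denominator sample variance."""
    m = sum(x) / len(x)
    return m, sum((xi - m) ** 2 for xi in x) / (len(x) - 1)

ma, va = mean_var(alloy_a)
mb, vb = mean_var(alloy_b)
n = len(alloy_a)                       # both samples have 20 beams

f_ratio = va / vb                      # F statistic for equality of variances
sp2 = ((n - 1) * va + (n - 1) * vb) / (2 * n - 2)   # pooled variance
t = (ma - mb) / sqrt(sp2 * (2 / n))    # pooled two-sample t statistic
print(round(ma, 2), round(mb, 2))      # 83.55 79.7
print(round(f_ratio, 2), round(t, 2))  # 1.4 3.59
```

These match the means, F value, and t value in the printout of Figure 10.24.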
Statistical Significance and Engineering or Scientific Significance
While the statistician may feel quite comfortable with the results of the comparison
between the two alloys in the case study above, a dilemma remains for the engineer.
The analysis demonstrated a statistically significant improvement with the use
of alloy B. However, is the difference found really worth pursuing, since alloy
B is more expensive? This illustration highlights a very important issue often
overlooked by statisticians and data analysts—the distinction between statistical
significance and engineering or scientific significance. Here the average difference
in deflection is ȳA − ȳB = 0.00385 inch. In a complete analysis, the engineer must
determine if the difference is sufficient to justify the extra cost in the long run.
This is an economic and engineering issue. The reader should understand that a
statistically significant difference merely implies that the difference in the sample
Figure 10.21: Normal quantile-quantile plot of data for alloy A.

Figure 10.22: Normal quantile-quantile plot of data for alloy B.

Figure 10.23: Box-and-whisker plots for both alloys.
means found in the data could hardly have occurred by chance. It does not imply
that the difference in the population means is profound or particularly significant in
the context of the problem. For example, in Section 10.4, an annotated computer
printout was used to show evidence that a pH meter was, in fact, biased. That
is, it does not demonstrate a mean pH of 7.00 for the material on which it was
tested. But the variability among the observations in the sample is very small.
The engineer may decide that the small deviations from 7.0 render the pH meter
adequate.
The TTEST Procedure

Alloy     N   Mean    Std Dev   Std Err
Alloy A   20  83.55   3.6631    0.8191
Alloy B   20  79.7    3.0967    0.6924

Variances   DF   t Value   Pr > |t|
Equal       38   3.59      0.0009
Unequal     37   3.59      0.0010

Equality of Variances
Num DF   Den DF   F Value   Pr > F
19       19       1.40      0.4709

Figure 10.24: Annotated SAS printout for alloy data.
Exercises
10.79 A machine is supposed to mix peanuts, hazel-
nuts, cashews, and pecans in the ratio 5:2:2:1. A can
containing 500 of these mixed nuts was found to have
269 peanuts, 112 hazelnuts, 74 cashews, and 45 pecans.
At the 0.05 level of significance, test the hypothesis
that the machine is mixing the nuts in the ratio 5:2:2:1.
10.80 The grades in a statistics course for a particu-
lar semester were as follows:
Grade A B C D F
f 14 18 32 20 16
Test the hypothesis, at the 0.05 level of significance,
that the distribution of grades is uniform.
10.81 A die is tossed 180 times with the following
results:
x 1 2 3 4 5 6
f 28 36 36 30 27 23
Is this a balanced die? Use a 0.01 level of significance.
10.82 Three marbles are selected from an urn con-
taining 5 red marbles and 3 green marbles. After the
number X of red marbles is recorded, the marbles are
replaced in the urn and the experiment repeated 112
times. The results obtained are as follows:
x 0 1 2 3
f 1 31 55 25
Test the hypothesis, at the 0.05 level of significance,
that the recorded data may be fitted by the hypergeo-
metric distribution h(x; 8, 3, 5), x = 0, 1, 2, 3.
10.83 A coin is thrown until a head occurs and the
number X of tosses recorded. After repeating the ex-
periment 256 times, we obtained the following results:
x 1 2 3 4 5 6 7 8
f 136 60 34 12 9 1 3 1
Test the hypothesis, at the 0.05 level of significance,
that the observed distribution of X may be fitted by
the geometric distribution g(x; 1/2), x = 1, 2, 3, . . . .
10.84 For Exercise 1.18 on page 31, test the good-
ness of fit between the observed class frequencies and
the corresponding expected frequencies of a normal dis-
tribution with μ = 65 and σ = 21, using a 0.05 level of
significance.
10.85 For Exercise 1.19 on page 31, test the good-
ness of fit between the observed class frequencies and
the corresponding expected frequencies of a normal dis-
tribution with μ = 1.8 and σ = 0.4, using a 0.01 level
of significance.
10.86 In an experiment to study the dependence of
hypertension on smoking habits, the following data
were taken on 180 individuals:
                  Nonsmokers   Moderate Smokers   Heavy Smokers
Hypertension          21              36                30
No hypertension       48              26                19
Test the hypothesis that the presence or absence of hy-
pertension is independent of smoking habits. Use a
0.05 level of significance.
10.87 A random sample of 90 adults is classified ac-
cording to gender and the number of hours of television
watched during a week:
Gender
Male Female
Over 25 hours 15 29
Under 25 hours 27 19
Use a 0.01 level of significance and test the hypothesis
that the time spent watching television is independent
of whether the viewer is male or female.
10.88 A random sample of 200 married men, all re-
tired, was classified according to education and number
of children:
Number of Children
Education 0–1 2–3 Over 3
Elementary 14 37 32
Secondary 19 42 17
College 12 17 10
Test the hypothesis, at the 0.05 level of significance,
that the size of a family is independent of the level of
education attained by the father.
10.89 A criminologist conducted a survey to deter-
mine whether the incidence of certain types of crime
varied from one part of a large city to another. The
particular crimes of interest were assault, burglary,
larceny, and homicide. The following table shows the
numbers of crimes committed in four areas of the city
during the past year.
Type of Crime
District Assault Burglary Larceny Homicide
1 162 118 451 18
2 310 196 996 25
3 258 193 458 10
4 280 175 390 19
Can we conclude from these data at the 0.01 level of
significance that the occurrence of these types of crime
is dependent on the city district?
10.90 According to a Johns Hopkins University study
published in the American Journal of Public Health,
widows live longer than widowers. Consider the fol-
lowing survival data collected on 100 widows and 100
widowers following the death of a spouse:
Years Lived Widow Widower
Less than 5 25 39
5 to 10 42 40
More than 10 33 21
Can we conclude at the 0.05 level of significance that
the proportions of widows and widowers are equal with
respect to the different time periods that a spouse sur-
vives after the death of his or her mate?
10.91 The following responses concerning the stan-
dard of living at the time of an independent opinion
poll of 1000 households versus one year earlier seem to
be in agreement with the results of a study published
in Across the Board (June 1981):
                     Standard of Living
Period        Somewhat Better   Same   Not as Good   Total
1980: Jan.          72           144        84        300
      May           63           135       102        300
      Sept.         47           100        53        200
1981: Jan.          40           105        55        200
Test the hypothesis that the proportions of households
within each standard of living category are the same
for each of the four time periods. Use a P-value.
10.92 A college infirmary conducted an experiment
to determine the degree of relief provided by three
cough remedies. Each cough remedy was tried on 50
students and the following data recorded:
Cough Remedy
NyQuil Robitussin Triaminic
No relief 11 13 9
Some relief 32 28 27
Total relief 7 9 14
Test the hypothesis that the three cough remedies are
equally effective. Use a P-value in your conclusion.
10.93 To determine current attitudes about prayer
in public schools, a survey was conducted in four Vir-
ginia counties. The following table gives the attitudes
of 200 parents from Craig County, 150 parents from
Giles County, 100 parents from Franklin County, and
100 parents from Montgomery County:
County
Attitude Craig Giles Franklin Mont.
Favor 65 66 40 34
Oppose 42 30 33 42
No opinion 93 54 27 24
Test for homogeneity of attitudes among the four coun-
ties concerning prayer in the public schools. Use a P-
value in your conclusion.
10.94 A survey was conducted in Indiana, Kentucky,
and Ohio to determine the attitude of voters concern-
ing school busing. A poll of 200 voters from each of
these states yielded the following results:
                  Voter Attitude
State      Support   Do Not Support   Undecided
Indiana       82            97            21
Kentucky     107            66            27
Ohio          93            74            33
At the 0.05 level of significance, test the null hypothe-
sis that the proportions of voters within each attitude
category are the same for each of the three states.
384 Chapter 10 One- and Two-Sample Tests of Hypotheses
10.95 A survey was conducted in two Virginia cities
to determine voter sentiment about two gubernatorial
candidates in an upcoming election. Five hundred vot-
ers were randomly selected from each city and the fol-
lowing data were recorded:
                      City
Voter Sentiment   Richmond   Norfolk
Favor A              204       225
Favor B              211       198
Undecided             85        77
At the 0.05 level of significance, test the null hypoth-
esis that proportions of voters favoring candidate A,
favoring candidate B, and undecided are the same for
each city.
10.96 In a study to estimate the proportion of wives
who regularly watch soap operas, it is found that 52
of 200 wives in Denver, 31 of 150 wives in Phoenix,
and 37 of 150 wives in Rochester watch at least one
soap opera. Use a 0.05 level of significance to test the
hypothesis that there is no difference among the true
proportions of wives who watch soap operas in these
three cities.
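Exercises 10.90 through 10.96 are all chi-squared tests of homogeneity on contingency tables. As an illustrative sketch (not part of the original text), the data of Exercise 10.96 can be analyzed with SciPy; the choice of scipy.stats.chi2_contingency here is ours, not the book's:

```python
from scipy.stats import chi2_contingency

# Exercise 10.96: wives who watch at least one soap opera, by city.
# Rows: watch / do not watch; columns: Denver, Phoenix, Rochester.
observed = [
    [52, 31, 37],       # watch at least one soap opera
    [148, 119, 113],    # do not (200 - 52, 150 - 31, 150 - 37)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.3f}, df = {dof}, P-value = {p_value:.3f}")
# At the 0.05 level, H0 (equal proportions) is rejected only if p_value < 0.05.
```

Since the P-value here is well above 0.05, the test fails to reject the hypothesis of equal proportions.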
Review Exercises
10.97 State the null and alternative hypotheses to be
used in testing the following claims and determine gen-
erally where the critical region is located:
(a) The mean snowfall at Lake George during the
month of February is 21.8 centimeters.
(b) No more than 20% of the faculty at the local uni-
versity contributed to the annual giving fund.
(c) On the average, children attend schools within 6.2
kilometers of their homes in suburban St. Louis.
(d) At least 70% of next year’s new cars will be in the
compact and subcompact category.
(e) The proportion of voters favoring the incumbent in
the upcoming election is 0.58.
(f) The average rib-eye steak at the Longhorn Steak
house weighs at least 340 grams.
10.98 A geneticist is interested in the proportions of
males and females in a population who have a cer-
tain minor blood disorder. In a random sample of 100
males, 31 are found to be afflicted, whereas only 24 of
100 females tested have the disorder. Can we conclude
at the 0.01 level of significance that the proportion of
men in the population afflicted with this blood disorder
is significantly greater than the proportion of women
afflicted?
10.99 A study was made to determine whether more
Italians than Americans prefer white champagne to
pink champagne at weddings. Of the 300 Italians
selected at random, 72 preferred white champagne,
and of the 400 Americans selected, 70 preferred white
champagne. Can we conclude that a higher proportion
of Italians than Americans prefer white champagne at
weddings? Use a 0.05 level of significance.
10.100 Consider the situation of Exercise 10.54 on
page 360. Oxygen consumption, in mL/kg/min, was
also measured.
Subject With CO Without CO
1 26.46 25.41
2 17.46 22.53
3 16.32 16.32
4 20.19 27.48
5 19.84 24.97
6 20.65 21.77
7 28.21 28.17
8 33.94 32.02
9 29.32 28.96
It is conjectured that oxygen consumption should be
higher in an environment relatively free of CO. Do a
significance test and discuss the conjecture.
10.101 In a study analyzed by the Statistics Consult-
ing Center at Virginia Tech, a group of subjects was
asked to complete a certain task on the computer. The
response measured was the time to completion. The
purpose of the experiment was to test a set of facilita-
tion tools developed by the Department of Computer
Science at the university. There were 10 subjects in-
volved. With a random assignment, five were given a
standard procedure using Fortran language for comple-
tion of the task. The other five were asked to do the
task with the use of the facilitation tools. The data on
the completion times for the task are given here.
Group 1 Group 2
(Standard Procedure) (Facilitation Tool)
161 132
169 162
174 134
158 138
163 133
Assuming that the population distributions are nor-
mal and variances are the same for the two groups,
support or refute the conjecture that the facilitation
tools increase the speed with which the task can be
accomplished.
10.102 State the null and alternative hypotheses to
be used in testing the following claims, and determine
generally where the critical region is located:
(a) At most, 20% of next year’s wheat crop will be
exported to the Soviet Union.
(b) On the average, American homemakers drink 3
cups of coffee per day.
(c) The proportion of college graduates in Virginia this
year who majored in the social sciences is at least
0.15.
(d) The average donation to the American Lung Asso-
ciation is no more than $10.
(e) Residents in suburban Richmond commute, on the
average, 15 kilometers to their place of employ-
ment.
10.103 If one can containing 500 nuts is selected
at random from each of three different distributors
of mixed nuts and there are, respectively, 345, 313,
and 359 peanuts in each of the cans, can we conclude
at the 0.01 level of significance that the mixed nuts
of the three distributors contain equal proportions of
peanuts?
10.104 A study was made to determine whether there
is a difference between the proportions of parents in
the states of Maryland (MD), Virginia (VA), Georgia
(GA), and Alabama (AL) who favor placing Bibles in
the elementary schools. The responses of 100 parents
selected at random in each of these states are recorded
in the following table:
State
Preference MD VA GA AL
Yes 65 71 78 82
No 35 29 22 18
Can we conclude that the proportions of parents who
favor placing Bibles in the schools are the same for
these four states? Use a 0.01 level of significance.
10.105 A study was conducted at the Virginia-
Maryland Regional College of Veterinary Medicine
Equine Center to determine if the performance of a
certain type of surgery on young horses had any effect
on certain kinds of blood cell types in the animal. Fluid
samples were taken from each of six foals before and af-
ter surgery. The samples were analyzed for the number
of postoperative white blood cell (WBC) leukocytes.
A preoperative count of WBC leukocytes was also
recorded. The data are given as follows:
Foal Presurgery* Postsurgery*
1 10.80 10.60
2 12.90 16.60
3 9.59 17.20
4 8.81 14.00
5 12.00 10.60
6 6.07 8.60
*All values × 10⁻³.
Use a paired sample t-test to determine if there is a sig-
nificant change in WBC leukocytes with the surgery.
10.106 A study was conducted at the Department of
Health and Physical Education at Virginia Tech to de-
termine if 8 weeks of training truly reduces the choles-
terol levels of the participants. A treatment group con-
sisting of 15 people was given lectures twice a week
on how to reduce cholesterol level. Another group of
18 people of similar age was randomly selected as a
control group. All participants’ cholesterol levels were
recorded at the end of the 8-week program and are
listed below.
Treatment:
129 131 154 172 115 126 175 191
122 238 159 156 176 175 126
Control:
151 132 196 195 188 198 187 168 115
165 137 208 133 217 191 193 140 146
Can we conclude, at the 5% level of significance, that
the average cholesterol level has been reduced due to
the program? Make the appropriate test on means.
10.107 In a study conducted by the Department of
Mechanical Engineering and analyzed by the Statistics
Consulting Center at Virginia Tech, steel rods supplied
by two different companies were compared. Ten sam-
ple springs were made out of the steel rods supplied by
each company, and the “bounciness” was studied. The
data are as follows:
Company A:
9.3 8.8 6.8 8.7 8.5 6.7 8.0 6.5 9.2 7.0
Company B:
11.0 9.8 9.9 10.2 10.1 9.7 11.0 11.1 10.2 9.6
Can you conclude that there is virtually no difference
in means between the steel rods supplied by the two
companies? Use a P-value to reach your conclusion.
Should variances be pooled here?
10.108 In a study conducted by the Water Resources
Center and analyzed by the Statistics Consulting Cen-
ter at Virginia Tech, two different wastewater treat-
ment plants are compared. Plant A is located where
the median household income is below $22,000 a year,
and plant B is located where the median household
income is above $60,000 a year. The amount of waste-
water treated at each plant (thousands of gallons/day)
was randomly sampled for 10 days. The data are as
follows:
Plant A:
21 19 20 23 22 28 32 19 13 18
Plant B:
20 39 24 33 30 28 30 22 33 24
Can we conclude, at the 5% level of significance, that
the average amount of wastewater treated at the plant
in the high-income neighborhood is more than that
treated at the plant in the low-income area? Assume
normality.
10.109 The following data show the numbers of de-
fects in 100,000 lines of code in a particular type of
software program developed in the United States and
Japan. Is there enough evidence to claim that there is a
significant difference between the programs developed
in the two countries? Test on means. Should variances
be pooled?
U.S. 48 39 42 52 40 48 52 52
54 48 52 55 43 46 48 52
Japan 50 48 42 40 43 48 50 46
38 38 36 40 40 48 48 45
10.110 Studies show that the concentration of PCBs
is much higher in malignant breast tissue than in
normal breast tissue. If a study of 50 women with
breast cancer reveals an average PCB concentration
of 22.8 × 10⁻⁴ gram, with a standard deviation of
4.8 × 10⁻⁴ gram, is the mean concentration of PCBs
less than 24 × 10⁻⁴ gram?
10.111 z-Value for Testing p1 − p2 = d0: To test
the null hypothesis H0 that p1 − p2 = d0, where d0 ≠ 0,
we base our decision on

z = (p̂1 − p̂2 − d0) / √(p̂1q̂1/n1 + p̂2q̂2/n2),
which is a value of a random variable whose distribu-
tion approximates the standard normal distribution as
long as n1 and n2 are both large. With reference to
Example 10.11 on page 364, test the hypothesis that
the percentage of town voters favoring the construction
of the chemical plant will not exceed the percentage of
county voters by more than 3%. Use a P-value in your
conclusion.
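The statistic of Exercise 10.111 is simple to compute directly. The sketch below uses purely illustrative counts (120 of 200 and 240 of 500 in favor, with d0 = 0.03); these numbers are our assumption for demonstration and are not taken from Example 10.11:

```python
import math

def z_for_diff(x1, n1, x2, n2, d0):
    """z statistic for H0: p1 - p2 = d0 (d0 != 0), as in Exercise 10.111."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2 - d0) / se

# Illustrative counts (hypothetical):
z = z_for_diff(120, 200, 240, 500, d0=0.03)
# One-sided P-value, P(Z > z), via the complementary error function.
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(f"z = {z:.3f}, one-sided P-value = {p_value:.4f}")
```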
10.15 Potential Misconceptions and Hazards;
Relationship to Material in Other Chapters
One of the easiest ways to misuse statistics relates to the final scientific conclusion
drawn when the analyst does not reject the null hypothesis H0. In this text, we
have attempted to make clear what the null hypothesis means and what the al-
ternative means, and to stress that, in a large sense, the alternative hypothesis is
much more important. Put in the form of an example, if an engineer is attempt-
ing to compare two gauges using a two-sample t-test, and H0 is “the gauges are
equivalent” while H1 is “the gauges are not equivalent,” not rejecting H0 does
not lead to the conclusion of equivalent gauges. In fact, a case can be made for
never writing or saying “accept H0”! Not rejecting H0 merely implies insufficient
evidence. Depending on the nature of the hypothesis, a lot of possibilities are still
not ruled out.
In Chapter 9, we considered the case of the large-sample confidence interval
using
z = (x̄ − μ)/(s/√n).
In hypothesis testing, replacing σ by s for n < 30 is risky. If n ≥ 30 and the
distribution is not normal but somehow close to normal, the Central Limit Theorem
is being called upon and one is relying on the fact that with n ≥ 30, s ≈ σ. Of
course, any t-test is accompanied by the concomitant assumption of normality.
As in the case of confidence intervals, the t-test is relatively robust to normality.
However, one should still use normal probability plotting, goodness-of-fit tests, or
other graphical procedures when the sample is not too small.
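Such checks are easy to automate. The sketch below (our own illustration; the text itself prescribes no software) pairs a Shapiro–Wilk goodness-of-fit test with the correlation from a normal probability plot, using SciPy:

```python
import numpy as np
from scipy import stats

def check_normality(sample):
    """Shapiro-Wilk P-value and normal-probability-plot correlation."""
    _, p = stats.shapiro(sample)
    # probplot pairs the ordered data with normal quantiles and fits a line.
    (_osm, _osr), (_slope, _intercept, r) = stats.probplot(sample, dist="norm")
    return p, r  # a small p, or r far below 1, casts doubt on normality

# Idealized example: 20 points placed exactly at normal quantiles.
sample = stats.norm.ppf((np.arange(1, 21) - 0.5) / 20)
p, r = check_normality(sample)
print(f"Shapiro-Wilk P-value = {p:.3f}, probplot correlation = {r:.4f}")
```

On real data, one would pass the observed sample in place of the idealized one and inspect the plot itself, not only the summary numbers.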
Most of the chapters in this text include discussions whose purpose is to relate
the chapter in question to other material that will follow. The topics of estimation
and hypothesis testing are both used in a major way in nearly all of the tech-
niques that fall under the umbrella of “statistical methods.” This will be readily
noted by students who advance to Chapters 11 through 16. It will be obvious
that these chapters depend heavily on statistical modeling. Students will be ex-
posed to the use of modeling in a wide variety of applications in many scientific
and engineering fields. It will become obvious quite quickly that the framework
of a statistical model is useless unless data are available with which to estimate
parameters in the formulated model. This will become particularly apparent in
Chapters 11 and 12 as we introduce the notion of regression models. The concepts
and theory associated with Chapter 9 will carry over. As far as material in the
present chapter is concerned, the framework of hypothesis testing, P-values, power
of tests, and choice of sample size will collectively play a major role. Since initial
model formulation quite often must be supplemented by model editing before the
analyst is sufficiently comfortable to use the model for either process understand-
ing or prediction, Chapters 11, 12, and 15 make major use of hypothesis testing to
supplement diagnostic measures that are used to assess model quality.
Chapter 11
Simple Linear Regression and
Correlation
11.1 Introduction to Linear Regression
Often, in practice, one is called upon to solve problems involving sets of variables
when it is known that there exists some inherent relationship among the variables.
For example, in an industrial situation it may be known that the tar content in the
outlet stream in a chemical process is related to the inlet temperature. It may be
of interest to develop a method of prediction, that is, a procedure for estimating
the tar content for various levels of the inlet temperature from experimental infor-
mation. Now, of course, it is highly likely that for many example runs in which
the inlet temperature is the same, say 130°C, the outlet tar content will not be the
same. This is much like what happens when we study several automobiles with
the same engine volume. They will not all have the same gas mileage. Houses in
the same part of the country that have the same square footage of living space
will not all be sold for the same price. Tar content, gas mileage (mpg), and the
price of houses (in thousands of dollars) are natural dependent variables, or
responses, in these three scenarios. Inlet temperature, engine volume (cubic feet),
and square feet of living space are, respectively, natural independent variables,
or regressors. A reasonable form of a relationship between the response Y and
the regressor x is the linear relationship
Y = β0 + β1x,
where, of course, β0 is the intercept and β1 is the slope. The relationship is
illustrated in Figure 11.1.
If the relationship is exact, then it is a deterministic relationship between
two scientific variables and there is no random or probabilistic component to it.
However, in the examples listed above, as well as in countless other scientific and
engineering phenomena, the relationship is not deterministic (i.e., a given x does
not always give the same value for Y ). As a result, important problems here
are probabilistic in nature since the relationship above cannot be viewed as being
exact. The concept of regression analysis deals with finding the best relationship
389
Figure 11.1: A linear relationship; β0: intercept; β1: slope.
between Y and x, quantifying the strength of that relationship, and using methods
that allow for prediction of the response values given values of the regressor x.
In many applications, there will be more than one regressor (i.e., more than
one independent variable that helps to explain Y ). For example, in the case
where the response is the price of a house, one would expect the age of the house
to contribute to the explanation of the price, so in this case the multiple regression
structure might be written
Y = β0 + β1x1 + β2x2,
where Y is price, x1 is square footage, and x2 is age in years. In the next chap-
ter, we will consider problems with multiple regressors. The resulting analysis
is termed multiple regression, while the analysis of the single regressor case is
called simple regression. As a second illustration of multiple regression, a chem-
ical engineer may be concerned with the amount of hydrogen lost from samples
of a particular metal when the material is placed in storage. In this case, there
may be two inputs, storage time x1 in hours and storage temperature x2 in degrees
centigrade. The response would then be hydrogen loss Y in parts per million.
In this chapter, we deal with the topic of simple linear regression, treating
only the case of a single regressor variable in which the relationship between y and
x is linear. For the case of more than one regressor variable, the reader is referred to
Chapter 12. Denote a random sample of size n by the set {(xi, yi); i = 1, 2, . . . , n}.
If additional samples were taken using exactly the same values of x, we should
expect the y values to vary. Hence, the value yi in the ordered pair (xi, yi) is a
value of some random variable Yi.
11.2 The Simple Linear Regression (SLR) Model
We have already confined the terminology regression analysis to situations in which
relationships among variables are not deterministic (i.e., not exact). In other words,
there must be a random component to the equation that relates the variables.
This random component takes into account considerations that are not being mea-
sured or, in fact, are not understood by the scientists or engineers. Indeed, in most
applications of regression, the linear equation, say Y = β0 + β1x, is an approxima-
tion that is a simplification of something unknown and much more complicated.
For example, in our illustration involving the response Y = tar content and x =
inlet temperature, Y = β0 + β1x is likely a reasonable approximation that may be
operative within a confined range on x. More often than not, the models that are
simplifications of more complicated and unknown structures are linear in nature
(i.e., linear in the parameters β0 and β1 or, in the case of the model involving the
price, size, and age of the house, linear in the parameters β0, β1, and β2). These
linear structures are simple and empirical in nature and are thus called empirical
models.
An analysis of the relationship between Y and x requires the statement of a
statistical model. A model is often used by a statistician as a representation of
an ideal that essentially defines how we perceive that the data were generated by
the system in question. The model must include the set {(xi, yi); i = 1, 2, . . . , n}
of data involving n pairs of (x, y) values. One must bear in mind that the value yi
depends on xi via a linear structure that also has the random component involved.
The basis for the use of a statistical model relates to how the random variable
Y moves with x and the random component. The model also includes what is
assumed about the statistical properties of the random component. The statistical
model for simple linear regression is given below. The response Y is related to the
independent variable x through the equation
Simple Linear Regression Model: Y = β0 + β1x + ε.
In the above, β0 and β1 are unknown intercept and slope parameters, respectively,
and ε is a random variable that is assumed to be distributed with E(ε) = 0 and
Var(ε) = σ². The quantity σ² is often called the error variance or residual variance.
From the model above, several things become apparent. The quantity Y is
a random variable since ε is random. The value x of the regressor variable is
not random and, in fact, is measured with negligible error. The quantity ε, often
called a random error or random disturbance, has constant variance. This
portion of the assumptions is often called the homogeneous variance assumption.
The presence of this random error, ε, keeps the model from becoming simply
a deterministic equation. Now, the fact that E() = 0 implies that at a specific
x the y-values are distributed around the true, or population, regression line
y = β0 + β1x. If the model is well chosen (i.e., there are no additional important
regressors and the linear approximation is good within the ranges of the data),
then positive and negative errors around the true regression are reasonable. We
must keep in mind that in practice β0 and β1 are not known and must be estimated
from data. In addition, the model described above is conceptual in nature. As a
result, we never observe the actual ε values in practice and thus we can never draw
the true regression line (but we assume it is there). We can only draw an estimated
line. Figure 11.2 depicts the nature of hypothetical (x, y) data scattered around a
true regression line for a case in which only n = 5 observations are available. Let
us emphasize that what we see in Figure 11.2 is not the line that is used by the
scientist or engineer. Rather, the picture merely describes what the assumptions
mean! The regression that the user has at his or her disposal will now be described.
Figure 11.2: Hypothetical (x, y) data scattered around the true regression line for
n = 5.
The Fitted Regression Line
An important aspect of regression analysis is, very simply, to estimate the parame-
ters β0 and β1 (i.e., estimate the so-called regression coefficients). The method
of estimation will be discussed in the next section. Suppose we denote the esti-
mates b0 for β0 and b1 for β1. Then the estimated or fitted regression line is
given by
ŷ = b0 + b1x,
where ŷ is the predicted or fitted value. Obviously, the fitted line is an estimate
of the true regression line. We expect that the fitted line should be closer to the
true regression line when a large amount of data are available. In the following
example, we illustrate the fitted line for a real-life pollution study.
One of the more challenging problems confronting the water pollution control
field is presented by the tanning industry. Tannery wastes are chemically complex.
They are characterized by high values of chemical oxygen demand, volatile solids,
and other pollution measures. Consider the experimental data in Table 11.1, which
were obtained from 33 samples of chemically treated waste in a study conducted
at Virginia Tech. Readings on x, the percent reduction in total solids, and y, the
percent reduction in chemical oxygen demand, were recorded.
The data of Table 11.1 are plotted in a scatter diagram in Figure 11.3. From
an inspection of this scatter diagram, it can be seen that the points closely follow a
straight line, indicating that the assumption of linearity between the two variables
appears to be reasonable.
Table 11.1: Measures of Reduction in Solids and Oxygen Demand
Solids Reduction,   Oxygen Demand        Solids Reduction,   Oxygen Demand
x (%)               Reduction, y (%)     x (%)               Reduction, y (%)
  3                        5               36                      34
  7                       11               37                      36
 11                       21               38                      38
 15                       16               39                      37
 18                       16               39                      36
 27                       28               39                      45
 29                       27               40                      39
 30                       25               41                      41
 30                       35               42                      40
 31                       30               42                      44
 31                       40               43                      37
 32                       32               44                      44
 33                       34               45                      46
 33                       32               46                      46
 34                       34               47                      49
 36                       37               50                      51
 36                       38
Figure 11.3: Scatter diagram with regression lines.
The fitted regression line and a hypothetical true regression line are shown on
the scatter diagram of Figure 11.3. This example will be revisited as we move on
to the method of estimation, discussed in Section 11.3.
Another Look at the Model Assumptions
It may be instructive to revisit the simple linear regression model presented previ-
ously and discuss in a graphical sense how it relates to the so-called true regression.
Let us expand on Figure 11.2 by illustrating not merely where the εi fall on a graph
but also what the implication is of the normality assumption on the εi.
Suppose we have a simple linear regression with n = 6 evenly spaced values of x
and a single y-value at each x. Consider the graph in Figure 11.4. This illustration
should give the reader a clear representation of the model and the assumptions
involved. The line in the graph is the true regression line. The points plotted
are actual (y, x) points which are scattered about the line. Each point is on its
own normal distribution with the center of the distribution (i.e., the mean of y)
falling on the line. This is certainly expected since E(Y ) = β0 + β1x. As a result,
the true regression line goes through the means of the response, and the
actual observations are on the distribution around the means. Note also that all
distributions have the same variance, which we referred to as σ². Of course, the
deviation between an individual y and the point on the line will be its individual
ε value. This is clear since

yi − E(Yi) = yi − (β0 + β1xi) = εi.

Thus, at a given x, Y and the corresponding ε both have variance σ².
Figure 11.4: Individual observations around true regression line.
Note also that we have written the true regression line here as μY |x = β0 + β1x
in order to reaffirm that the line goes through the mean of the Y random variable.
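A short simulation makes these assumptions concrete. The sketch below uses arbitrary illustrative values β0 = 2, β1 = 0.5, and σ = 2 (our choices, not the text's) and draws many responses at one fixed x to confirm that their mean falls on μY|x = β0 + β1x and their variance is σ²:

```python
import random
import statistics

random.seed(42)
beta0, beta1, sigma = 2.0, 0.5, 2.0   # illustrative parameter choices
x = 3.0

# Y = beta0 + beta1*x + eps, with eps ~ N(0, sigma^2), sampled many times.
ys = [beta0 + beta1 * x + random.gauss(0.0, sigma) for _ in range(100_000)]

mean_y = statistics.fmean(ys)
var_y = statistics.pvariance(ys)
print(f"mean of Y at x = {x}: {mean_y:.3f}  (theory: {beta0 + beta1 * x})")
print(f"variance of Y:       {var_y:.3f}  (theory: {sigma ** 2})")
```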
11.3 Least Squares and the Fitted Model
In this section, we discuss the method of fitting an estimated regression line to
the data. This is tantamount to the determination of estimates b0 for β0 and b1
for β1. This of course allows for the computation of predicted values from the
fitted line ŷ = b0 + b1x and other types of analyses and diagnostic information
that will ascertain the strength of the relationship and the adequacy of the fitted
model. Before we discuss the method of least squares estimation, it is important
to introduce the concept of a residual. A residual is essentially an error in the fit
of the model ŷ = b0 + b1x.
Residual: Error in Fit
Given a set of regression data {(xi, yi); i = 1, 2, . . . , n} and a fitted model, ŷi =
b0 + b1xi, the ith residual ei is given by
ei = yi − ŷi, i = 1, 2, . . . , n.
Obviously, if a set of n residuals is large, then the fit of the model is not good.
Small residuals are a sign of a good fit. Another interesting relationship which is
useful at times is the following:
yi = b0 + b1xi + ei.
The use of the above equation should result in clarification of the distinction be-
tween the residuals, ei, and the conceptual model errors, εi. One must bear in
mind that whereas the εi are not observed, the ei not only are observed but also
play an important role in the total analysis.
Figure 11.5 depicts the line fit to this set of data, namely ŷ = b0 + b1x, and the
line reflecting the model μY |x = β0 + β1x. Now, of course, β0 and β1 are unknown
parameters. The fitted line is an estimate of the line produced by the statistical
model. Keep in mind that the line μY |x = β0 + β1x is not known.
Figure 11.5: Comparing εi with the residual, ei.
The Method of Least Squares
We shall find b0 and b1, the estimates of β0 and β1, so that the sum of the squares
of the residuals is a minimum. The residual sum of squares is often called the sum
of squares of the errors about the regression line and is denoted by SSE. This
minimization procedure for estimating the parameters is called the method of
least squares. Hence, we shall find b0 and b1 so as to minimize

SSE = Σ_{i=1}^n eᵢ² = Σ_{i=1}^n (yi − ŷi)² = Σ_{i=1}^n (yi − b0 − b1xi)².
Differentiating SSE with respect to b0 and b1, we have

∂(SSE)/∂b0 = −2 Σ_{i=1}^n (yi − b0 − b1xi),    ∂(SSE)/∂b1 = −2 Σ_{i=1}^n (yi − b0 − b1xi)xi.
Setting the partial derivatives equal to zero and rearranging the terms, we obtain
the equations (called the normal equations)

nb0 + b1 Σ_{i=1}^n xi = Σ_{i=1}^n yi,    b0 Σ_{i=1}^n xi + b1 Σ_{i=1}^n xi² = Σ_{i=1}^n xiyi,
which may be solved simultaneously to yield computing formulas for b0 and b1.
Estimating the Regression Coefficients: Given the sample {(xi, yi); i = 1, 2, . . . , n}, the least
squares estimates b0 and b1 of the regression coefficients β0 and β1 are computed
from the formulas

b1 = [n Σ_{i=1}^n xiyi − (Σ_{i=1}^n xi)(Σ_{i=1}^n yi)] / [n Σ_{i=1}^n xi² − (Σ_{i=1}^n xi)²]
   = Σ_{i=1}^n (xi − x̄)(yi − ȳ) / Σ_{i=1}^n (xi − x̄)²

and

b0 = (Σ_{i=1}^n yi − b1 Σ_{i=1}^n xi) / n = ȳ − b1x̄.
The calculations of b0 and b1, using the data of Table 11.1, are illustrated by the
following example.
Example 11.1: Estimate the regression line for the pollution data of Table 11.1.
Solution: From the data,

Σ_{i=1}^{33} xi = 1104,   Σ_{i=1}^{33} yi = 1124,   Σ_{i=1}^{33} xiyi = 41,355,   Σ_{i=1}^{33} xi² = 41,086.

Therefore,

b1 = [(33)(41,355) − (1104)(1124)] / [(33)(41,086) − (1104)²] = 0.903643

and

b0 = [1124 − (0.903643)(1104)] / 33 = 3.829633.

Thus, the estimated regression line is given by

ŷ = 3.8296 + 0.9036x.
Using the regression line of Example 11.1, we would predict a 31% reduction
in the chemical oxygen demand when the reduction in the total solids is 30%. The
31% reduction in the chemical oxygen demand may be interpreted as an estimate
of the population mean μY |30 or as an estimate of a new observation when the
reduction in total solids is 30%. Such estimates, however, are subject to error.
Even if the experiment were controlled so that the reduction in total solids was
30%, it is unlikely that we would measure a reduction in the chemical oxygen
demand exactly equal to 31%. In fact, the original data recorded in Table 11.1
show that measurements of 25% and 35% were recorded for the reduction in oxygen
demand when the reduction in total solids was kept at 30%.
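The arithmetic of Example 11.1 is easily reproduced. A minimal sketch using only the summary sums reported above:

```python
# Summary sums for the n = 33 observations of Table 11.1.
n = 33
sum_x, sum_y = 1104, 1124
sum_xy, sum_x2 = 41_355, 41_086

# Least squares estimates from the computing formulas of Section 11.3.
b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b0 = (sum_y - b1 * sum_x) / n

print(f"fitted line: y-hat = {b0:.4f} + {b1:.4f} x")
print(f"predicted reduction at x = 30: {b0 + 30 * b1:.1f}%")
```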
What Is Good about Least Squares?
It should be noted that the least squares criterion is designed to provide a fitted
line that results in a “closeness” between the line and the plotted points. There
are many ways of measuring closeness. For example, one may wish to determine b0
and b1 for which Σ_{i=1}^n |yi − ŷi| is minimized, or for which Σ_{i=1}^n |yi − ŷi|^1.5 is minimized.
These are both viable and reasonable methods. Note that both of these, as well
as the least squares procedure, result in forcing residuals to be “small” in some
sense. One should remember that the residuals are the empirical counterpart to
the ε values. Figure 11.6 illustrates a set of residuals. One should note that the
fitted line has predicted values as points on the line and hence the residuals are
vertical deviations from points to the line. As a result, the least squares procedure
produces a line that minimizes the sum of squares of vertical deviations
from the points to the line.
Figure 11.6: Residuals as vertical deviations.
Exercises
11.1 A study was conducted at Virginia Tech to de-
termine if certain static arm-strength measures have
an influence on the “dynamic lift” characteristics of an
individual. Twenty-five individuals were subjected to
strength tests and then were asked to perform a weight-
lifting test in which weight was dynamically lifted over-
head. The data are given here.
                 Arm      Dynamic                    Arm      Dynamic
Individual  Strength, x   Lift, y   Individual  Strength, x   Lift, y
     1          17.3        71.7        13          29.0        76.7
     2          19.3        48.3        14          29.6        78.3
     3          19.5        88.3        15          29.9        60.0
     4          19.7        75.0        16          29.9        71.7
     5          22.9        91.7        17          30.3        85.0
     6          23.1       100.0        18          31.3        85.0
     7          26.4        73.3        19          36.0        88.3
     8          26.8        65.0        20          39.5       100.0
     9          27.6        75.0        21          40.4       100.0
    10          28.1        88.3        22          44.3       100.0
    11          28.2        68.3        23          44.6        91.7
    12          28.7        96.7        24          50.4       100.0
                                        25          55.9        71.7
(a) Estimate β0 and β1 for the linear regression curve
μY |x = β0 + β1x.
(b) Find a point estimate of μY |30.
(c) Plot the residuals versus the x’s (arm strength).
Comment.
11.2 The grades of a class of 9 students on a midterm
report (x) and on the final examination (y) are as fol-
lows:
x 77 50 71 72 81 94 96 99 67
y 82 66 78 34 47 85 99 99 68
(a) Estimate the linear regression line.
(b) Estimate the final examination grade of a student
who received a grade of 85 on the midterm report.
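A quick numerical check for an exercise like 11.2 can be sketched as follows; this is our own illustration using the standard least squares formulas, not a method prescribed by the text.

```python
x = [77, 50, 71, 72, 81, 94, 96, 99, 67]   # midterm grades
y = [82, 66, 78, 34, 47, 85, 99, 99, 68]   # final exam grades

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
# Slope and intercept from the corrected sums Sxy and Sxx.
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
     / sum((xi - x_bar) ** 2 for xi in x)
b0 = y_bar - b1 * x_bar
# Point estimate of the final grade for a midterm grade of 85.
y_hat_85 = b0 + b1 * 85
```

This gives a slope near 0.78 and a predicted final examination grade near 78 for a midterm grade of 85.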
11.3 The amounts of a chemical compound y that dis-
solved in 100 grams of water at various temperatures
x were recorded as follows:
x (°C)   y (grams)
   0     8,  6,  8
  15    12, 10, 14
  30    25, 21, 24
  45    31, 33, 28
  60    44, 39, 42
  75    48, 51, 44
(a) Find the equation of the regression line.
(b) Graph the line on a scatter diagram.
(c) Estimate the amount of chemical that will dissolve
in 100 grams of water at 50°C.
11.4 The following data were collected to determine
the relationship between pressure and the correspond-
ing scale reading for the purpose of calibration.
Pressure, x (lb/sq in.) Scale Reading, y
10 13
10 18
10 16
10 15
10 20
50 86
50 90
50 88
50 88
50 92
(a) Find the equation of the regression line.
(b) The purpose of calibration in this application is to
estimate pressure from an observed scale reading.
Estimate the pressure for a scale reading of 54 using
x̂ = (54 − b0)/b1.
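The inverse (calibration) estimate in part (b) can be sketched as follows. This is our illustration of the formula in the hint, not code from the text.

```python
x = [10] * 5 + [50] * 5                            # pressure (lb/sq in.)
y = [13, 18, 16, 15, 20, 86, 90, 88, 88, 92]       # scale readings

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
# Ordinary least squares fit of reading on pressure.
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
     / sum((xi - x_bar) ** 2 for xi in x)
b0 = y_bar - b1 * x_bar
# Invert the fitted line to estimate pressure from a reading of 54.
x_hat = (54 - b0) / b1
```

The fit gives slope 1.81 and intercept −1.70, so a scale reading of 54 corresponds to an estimated pressure of about 30.8 lb/sq in.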
11.5 A study was made on the amount of converted
sugar in a certain process at various temperatures. The
data were coded and recorded as follows:
Temperature, x   Converted Sugar, y
     1.0               8.1
     1.1               7.8
     1.2               8.5
     1.3               9.8
     1.4               9.5
     1.5               8.9
     1.6               8.6
     1.7              10.2
     1.8               9.3
     1.9               9.2
     2.0              10.5
(a) Estimate the linear regression line.
(b) Estimate the mean amount of converted sugar pro-
duced when the coded temperature is 1.75.
(c) Plot the residuals versus temperature. Comment.
11.6 In a certain type of metal test specimen, the nor-
mal stress on a specimen is known to be functionally
related to the shear resistance. The following is a set
of coded experimental data on the two variables:
Normal Stress, x Shear Resistance, y
26.8 26.5
25.4 27.3
28.9 24.2
23.6 27.1
27.7 23.6
23.9 25.9
24.7 26.3
28.1 22.5
26.9 21.7
27.4 21.4
22.6 25.8
25.6 24.9
(a) Estimate the regression line μY |x = β0 + β1x.
(b) Estimate the shear resistance for a normal stress of
24.5.
11.7 The following is a portion of a classic data set
called the “pilot plant data” in Fitting Equations to
Data by Daniel and Wood, published in 1971. The
response y is the acid content of material produced by
titration, whereas the regressor x is the organic acid
content produced by extraction and weighing.
 y    x      y    x
76   123    70   109
62    55    37    48
66   100    82   138
58    75    88   164
88   159    43    28
(a) Plot the data; does it appear that a simple linear
regression will be a suitable model?
(b) Fit a simple linear regression; estimate a slope and
intercept.
(c) Graph the regression line on the plot in (a).
11.8 A mathematics placement test is given to all en-
tering freshmen at a small college. A student who re-
ceives a grade below 35 is denied admission to the regu-
lar mathematics course and placed in a remedial class.
The placement test scores and the final grades for 20
students who took the regular course were recorded.
(a) Plot a scatter diagram.
(b) Find the equation of the regression line to predict
course grades from placement test scores.
(c) Graph the line on the scatter diagram.
(d) If 60 is the minimum passing grade, below which
placement test score should students in the future
be denied admission to this course?
Placement Test Course Grade
50 53
35 41
35 61
40 56
55 68
65 36
35 11
60 70
90 79
35 59
90 54
80 91
60 48
60 71
60 71
40 47
55 53
50 68
65 57
50 79
11.9 A study was made by a retail merchant to deter-
mine the relation between weekly advertising expendi-
tures and sales.
Advertising Costs ($) Sales ($)
40 385
20 400
25 395
20 365
30 475
50 440
40 490
20 420
50 560
40 525
25 480
50 510
(a) Plot a scatter diagram.
(b) Find the equation of the regression line to predict
weekly sales from advertising expenditures.
(c) Estimate the weekly sales when advertising costs
are $35.
(d) Plot the residuals versus advertising costs. Com-
ment.
11.10 The following data are the selling prices z of a
certain make and model of used car w years old. Fit a
curve of the form μ_{z|w} = γδ^w by means of the nonlinear
sample regression equation ẑ = c d^w. [Hint: Write
ln ẑ = ln c + (ln d)w = b0 + b1w.]
w (years) z (dollars) w (years) z (dollars)
1 6350 3 5395
2 5695 5 4985
2 5750 5 4895
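The log-linear trick in the hint of Exercise 11.10 can be sketched like this; the code is our illustration, and the variable names are ours.

```python
import math

w = [1, 2, 2, 3, 5, 5]                        # age in years
z = [6350, 5695, 5750, 5395, 4985, 4895]      # selling price in dollars

# Fit ln(z) = b0 + b1*w by ordinary least squares.
u = [math.log(zi) for zi in z]
n = len(w)
w_bar, u_bar = sum(w) / n, sum(u) / n
b1 = sum((wi - w_bar) * (ui - u_bar) for wi, ui in zip(w, u)) \
     / sum((wi - w_bar) ** 2 for wi in w)
b0 = u_bar - b1 * w_bar
# Back-transform to the multiplicative form z_hat = c * d**w.
c, d = math.exp(b0), math.exp(b1)
```

Here d comes out just under 1, so the fitted price decays by a constant factor each additional year of age.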
11.11 The thrust of an engine (y) is a function of
exhaust temperature (x) in °F when other important
variables are held constant. Consider the following
data.
y x y x
4300 1760 4010 1665
4650 1652 3810 1550
3200 1485 4500 1700
3150 1390 3008 1270
4950 1820
(a) Plot the data.
(b) Fit a simple linear regression to the data and plot
the line through the data.
11.12 A study was conducted to examine the effect of ambi-
ent temperature x on the electric power consumed by
a chemical plant y. Other factors were held constant,
and the data were collected from an experimental pilot
plant.
y (BTU)   x (°F)   y (BTU)   x (°F)
250 27 265 31
285 45 298 60
320 72 267 34
295 58 321 74
(a) Plot the data.
(b) Estimate the slope and intercept in a simple linear
regression model.
(c) Predict power consumption for an ambient temper-
ature of 65°F.
11.13 A study of the amount of rainfall and the quan-
tity of air pollution removed produced the following
data:
Daily Rainfall, x (0.01 cm)   Particulate Removed, y (μg/m³)
4.3 126
4.5 121
5.9 116
5.6 118
6.1 114
5.2 118
3.8 132
2.1 141
7.5 108
(a) Find the equation of the regression line to predict
the particulate removed from the amount of daily
rainfall.
(b) Estimate the amount of particulate removed when
the daily rainfall is x = 4.8 units.
11.14 A professor in the School of Business in a uni-
versity polled a dozen colleagues about the number of
professional meetings they attended in the past five
years (x) and the number of papers they submitted
to refereed journals (y) during the same period. The
summary data are given as follows:
n = 12,  x̄ = 4,  ȳ = 12,  ∑_{i=1}^{n} x_i² = 232,  ∑_{i=1}^{n} x_i y_i = 318.
Fit a simple linear regression model relating y to x by
estimating the intercept and slope. Comment on whether
attending more professional meetings would result in
publishing more papers.
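When only summary statistics are available, as in Exercise 11.14, the estimates follow from the computational formulas b1 = (∑xiyi − n x̄ȳ)/(∑xi² − n x̄²) and b0 = ȳ − b1 x̄. A sketch (our own illustration):

```python
n, x_bar, y_bar = 12, 4, 12
sum_x2, sum_xy = 232, 318

# Computational forms of the least squares estimates.
b1 = (sum_xy - n * x_bar * y_bar) / (sum_x2 - n * x_bar ** 2)
b0 = y_bar - b1 * x_bar
# The slope is negative here, which by itself says nothing causal:
# this is an observational poll, not a designed experiment.
```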
11.4 Properties of the Least Squares Estimators
In addition to the assumptions that the error term in the model
Yi = β0 + β1xi + εi
is a random variable with mean 0 and constant variance σ², suppose that we make
the further assumption that ε1, ε2, . . . , εn are independent from run to run in the
experiment. This provides a foundation for finding the
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners
statistics and Probability Analysis for beginners

More Related Content

PDF
Probability and Statistics by sheldon ross (8th edition).pdf
DOCX
Go to TOCStatistics for the SciencesCharles Peters.docx
PDF
2020-2021 EDA 101 Handout.pdf
PDF
Introduction to Methods of Applied Mathematics
PDF
Introduction to methods of applied mathematics or Advanced Mathematical Metho...
PDF
Navarro & Foxcroft (2018). Learning statistics with jamovi (1).pdf
PDF
probability_stats_for_DS.pdf
PDF
Haltermanpythonbook.pdf
Probability and Statistics by sheldon ross (8th edition).pdf
Go to TOCStatistics for the SciencesCharles Peters.docx
2020-2021 EDA 101 Handout.pdf
Introduction to Methods of Applied Mathematics
Introduction to methods of applied mathematics or Advanced Mathematical Metho...
Navarro & Foxcroft (2018). Learning statistics with jamovi (1).pdf
probability_stats_for_DS.pdf
Haltermanpythonbook.pdf

Similar to statistics and Probability Analysis for beginners (20)

PDF
ResearchMethods_2015. Economic Research for all
PDF
A practical introduction_to_python_programming_heinold
PDF
A practical introduction_to_python_programming_heinold
PDF
A_Practical_Introduction_to_Python_Programming_Heinold.pdf
PDF
A_Practical_Introduction_to_Python_Programming_Heinold.pdf
PDF
A Practical Introduction To Python Programming
PDF
Manual Solution Probability and Statistic Hayter 4th Edition
PDF
thinkcspy3.pdf
PDF
Learn python the right way
PDF
2013McGinnissPhD
PDF
toaz.info-instructor-solution-manual-probability-and-statistics-for-engineers...
PDF
Social Media Mining _indian edition available.pdf
PDF
Social Media Mining _indian edition available.pdf
PDF
javanotes5.pdf
PDF
probabilidades.pdf
PDF
Python_Programming_and_Numerical_Methods_A_Guide_for_Engineers_and.pdf
PDF
Ric walter (auth.) numerical methods and optimization a consumer guide-sprin...
PDF
MLBOOK.pdf
PDF
Gre cram plan
PDF
Stochastic Processes and Simulations – A Machine Learning Perspective
ResearchMethods_2015. Economic Research for all
A practical introduction_to_python_programming_heinold
A practical introduction_to_python_programming_heinold
A_Practical_Introduction_to_Python_Programming_Heinold.pdf
A_Practical_Introduction_to_Python_Programming_Heinold.pdf
A Practical Introduction To Python Programming
Manual Solution Probability and Statistic Hayter 4th Edition
thinkcspy3.pdf
Learn python the right way
2013McGinnissPhD
toaz.info-instructor-solution-manual-probability-and-statistics-for-engineers...
Social Media Mining _indian edition available.pdf
Social Media Mining _indian edition available.pdf
javanotes5.pdf
probabilidades.pdf
Python_Programming_and_Numerical_Methods_A_Guide_for_Engineers_and.pdf
Ric walter (auth.) numerical methods and optimization a consumer guide-sprin...
MLBOOK.pdf
Gre cram plan
Stochastic Processes and Simulations – A Machine Learning Perspective
Ad

Recently uploaded (20)

PDF
.pdf is not working space design for the following data for the following dat...
PPT
Reliability_Chapter_ presentation 1221.5784
PPTX
iec ppt-1 pptx icmr ppt on rehabilitation.pptx
PPT
Quality review (1)_presentation of this 21
PPTX
Database Infoormation System (DBIS).pptx
PPTX
Microsoft-Fabric-Unifying-Analytics-for-the-Modern-Enterprise Solution.pptx
PPTX
oil_refinery_comprehensive_20250804084928 (1).pptx
PDF
Lecture1 pattern recognition............
PDF
Business Analytics and business intelligence.pdf
PPTX
AI Strategy room jwfjksfksfjsjsjsjsjfsjfsj
PDF
Clinical guidelines as a resource for EBP(1).pdf
PPTX
Qualitative Qantitative and Mixed Methods.pptx
PPTX
Introduction to Firewall Analytics - Interfirewall and Transfirewall.pptx
PPTX
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
PDF
22.Patil - Early prediction of Alzheimer’s disease using convolutional neural...
PPTX
SAP 2 completion done . PRESENTATION.pptx
PPTX
Supervised vs unsupervised machine learning algorithms
PDF
[EN] Industrial Machine Downtime Prediction
PPTX
modul_python (1).pptx for professional and student
PDF
annual-report-2024-2025 original latest.
.pdf is not working space design for the following data for the following dat...
Reliability_Chapter_ presentation 1221.5784
iec ppt-1 pptx icmr ppt on rehabilitation.pptx
Quality review (1)_presentation of this 21
Database Infoormation System (DBIS).pptx
Microsoft-Fabric-Unifying-Analytics-for-the-Modern-Enterprise Solution.pptx
oil_refinery_comprehensive_20250804084928 (1).pptx
Lecture1 pattern recognition............
Business Analytics and business intelligence.pdf
AI Strategy room jwfjksfksfjsjsjsjsjfsjfsj
Clinical guidelines as a resource for EBP(1).pdf
Qualitative Qantitative and Mixed Methods.pptx
Introduction to Firewall Analytics - Interfirewall and Transfirewall.pptx
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
22.Patil - Early prediction of Alzheimer’s disease using convolutional neural...
SAP 2 completion done . PRESENTATION.pptx
Supervised vs unsupervised machine learning algorithms
[EN] Industrial Machine Downtime Prediction
modul_python (1).pptx for professional and student
annual-report-2024-2025 original latest.
Ad

statistics and Probability Analysis for beginners

  • 2. Probability & Statistics for Engineers & Scientists
  • 4. Probability & Statistics for Engineers & Scientists N I N T H E D I T I O N Ronald E. Walpole Roanoke College Raymond H. Myers Virginia Tech Sharon L. Myers Radford University Keying Ye University of Texas at San Antonio Prentice Hall
United States of America. For information on obtaining permission for use of material in this work, please submit a written request to Pearson Education, Inc., Rights and Contracts Department, 501 Boylston Street, Suite 900, Boston, MA 02116, fax your request to 617-671-3447, or e-mail at http://www.pearsoned.com/legal/permissions.htm.

1 2 3 4 5 6 7 8 9 10—EB—14 13 12 11 10

ISBN 10: 0-321-62911-6
ISBN 13: 978-0-321-62911-1
This book is dedicated to

Billy and Julie
R.H.M. and S.L.M.

Limin, Carolyn and Emily
K.Y.
Contents

Preface

1  Introduction to Statistics and Data Analysis
   1.1  Overview: Statistical Inference, Samples, Populations, and the Role of Probability
   1.2  Sampling Procedures; Collection of Data
   1.3  Measures of Location: The Sample Mean and Median
        Exercises
   1.4  Measures of Variability
        Exercises
   1.5  Discrete and Continuous Data
   1.6  Statistical Modeling, Scientific Inspection, and Graphical Diagnostics
   1.7  General Types of Statistical Studies: Designed Experiment, Observational Study, and Retrospective Study
        Exercises

2  Probability
   2.1  Sample Space
   2.2  Events
        Exercises
   2.3  Counting Sample Points
        Exercises
   2.4  Probability of an Event
   2.5  Additive Rules
        Exercises
   2.6  Conditional Probability, Independence, and the Product Rule
        Exercises
   2.7  Bayes’ Rule
        Exercises
        Review Exercises
   2.8  Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

3  Random Variables and Probability Distributions
   3.1  Concept of a Random Variable
   3.2  Discrete Probability Distributions
   3.3  Continuous Probability Distributions
        Exercises
   3.4  Joint Probability Distributions
        Exercises
        Review Exercises
   3.5  Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

4  Mathematical Expectation
   4.1  Mean of a Random Variable
        Exercises
   4.2  Variance and Covariance of Random Variables
        Exercises
   4.3  Means and Variances of Linear Combinations of Random Variables
   4.4  Chebyshev’s Theorem
        Exercises
        Review Exercises
   4.5  Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

5  Some Discrete Probability Distributions
   5.1  Introduction and Motivation
   5.2  Binomial and Multinomial Distributions
        Exercises
   5.3  Hypergeometric Distribution
        Exercises
   5.4  Negative Binomial and Geometric Distributions
   5.5  Poisson Distribution and the Poisson Process
        Exercises
        Review Exercises
   5.6  Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

6  Some Continuous Probability Distributions
   6.1  Continuous Uniform Distribution
   6.2  Normal Distribution
   6.3  Areas under the Normal Curve
   6.4  Applications of the Normal Distribution
        Exercises
   6.5  Normal Approximation to the Binomial
        Exercises
   6.6  Gamma and Exponential Distributions
   6.7  Chi-Squared Distribution
   6.8  Beta Distribution
   6.9  Lognormal Distribution
   6.10 Weibull Distribution (Optional)
        Exercises
        Review Exercises
   6.11 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

7  Functions of Random Variables (Optional)
   7.1  Introduction
   7.2  Transformations of Variables
   7.3  Moments and Moment-Generating Functions
        Exercises

8  Fundamental Sampling Distributions and Data Descriptions
   8.1  Random Sampling
   8.2  Some Important Statistics
        Exercises
   8.3  Sampling Distributions
   8.4  Sampling Distribution of Means and the Central Limit Theorem
        Exercises
   8.5  Sampling Distribution of S^2
   8.6  t-Distribution
   8.7  F-Distribution
   8.8  Quantile and Probability Plots
        Exercises
        Review Exercises
   8.9  Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

9  One- and Two-Sample Estimation Problems
   9.1  Introduction
   9.2  Statistical Inference
   9.3  Classical Methods of Estimation
   9.4  Single Sample: Estimating the Mean
   9.5  Standard Error of a Point Estimate
   9.6  Prediction Intervals
   9.7  Tolerance Limits
        Exercises
   9.8  Two Samples: Estimating the Difference between Two Means
   9.9  Paired Observations
        Exercises
   9.10 Single Sample: Estimating a Proportion
   9.11 Two Samples: Estimating the Difference between Two Proportions
        Exercises
   9.12 Single Sample: Estimating the Variance
   9.13 Two Samples: Estimating the Ratio of Two Variances
        Exercises
   9.14 Maximum Likelihood Estimation (Optional)
        Exercises
        Review Exercises
   9.15 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

10 One- and Two-Sample Tests of Hypotheses
   10.1  Statistical Hypotheses: General Concepts
   10.2  Testing a Statistical Hypothesis
   10.3  The Use of P-Values for Decision Making in Testing Hypotheses
         Exercises
   10.4  Single Sample: Tests Concerning a Single Mean
   10.5  Two Samples: Tests on Two Means
   10.6  Choice of Sample Size for Testing Means
   10.7  Graphical Methods for Comparing Means
         Exercises
   10.8  One Sample: Test on a Single Proportion
   10.9  Two Samples: Tests on Two Proportions
         Exercises
   10.10 One- and Two-Sample Tests Concerning Variances
         Exercises
   10.11 Goodness-of-Fit Test
   10.12 Test for Independence (Categorical Data)
   10.13 Test for Homogeneity
   10.14 Two-Sample Case Study
         Exercises
         Review Exercises
   10.15 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

11 Simple Linear Regression and Correlation
   11.1  Introduction to Linear Regression
   11.2  The Simple Linear Regression Model
   11.3  Least Squares and the Fitted Model
         Exercises
   11.4  Properties of the Least Squares Estimators
   11.5  Inferences Concerning the Regression Coefficients
   11.6  Prediction
         Exercises
   11.7  Choice of a Regression Model
   11.8  Analysis-of-Variance Approach
   11.9  Test for Linearity of Regression: Data with Repeated Observations
         Exercises
   11.10 Data Plots and Transformations
   11.11 Simple Linear Regression Case Study
   11.12 Correlation
         Exercises
         Review Exercises
   11.13 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

12 Multiple Linear Regression and Certain Nonlinear Regression Models
   12.1  Introduction
   12.2  Estimating the Coefficients
   12.3  Linear Regression Model Using Matrices
         Exercises
   12.4  Properties of the Least Squares Estimators
   12.5  Inferences in Multiple Linear Regression
         Exercises
   12.6  Choice of a Fitted Model through Hypothesis Testing
   12.7  Special Case of Orthogonality (Optional)
         Exercises
   12.8  Categorical or Indicator Variables
         Exercises
   12.9  Sequential Methods for Model Selection
   12.10 Study of Residuals and Violation of Assumptions (Model Checking)
   12.11 Cross Validation, Cp, and Other Criteria for Model Selection
         Exercises
   12.12 Special Nonlinear Models for Nonideal Conditions
         Exercises
         Review Exercises
   12.13 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

13 One-Factor Experiments: General
   13.1  Analysis-of-Variance Technique
   13.2  The Strategy of Experimental Design
   13.3  One-Way Analysis of Variance: Completely Randomized Design (One-Way ANOVA)
   13.4  Tests for the Equality of Several Variances
         Exercises
   13.5  Single-Degree-of-Freedom Comparisons
   13.6  Multiple Comparisons
         Exercises
   13.7  Comparing a Set of Treatments in Blocks
   13.8  Randomized Complete Block Designs
   13.9  Graphical Methods and Model Checking
   13.10 Data Transformations in Analysis of Variance
         Exercises
   13.11 Random Effects Models
   13.12 Case Study
         Exercises
         Review Exercises
   13.13 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

14 Factorial Experiments (Two or More Factors)
   14.1 Introduction
   14.2 Interaction in the Two-Factor Experiment
   14.3 Two-Factor Analysis of Variance
        Exercises
   14.4 Three-Factor Experiments
        Exercises
   14.5 Factorial Experiments for Random Effects and Mixed Models
        Exercises
        Review Exercises
   14.6 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

15 2^k Factorial Experiments and Fractions
   15.1  Introduction
   15.2  The 2^k Factorial: Calculation of Effects and Analysis of Variance
   15.3  Nonreplicated 2^k Factorial Experiment
         Exercises
   15.4  Factorial Experiments in a Regression Setting
   15.5  The Orthogonal Design
         Exercises
   15.6  Fractional Factorial Experiments
   15.7  Analysis of Fractional Factorial Experiments
         Exercises
   15.8  Higher Fractions and Screening Designs
   15.9  Construction of Resolution III and IV Designs with 8, 16, and 32 Design Points
   15.10 Other Two-Level Resolution III Designs; The Plackett-Burman Designs
   15.11 Introduction to Response Surface Methodology
   15.12 Robust Parameter Design
         Exercises
         Review Exercises
   15.13 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

16 Nonparametric Statistics
   16.1 Nonparametric Tests
   16.2 Signed-Rank Test
        Exercises
   16.3 Wilcoxon Rank-Sum Test
   16.4 Kruskal-Wallis Test
        Exercises
   16.5 Runs Test
   16.6 Tolerance Limits
   16.7 Rank Correlation Coefficient
        Exercises
        Review Exercises

17 Statistical Quality Control
   17.1 Introduction
   17.2 Nature of the Control Limits
   17.3 Purposes of the Control Chart
   17.4 Control Charts for Variables
   17.5 Control Charts for Attributes
   17.6 Cusum Control Charts
        Review Exercises

18 Bayesian Statistics
   18.1 Bayesian Concepts
   18.2 Bayesian Inferences
   18.3 Bayes Estimates Using Decision Theory Framework
        Exercises

Bibliography

Appendix A: Statistical Tables and Proofs

Appendix B: Answers to Odd-Numbered Non-Review Exercises

Index
Preface

General Approach and Mathematical Level

Our emphasis in creating the ninth edition is less on adding new material and more on providing clarity and deeper understanding. This objective was accomplished in part by including new end-of-chapter material that adds connective tissue between chapters. We affectionately call these comments at the end of the chapter “Pot Holes.” They are very useful to remind students of the big picture and how each chapter fits into that picture, and they aid the student in learning about limitations and pitfalls that may result if procedures are misused.

A deeper understanding of real-world use of statistics is made available through class projects, which were added in several chapters. These projects provide the opportunity for students alone, or in groups, to gather their own experimental data and draw inferences. In some cases, the work involves a problem whose solution will illustrate the meaning of a concept or provide an empirical understanding of an important statistical result. Some existing examples were expanded and new ones were introduced to create “case studies,” in which commentary is provided to give the student a clear understanding of a statistical concept in the context of a practical situation.

In this edition, we continue to emphasize a balance between theory and applications. Calculus and other types of mathematical support (e.g., linear algebra) are used at about the same level as in previous editions. The coverage of analytical tools in statistics is enhanced with the use of calculus when discussion centers on rules and concepts in probability. Probability distributions and statistical inference are highlighted in Chapters 2 through 10. Linear algebra and matrices are very lightly applied in Chapters 11 through 15, where linear regression and analysis of variance are covered. Students using this text should have had the equivalent of one semester of differential and integral calculus.
Linear algebra is helpful but not necessary so long as the section in Chapter 12 on multiple linear regression using matrix algebra is not covered by the instructor. As in previous editions, a large number of exercises that deal with real-life scientific and engineering applications are available to challenge the student. The many data sets associated with the exercises are available for download from the website http://www.pearsonhighered.com/datasets.
Summary of the Changes in the Ninth Edition

• Class projects were added in several chapters to provide a deeper understanding of the real-world use of statistics. Students are asked to produce or gather their own experimental data and draw inferences from these data.
• More case studies were added and others expanded to help students understand the statistical methods being presented in the context of a real-life situation. For example, the interpretation of confidence limits, prediction limits, and tolerance limits is given using a real-life situation.
• “Pot Holes” were added at the end of some chapters and expanded in others. These comments are intended to present each chapter in the context of the big picture and discuss how the chapters relate to one another. They also provide cautions about the possible misuse of statistical techniques presented in the chapter.
• Chapter 1 has been enhanced to include more on single-number statistics as well as graphical techniques. New fundamental material on sampling and experimental design is presented.
• Examples added to Chapter 8 on sampling distributions are intended to motivate P-values and hypothesis testing. This prepares the student for the more challenging material on these topics that will be presented in Chapter 10.
• Chapter 12 contains additional development regarding the effect of a single regression variable in a model in which collinearity with other variables is severe.
• Chapter 15 now introduces material on the important topic of response surface methodology (RSM). The use of noise variables in RSM allows the illustration of mean and variance (dual response surface) modeling.
• The central composite design (CCD) is introduced in Chapter 15.
• More examples are given in Chapter 18, and the discussion of using Bayesian methods for statistical decision making has been enhanced.

Content and Course Planning

This text is designed for either a one- or a two-semester course.
A reasonable plan for a one-semester course might include Chapters 1 through 10. This would result in a curriculum that concluded with the fundamentals of both estimation and hypothesis testing. Instructors who desire that students be exposed to simple linear regression may wish to include a portion of Chapter 11. For instructors who desire to have analysis of variance included rather than regression, the one-semester course may include Chapter 13 rather than Chapters 11 and 12. Chapter 13 features one-factor analysis of variance. Another option is to eliminate portions of Chapters 5 and/or 6 as well as Chapter 7. With this option, one or more of the discrete or continuous distributions in Chapters 5 and 6 may be eliminated. These distributions include the negative binomial, geometric, gamma, Weibull, beta, and lognormal distributions. Other features that one might consider removing from a one-semester curriculum include maximum likelihood estimation,
prediction, and/or tolerance limits in Chapter 9. A one-semester curriculum has built-in flexibility, depending on the relative interest of the instructor in regression, analysis of variance, experimental design, and response surface methods (Chapter 15). There are several discrete and continuous distributions (Chapters 5 and 6) that have applications in a variety of engineering and scientific areas.

Chapters 11 through 18 contain substantial material that can be added for the second semester of a two-semester course. The material on simple and multiple linear regression is in Chapters 11 and 12, respectively. Chapter 12 alone offers a substantial amount of flexibility. Multiple linear regression includes such “special topics” as categorical or indicator variables, sequential methods of model selection such as stepwise regression, the study of residuals for the detection of violations of assumptions, cross validation and the use of the PRESS statistic as well as Cp, and logistic regression. The use of orthogonal regressors, a precursor to the experimental design in Chapter 15, is highlighted. Chapters 13 and 14 offer a relatively large amount of material on analysis of variance (ANOVA) with fixed, random, and mixed models. Chapter 15 highlights the application of two-level designs in the context of full and fractional factorial experiments (2^k). Special screening designs are illustrated. Chapter 15 also features a new section on response surface methodology (RSM) to illustrate the use of experimental design for finding optimal process conditions. The fitting of a second-order model through the use of a central composite design is discussed. RSM is expanded to cover the analysis of robust parameter design type problems. Noise variables are used to accommodate dual response surface models. Chapters 16, 17, and 18 contain a moderate amount of material on nonparametric statistics, quality control, and Bayesian inference.
Chapter 1 is an overview of statistical inference presented on a mathematically simple level. It has been expanded from the eighth edition to more thoroughly cover single-number statistics and graphical techniques. It is designed to give students a preliminary presentation of elementary concepts that will allow them to understand more involved details that follow. Elementary concepts in sampling, data collection, and experimental design are presented, and rudimentary aspects of graphical tools are introduced, as well as a sense of what is garnered from a data set. Stem-and-leaf plots and box-and-whisker plots have been added. Graphs are better organized and labeled. The discussion of uncertainty and variation in a system is thorough and well illustrated. There are examples of how to sort out the important characteristics of a scientific process or system, and these ideas are illustrated in practical settings such as manufacturing processes, biomedical studies, and studies of biological and other scientific systems. A contrast is made between the use of discrete and continuous data. Emphasis is placed on the use of models and the information concerning statistical models that can be obtained from graphical tools.

Chapters 2, 3, and 4 deal with basic probability as well as discrete and continuous random variables. Chapters 5 and 6 focus on specific discrete and continuous distributions as well as relationships among them. These chapters also highlight examples of applications of the distributions in real-life scientific and engineering studies. Examples, case studies, and a large number of exercises edify the student concerning the use of these distributions. Projects bring the practical use of these distributions to life through group work. Chapter 7 is the most theoretical chapter
in the text. It deals with transformation of random variables and will likely not be used unless the instructor wishes to teach a relatively theoretical course. Chapter 8 contains graphical material, expanding on the more elementary set of graphical tools presented and illustrated in Chapter 1. Probability plotting is discussed and illustrated with examples. The very important concept of sampling distributions is presented thoroughly, and illustrations are given that involve the central limit theorem and the distribution of a sample variance under normal, independent (i.i.d.) sampling. The t and F distributions are introduced to motivate their use in chapters to follow. New material in Chapter 8 helps the student to visualize the importance of hypothesis testing, motivating the concept of a P-value.

Chapter 9 contains material on one- and two-sample point and interval estimation. A thorough discussion with examples points out the contrast between the different types of intervals: confidence intervals, prediction intervals, and tolerance intervals. A case study illustrates the three types of statistical intervals in the context of a manufacturing situation. This case study highlights the differences among the intervals, their sources, and the assumptions made in their development, as well as what type of scientific study or question requires the use of each one. A new approximation method has been added for the inference concerning a proportion. Chapter 10 begins with a basic presentation on the pragmatic meaning of hypothesis testing, with emphasis on such fundamental concepts as null and alternative hypotheses, the role of probability and the P-value, and the power of a test. Following this, illustrations are given of tests concerning one and two samples under standard conditions. The two-sample t-test with paired observations is also described.
A case study helps the student to develop a clear picture of what interaction among factors really means as well as the dangers that can arise when interaction between treatments and experimental units exists. At the end of Chapter 10 is a very important section that relates Chapters 9 and 10 (estimation and hypothesis testing) to Chapters 11 through 16, where statistical modeling is prominent. It is important that the student be aware of the strong connection.

Chapters 11 and 12 contain material on simple and multiple linear regression, respectively. Considerably more attention is given in this edition to the effect that collinearity among the regression variables plays. A situation is presented that shows how the role of a single regression variable can depend in large part on what regressors are in the model with it. The sequential model selection procedures (forward, backward, stepwise, etc.) are then revisited in regard to this concept, and the rationale for using certain P-values with these procedures is provided. Chapter 12 offers material on nonlinear modeling with a special presentation of logistic regression, which has applications in engineering and the biological sciences. The material on multiple regression is quite extensive and thus provides considerable flexibility for the instructor, as indicated earlier. At the end of Chapter 12 is commentary relating that chapter to Chapters 14 and 15. Several features were added that provide a better understanding of the material in general. For example, the end-of-chapter material deals with cautions and difficulties one might encounter. It is pointed out that there are types of responses that occur naturally in practice (e.g., proportion responses, count responses, and several others) with which standard least squares regression should not be used because standard assumptions do not hold and violation of assumptions may induce serious errors. The suggestion is
made that data transformation on the response may alleviate the problem in some cases.

Flexibility is again available in Chapters 13 and 14, on the topic of analysis of variance. Chapter 13 covers one-factor ANOVA in the context of a completely randomized design. Complementary topics include tests on variances and multiple comparisons. Comparisons of treatments in blocks are highlighted, along with the topic of randomized complete blocks. Graphical methods are extended to ANOVA to aid the student in supplementing the formal inference with a pictorial type of inference that can aid scientists and engineers in presenting material. A new project is given in which students incorporate the appropriate randomization into each plan and use graphical techniques and P-values in reporting the results. Chapter 14 extends the material in Chapter 13 to accommodate two or more factors that are in a factorial structure. The ANOVA presentation in Chapter 14 includes work in both random and fixed effects models. Chapter 15 offers material associated with 2^k factorial designs; examples and case studies present the use of screening designs and special higher fractions of the 2^k. Two new and special features are the presentations of response surface methodology (RSM) and robust parameter design. These topics are linked in a case study that describes and illustrates a dual response surface design and analysis featuring the use of process mean and variance response surfaces.

Computer Software

Case studies, beginning in Chapter 8, feature computer printout and graphical material generated using both SAS and MINITAB. The inclusion of the computer reflects our belief that students should have the experience of reading and interpreting computer printout and graphics, even if the software in the text is not that which is used by the instructor. Exposure to more than one type of software can broaden the experience base for the student.
There is no reason to believe that the software used in the course will be that which the student will be called upon to use in practice following graduation. Examples and case studies in the text are supplemented, where appropriate, by various types of residual plots, quantile plots, normal probability plots, and other plots. Such plots are particularly prevalent in Chapters 11 through 15.

Supplements

Instructor’s Solutions Manual. This resource contains worked-out solutions to all text exercises and is available for download from Pearson Education’s Instructor Resource Center.

Student Solutions Manual. ISBN-10: 0-321-64013-6; ISBN-13: 978-0-321-64013-0. Featuring complete solutions to selected exercises, this is a great tool for students as they study and work through the problem material.

PowerPoint® Lecture Slides. ISBN-10: 0-321-73731-8; ISBN-13: 978-0-321-73731-1. These slides include most of the figures and tables from the text. Slides are available to download from Pearson Education’s Instructor Resource Center.
StatCrunch eText. This interactive, online textbook includes StatCrunch, a powerful, web-based statistical software. Embedded StatCrunch buttons allow users to open all data sets and tables from the book with the click of a button and immediately perform an analysis using StatCrunch.

StatCrunch™. StatCrunch is web-based statistical software that allows users to perform complex analyses, share data sets, and generate compelling reports of their data. Users can upload their own data to StatCrunch or search the library of over twelve thousand publicly shared data sets, covering almost any topic of interest. Interactive graphical outputs help users understand statistical concepts and are available for export to enrich reports with visual representations of data. Additional features include

• A full range of numerical and graphical methods that allow users to analyze and gain insights from any data set.

• Reporting options that help users create a wide variety of visually appealing representations of their data.

• An online survey tool that allows users to quickly build and administer surveys via a web form.

StatCrunch is available to qualified adopters. For more information, visit our website at www.statcrunch.com or contact your Pearson representative.

Acknowledgments

We are indebted to those colleagues who reviewed the previous editions of this book and provided many helpful suggestions for this edition. They are David Groggel, Miami University; Lance Hemlow, Raritan Valley Community College; Ying Ji, University of Texas at San Antonio; Thomas Kline, University of Northern Iowa; Sheila Lawrence, Rutgers University; Luis Moreno, Broome County Community College; Donald Waldman, University of Colorado–Boulder; and Marlene Will, Spalding University. We would also like to thank Delray Schulz, Millersville University; Roxane Burrows, Hocking College; and Frank Chmely for ensuring the accuracy of this text.
We would like to thank the editorial and production services provided by numerous people from Pearson/Prentice Hall, especially the editor in chief Deirdre Lynch, acquisitions editor Christopher Cummings, executive content editor Christine O’Brien, production editor Tracy Patruno, and copyeditor Sally Lifland. Many useful comments and suggestions by proofreader Gail Magin are greatly appreciated. We thank the Virginia Tech Statistical Consulting Center, which was the source of many real-life data sets.

R.H.M.
S.L.M.
K.Y.
Chapter 1

Introduction to Statistics and Data Analysis

1.1 Overview: Statistical Inference, Samples, Populations, and the Role of Probability

Beginning in the 1980s and continuing into the 21st century, an inordinate amount of attention has been focused on improvement of quality in American industry. Much has been said and written about the Japanese “industrial miracle,” which began in the middle of the 20th century. The Japanese were able to succeed where we and other countries had failed, namely, to create an atmosphere that allows the production of high-quality products. Much of the success of the Japanese has been attributed to the use of statistical methods and statistical thinking among management personnel.

Use of Scientific Data

The use of statistical methods in manufacturing, development of food products, computer software, energy sources, pharmaceuticals, and many other areas involves the gathering of information or scientific data. Of course, the gathering of data is nothing new. It has been done for well over a thousand years. Data have been collected, summarized, reported, and stored for perusal. However, there is a profound distinction between collection of scientific information and inferential statistics. It is the latter that has received rightful attention in recent decades. The offspring of inferential statistics has been a large “toolbox” of statistical methods employed by statistical practitioners. These statistical methods are designed to contribute to the process of making scientific judgments in the face of uncertainty and variation. The product density of a particular material from a manufacturing process will not always be the same. Indeed, if the process involved is a batch process rather than continuous, there will be not only variation in material density among the batches that come off the line (batch-to-batch variation), but also within-batch variation.
Statistical methods are used to analyze data from a process such as this one in order to gain more sense of where in the process changes may be made to improve the quality of the process. In this process, quality may well be defined in relation to closeness to a target density value in harmony with what portion of the time this closeness criterion is met. An engineer may be concerned with a specific instrument that is used to measure sulfur monoxide in the air during pollution studies. If the engineer has doubts about the effectiveness of the instrument, there are two sources of variation that must be dealt with. The first is the variation in sulfur monoxide values that are found at the same locale on the same day. The second is the variation between values observed and the true amount of sulfur monoxide that is in the air at the time. If either of these two sources of variation is exceedingly large (according to some standard set by the engineer), the instrument may need to be replaced. In a biomedical study of a new drug that reduces hypertension, 85% of patients experienced relief, while it is generally recognized that the current drug, or “old” drug, brings relief to 80% of patients that have chronic hypertension. However, the new drug is more expensive to make and may result in certain side effects. Should the new drug be adopted? This is a problem that is encountered (often with much more complexity) frequently by pharmaceutical firms in conjunction with the FDA (Food and Drug Administration). Again, the consideration of variation needs to be taken into account. The “85%” value is based on a certain number of patients chosen for the study. Perhaps if the study were repeated with new patients the observed number of “successes” would be 75%! It is the natural variation from study to study that must be taken into account in the decision process. Clearly this variation is important, since variation from patient to patient is endemic to the problem.
Variability in Scientific Data

In the problems discussed above the statistical methods used involve dealing with variability, and in each case the variability to be studied is that encountered in scientific data. If the observed product density in the process were always the same and were always on target, there would be no need for statistical methods. If the device for measuring sulfur monoxide always gives the same value and the value is accurate (i.e., it is correct), no statistical analysis is needed. If there were no patient-to-patient variability inherent in the response to the drug (i.e., it either always brings relief or not), life would be simple for scientists in the pharmaceutical firms and FDA and no statistician would be needed in the decision process. Statistics researchers have produced an enormous number of analytical methods that allow for analysis of data from systems like those described above. This reflects the true nature of the science that we call inferential statistics, namely, using techniques that allow us to go beyond merely reporting data to drawing conclusions (or inferences) about the scientific system. Statisticians make use of fundamental laws of probability and statistical inference to draw conclusions about scientific systems. Information is gathered in the form of samples, or collections of observations. The process of sampling is introduced in Chapter 2, and the discussion continues throughout the entire book.

Samples are collected from populations, which are collections of all individuals or individual items of a particular type. At times a population signifies a scientific system. For example, a manufacturer of computer boards may wish to eliminate defects. A sampling process may involve collecting information on 50 computer boards sampled randomly from the process. Here, the population is all
computer boards manufactured by the firm over a specific period of time. If an improvement is made in the computer board process and a second sample of boards is collected, any conclusions drawn regarding the effectiveness of the change in process should extend to the entire population of computer boards produced under the “improved process.” In a drug experiment, a sample of patients is taken and each is given a specific drug to reduce blood pressure. The interest is focused on drawing conclusions about the population of those who suffer from hypertension.

Often, it is very important to collect scientific data in a systematic way, with planning being high on the agenda. At times the planning is, by necessity, quite limited. We often focus only on certain properties or characteristics of the items or objects in the population. Each characteristic has particular engineering or, say, biological importance to the “customer,” the scientist or engineer who seeks to learn about the population. For example, in one of the illustrations above the quality of the process had to do with the product density of the output of a process. An engineer may need to study the effect of process conditions, temperature, humidity, amount of a particular ingredient, and so on. He or she can systematically move these factors to whatever levels are suggested according to whatever prescription or experimental design is desired. However, a forest scientist who is interested in a study of factors that influence wood density in a certain kind of tree cannot necessarily design an experiment. This case may require an observational study in which data are collected in the field but factor levels cannot be preselected. Both of these types of studies lend themselves to methods of statistical inference. In the former, the quality of the inferences will depend on proper planning of the experiment.
In the latter, the scientist is at the mercy of what can be gathered. For example, it is sad if an agronomist is interested in studying the effect of rainfall on plant yield and the data are gathered during a drought.

The importance of statistical thinking by managers and the use of statistical inference by scientific personnel is widely acknowledged. Research scientists gain much from scientific data. Data provide understanding of scientific phenomena. Product and process engineers learn a great deal in their off-line efforts to improve the process. They also gain valuable insight by gathering production data (on-line monitoring) on a regular basis. This allows them to determine necessary modifications in order to keep the process at a desired level of quality.

There are times when a scientific practitioner wishes only to gain some sort of summary of a set of data represented in the sample. In other words, inferential statistics is not required. Rather, a set of single-number statistics or descriptive statistics is helpful. These numbers give a sense of the center of the location of the data, variability in the data, and the general nature of the distribution of observations in the sample. Though no specific statistical methods leading to statistical inference are incorporated, much can be learned. At times, descriptive statistics are accompanied by graphics. Modern statistical software packages allow for computation of means, medians, standard deviations, and other single-number statistics as well as production of graphs that show a “footprint” of the nature of the sample. Definitions and illustrations of the single-number statistics and graphs, including histograms, stem-and-leaf plots, scatter plots, dot plots, and box plots, will be given in sections that follow.
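As a small illustration of the single-number statistics just described, the sketch below computes a mean, median, and standard deviation with Python's standard statistics module. The density values here are invented for demonstration and are not a data set from the text.

```python
import statistics

# Hypothetical sample of 12 product-density measurements (illustrative values only)
densities = [2.31, 2.28, 2.35, 2.30, 2.29, 2.41, 2.27, 2.33, 2.30, 2.36, 2.25, 2.32]

mean = statistics.mean(densities)      # a sense of the center of the data
median = statistics.median(densities)  # a center measure resistant to extreme values
stdev = statistics.stdev(densities)    # sample standard deviation: variability

print(f"mean   = {mean:.4f}")
print(f"median = {median:.4f}")
print(f"stdev  = {stdev:.4f}")
```

Any statistical package (SAS, MINITAB, StatCrunch) produces the same summaries; the point is only that such "footprint" numbers require no inferential machinery.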
The Role of Probability

In this book, Chapters 2 to 6 deal with fundamental notions of probability. A thorough grounding in these concepts allows the reader to have a better understanding of statistical inference. Without some formalism of probability theory, the student cannot appreciate the true interpretation from data analysis through modern statistical methods. It is quite natural to study probability prior to studying statistical inference. Elements of probability allow us to quantify the strength or “confidence” in our conclusions. In this sense, concepts in probability form a major component that supplements statistical methods and helps us gauge the strength of the statistical inference. The discipline of probability, then, provides the transition between descriptive statistics and inferential methods. Elements of probability allow the conclusion to be put into the language that the science or engineering practitioners require. An example follows that will enable the reader to understand the notion of a P-value, which often provides the “bottom line” in the interpretation of results from the use of statistical methods.

Example 1.1: Suppose that an engineer encounters data from a manufacturing process in which 100 items are sampled and 10 are found to be defective. It is expected and anticipated that occasionally there will be defective items. Obviously these 100 items represent the sample. However, it has been determined that in the long run, the company can only tolerate 5% defective in the process. Now, the elements of probability allow the engineer to determine how conclusive the sample information is regarding the nature of the process. In this case, the population conceptually represents all possible items from the process.
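The tail probability the engineer needs here, P(X >= 10) when X follows a binomial distribution with n = 100 and p = 0.05, can be computed directly. A sketch (added for illustration, using only the Python standard library, and anticipating the binomial distribution treated formally in Chapter 5):

```python
from math import comb

# P(X >= 10) when X ~ Binomial(n = 100, p = 0.05):
# the chance of seeing 10 or more defectives in a sample of 100 items
# if the true long-run process defective rate is exactly 5%.
n, p = 100, 0.05
p_value = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(10, n + 1))
print(f"P(X >= 10) = {p_value:.4f}")
```

Running this reproduces the small probability quoted in the example, about 0.028.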
Suppose we learn that if the process is acceptable, that is, if it does produce items no more than 5% of which are defective, there is a probability of 0.0282 of obtaining 10 or more defective items in a random sample of 100 items from the process. This small probability suggests that the process does, indeed, have a long-run rate of defective items that exceeds 5%. In other words, under the condition of an acceptable process, the sample information obtained would rarely occur. However, it did occur! Clearly, though, it would occur with a much higher probability if the process defective rate exceeded 5% by a significant amount.

From this example it becomes clear that the elements of probability aid in the translation of sample information into something conclusive or inconclusive about the scientific system. In fact, what was learned likely is alarming information to the engineer or manager. Statistical methods, which we will actually detail in Chapter 10, produced a P-value of 0.0282. The result suggests that the process very likely is not acceptable. The concept of a P-value is dealt with at length in succeeding chapters. The example that follows provides a second illustration.

Example 1.2: Often the nature of the scientific study will dictate the role that probability and deductive reasoning play in statistical inference. Exercise 9.40 on page 294 provides data associated with a study conducted at the Virginia Polytechnic Institute and State University on the development of a relationship between the roots of trees and the action of a fungus. Minerals are transferred from the fungus to the trees and sugars from the trees to the fungus. Two samples of 10 northern red oak seedlings were planted in a greenhouse, one containing seedlings treated with nitrogen and
the other containing seedlings with no nitrogen. All other environmental conditions were held constant. All seedlings contained the fungus Pisolithus tinctorus. More details are supplied in Chapter 9. The stem weights in grams were recorded after the end of 140 days. The data are given in Table 1.1.

Table 1.1: Data Set for Example 1.2

No Nitrogen    Nitrogen
   0.32          0.26
   0.53          0.43
   0.28          0.47
   0.37          0.49
   0.47          0.52
   0.43          0.75
   0.36          0.79
   0.42          0.86
   0.38          0.62
   0.43          0.46

Figure 1.1: A dot plot of stem weight data.

In this example there are two samples from two separate populations. The purpose of the experiment is to determine if the use of nitrogen has an influence on the growth of the roots. The study is a comparative study (i.e., we seek to compare the two populations with regard to a certain important characteristic). It is instructive to plot the data as shown in the dot plot of Figure 1.1. The ◦ values represent the “nitrogen” data and the × values represent the “no-nitrogen” data. Notice that the general appearance of the data might suggest to the reader that, on average, the use of nitrogen increases the stem weight. Four nitrogen observations are considerably larger than any of the no-nitrogen observations. Most of the no-nitrogen observations appear to be below the center of the data. The appearance of the data set would seem to indicate that nitrogen is effective. But how can this be quantified? How can all of the apparent visual evidence be summarized in some sense? As in the preceding example, the fundamentals of probability can be used. The conclusions may be summarized in a probability statement or P-value. We will not show here the statistical inference that produces the summary probability. As in Example 1.1, these methods will be discussed in Chapter 10.
The issue revolves around the “probability that data like these could be observed” given that nitrogen has no effect, in other words, given that both samples were generated from the same population. Suppose that this probability is small, say 0.03. That would certainly be strong evidence that the use of nitrogen does indeed influence (apparently increases) average stem weight of the red oak seedlings.
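One simple way to approximate the "probability that data like these could be observed" under the no-effect assumption is a randomization (permutation) test on the Table 1.1 data: if nitrogen has no effect, the 20 group labels are arbitrary, so we can reshuffle them and see how often a mean difference as large as the observed one arises by chance. This sketch is added here for illustration; it is not the specific inference method the text develops in Chapter 10.

```python
import random
from statistics import mean

# Stem weights (grams) from Table 1.1
no_nitrogen = [0.32, 0.53, 0.28, 0.37, 0.47, 0.43, 0.36, 0.42, 0.38, 0.43]
nitrogen    = [0.26, 0.43, 0.47, 0.49, 0.52, 0.75, 0.79, 0.86, 0.62, 0.46]

observed = mean(nitrogen) - mean(no_nitrogen)  # observed difference in mean stem weight

# Under "no effect," shuffle the pooled data into two arbitrary groups of 10
# and record how often the shuffled difference reaches the observed one.
random.seed(1)
pooled = no_nitrogen + nitrogen
trials = 10_000
count = sum(
    1
    for _ in range(trials)
    if (random.shuffle(pooled) or mean(pooled[10:]) - mean(pooled[:10])) >= observed
)
p_value = count / trials
print(f"observed difference = {observed:.3f}, approximate P-value = {p_value:.4f}")
```

The approximate P-value comes out small, consistent with the text's hypothetical value of about 0.03: data like these would rarely occur if both samples came from the same population.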
How Do Probability and Statistical Inference Work Together?

It is important for the reader to understand the clear distinction between the discipline of probability, a science in its own right, and the discipline of inferential statistics. As we have already indicated, the use or application of concepts in probability allows real-life interpretation of the results of statistical inference. As a result, it can be said that statistical inference makes use of concepts in probability. One can glean from the two examples above that the sample information is made available to the analyst and, with the aid of statistical methods and elements of probability, conclusions are drawn about some feature of the population (the process does not appear to be acceptable in Example 1.1, and nitrogen does appear to influence average stem weights in Example 1.2). Thus for a statistical problem, the sample along with inferential statistics allows us to draw conclusions about the population, with inferential statistics making clear use of elements of probability. This reasoning is inductive in nature. Now as we move into Chapter 2 and beyond, the reader will note that, unlike what we do in our two examples here, we will not focus on solving statistical problems. Many examples will be given in which no sample is involved. There will be a population clearly described with all features of the population known. Then questions of importance will focus on the nature of data that might hypothetically be drawn from the population. Thus, one can say that elements in probability allow us to draw conclusions about characteristics of hypothetical data taken from the population, based on known features of the population. This type of reasoning is deductive in nature. Figure 1.2 shows the fundamental relationship between probability and inferential statistics.
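The deductive direction just described, reasoning from a fully known population to the behavior of hypothetical samples, can be made concrete by simulation. In this sketch (added for illustration, not from the text) the population is completely specified by a 5% defective rate, and we ask what defect counts hypothetical samples of 100 items would exhibit.

```python
import random

random.seed(0)
p_defective = 0.05   # the population is fully known: exactly 5% defective
n, trials = 100, 20_000

# Draw many hypothetical samples of 100 items; record each sample's defect count.
counts = [sum(random.random() < p_defective for _ in range(n)) for _ in range(trials)]

typical = sum(counts) / trials                       # should be near n * p = 5
frac_ge_10 = sum(c >= 10 for c in counts) / trials   # near the exact value 0.0282
print(f"average defect count over samples      = {typical:.2f}")
print(f"fraction of samples with >= 10 defects = {frac_ge_10:.4f}")
```

No data are analyzed here; known features of the population determine what samples will look like. Inferential statistics runs this logic in reverse.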
Figure 1.2: Fundamental relationship between probability and inferential statistics (probability reasons from the population to the sample; statistical inference reasons from the sample to the population).

Now, in the grand scheme of things, which is more important, the field of probability or the field of statistics? They are both very important and clearly are complementary. The only certainty concerning the pedagogy of the two disciplines lies in the fact that if statistics is to be taught at more than merely a “cookbook” level, then the discipline of probability must be taught first. This rule stems from the fact that nothing can be learned about a population from a sample until the analyst learns the rudiments of uncertainty in that sample. For example, consider Example 1.1. The question centers around whether or not the population, defined by the process, is no more than 5% defective. In other words, the conjecture is that on the average 5 out of 100 items are defective. Now, the sample contains 100 items and 10 are defective. Does this support the conjecture or refute it? On the
surface it would appear to be a refutation of the conjecture because 10 out of 100 seem to be “a bit much.” But without elements of probability, how do we know? Only through the study of material in future chapters will we learn the conditions under which the process is acceptable (5% defective). The probability of obtaining 10 or more defective items in a sample of 100 is 0.0282. We have given two examples where the elements of probability provide a summary that the scientist or engineer can use as evidence on which to build a decision. The bridge between the data and the conclusion is, of course, based on foundations of statistical inference, distribution theory, and sampling distributions discussed in future chapters.

1.2 Sampling Procedures; Collection of Data

In Section 1.1 we discussed very briefly the notion of sampling and the sampling process. While sampling appears to be a simple concept, the complexity of the questions that must be answered about the population or populations necessitates that the sampling process be very complex at times. While the notion of sampling is discussed in a technical way in Chapter 8, we shall endeavor here to give some common-sense notions of sampling. This is a natural transition to a discussion of the concept of variability.

Simple Random Sampling

The importance of proper sampling revolves around the degree of confidence with which the analyst is able to answer the questions being asked. Let us assume that only a single population exists in the problem. Recall that in Example 1.2 two populations were involved. Simple random sampling implies that any particular sample of a specified sample size has the same chance of being selected as any other sample of the same size. The term sample size simply means the number of elements in the sample. Obviously, a table of random numbers can be utilized in sample selection in many instances.
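The 0.0282 figure quoted above can be checked directly: under the conjecture that the process is 5% defective, the number of defectives in a random sample of 100 follows a binomial distribution (developed formally in later chapters). The following sketch, which is ours and not part of the text, sums the exact binomial tail; the function name is arbitrary.

```python
from math import comb

def binomial_tail(n, p, k):
    """P(X >= k) when X counts successes in n independent trials,
    each with success probability p (the binomial distribution)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Probability of 10 or more defectives in a sample of 100
# drawn from a process that is truly 5% defective.
p_tail = binomial_tail(100, 0.05, 10)
print(round(p_tail, 4))   # 0.0282
```

Rounded to four places, this reproduces the 0.0282 quoted above, a probability small enough to cast doubt on the 5% conjecture.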
The virtue of simple random sampling is that it aids in the elimination of the problem of having the sample reflect a different (possibly more confined) population than the one about which inferences need to be made. For example, a sample is to be chosen to answer certain questions regarding political preferences in a certain state in the United States. The sample involves the choice of, say, 1000 families, and a survey is to be conducted. Now, suppose it turns out that random sampling is not used. Rather, all or nearly all of the 1000 families chosen live in an urban setting. It is believed that political preferences in rural areas differ from those in urban areas. In other words, the sample drawn actually confined the population and thus the inferences need to be confined to the “limited population,” and in this case confining may be undesirable. If, indeed, the inferences need to be made about the state as a whole, the sample of size 1000 described here is often referred to as a biased sample. As we hinted earlier, simple random sampling is not always appropriate. Which alternative approach is used depends on the complexity of the problem. Often, for example, the sampling units are not homogeneous and naturally divide themselves into nonoverlapping groups that are homogeneous. These groups are called strata,
and a procedure called stratified random sampling involves random selection of a sample within each stratum. The purpose is to be sure that each of the strata is neither over- nor underrepresented. For example, suppose a sample survey is conducted in order to gather preliminary opinions regarding a bond referendum that is being considered in a certain city. The city is subdivided into several ethnic groups which represent natural strata. In order not to disregard or overrepresent any group, separate random samples of families could be chosen from each group.

Experimental Design

The concept of randomness or random assignment plays a huge role in the area of experimental design, which was introduced very briefly in Section 1.1 and is an important staple in almost any area of engineering or experimental science. This will be discussed at length in Chapters 13 through 15. However, it is instructive to give a brief presentation here in the context of random sampling. A set of so-called treatments or treatment combinations becomes the populations to be studied or compared in some sense. An example is the nitrogen versus no-nitrogen treatments in Example 1.2. Another simple example would be “placebo” versus “active drug,” or in a corrosion fatigue study we might have treatment combinations that involve specimens that are coated or uncoated as well as conditions of low or high humidity to which the specimens are exposed. In fact, there are four treatment or factor combinations (i.e., 4 populations), and many scientific questions may be asked and answered through statistical and inferential methods. Consider first the situation in Example 1.2. There are 20 diseased seedlings involved in the experiment. It is easy to see from the data themselves that the seedlings are different from each other. Within the nitrogen group (or the no-nitrogen group) there is considerable variability in the stem weights.
This variability is due to what is generally called the experimental unit. This is a very important concept in inferential statistics, in fact one whose description will not end in this chapter. The nature of the variability is very important. If it is too large, stemming from a condition of excessive nonhomogeneity in experimental units, the variability will “wash out” any detectable difference between the two populations. Recall that in this case that did not occur. The dot plot in Figure 1.1 and the P-value indicated a clear distinction between these two conditions. What role do those experimental units play in the data-taking process itself? The common-sense and, indeed, quite standard approach is to assign the 20 seedlings or experimental units randomly to the two treatments or conditions. In the drug study, we may decide to use a total of 200 available patients, patients that clearly will be different in some sense. They are the experimental units. However, they all may have the same chronic condition for which the drug is a potential treatment. Then in a so-called completely randomized design, 100 patients are assigned randomly to the placebo and 100 to the active drug. Again, it is these experimental units within a group or treatment that produce the variability in data results (i.e., variability in the measured result), say blood pressure, or whatever drug efficacy value is important. In the corrosion fatigue study, the experimental units are the specimens that are the subjects of the corrosion.
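The completely randomized assignment described above is simple to carry out in software. The sketch below is illustrative and not from the text; the function name and seed are our own. It shuffles 20 labeled seedlings and splits them into two treatment groups of 10.

```python
import random

def assign_completely_randomized(units, n_groups=2, seed=None):
    """Shuffle the experimental units, then split them into equal-sized treatment groups."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    size = len(shuffled) // n_groups
    return [shuffled[i * size:(i + 1) * size] for i in range(n_groups)]

seedlings = list(range(1, 21))          # 20 seedlings, labeled 1 through 20
nitrogen, no_nitrogen = assign_completely_randomized(seedlings, seed=7)
print(len(nitrogen), len(no_nitrogen))  # 10 10
```

The same call with `n_groups=2` and 200 patients would produce the placebo/active-drug split of the drug study.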
Why Assign Experimental Units Randomly?

What is the possible negative impact of not randomly assigning experimental units to the treatments or treatment combinations? This is seen most clearly in the case of the drug study. Among the characteristics of the patients that produce variability in the results are age, gender, and weight. Suppose merely by chance the placebo group contains a sample of people that are predominantly heavier than those in the treatment group. Perhaps heavier individuals have a tendency to have a higher blood pressure. This clearly biases the result, and indeed, any result obtained through the application of statistical inference may have little to do with the drug and more to do with differences in weights between the two samples of patients. We should emphasize the importance of the term variability. Excessive variability among experimental units “camouflages” scientific findings. In future sections, we attempt to characterize and quantify measures of variability. In sections that follow, we introduce and discuss specific quantities that can be computed in samples; the quantities give a sense of the nature of the sample with respect to the center of location of the data and the variability in the data. A discussion of several of these single-number measures serves to provide a preview of what statistical information will be important components of the statistical methods that are used in future chapters. These measures that help characterize the nature of the data set fall into the category of descriptive statistics. This material is a prelude to a brief presentation of pictorial and graphical methods that go even further in characterization of the data set. The reader should understand that the statistical methods illustrated here will be used throughout the text.
In order to offer the reader a clearer picture of what is involved in experimental design studies, we offer Example 1.3.

Example 1.3: A corrosion study was made in order to determine whether coating an aluminum metal with a corrosion retardation substance reduced the amount of corrosion. The coating is a protectant that is advertised to minimize fatigue damage in this type of material. Also of interest is the influence of humidity on the amount of corrosion. A corrosion measurement can be expressed in thousands of cycles to failure. Two levels of coating, no coating and chemical corrosion coating, were used. In addition, the two relative humidity levels are 20% relative humidity and 80% relative humidity. The experiment involves the four treatment combinations listed in the table that follows. Eight experimental units, prepared aluminum specimens, were used; two were assigned randomly to each of the four treatment combinations. The data are presented in Table 1.2. The corrosion data are averages of two specimens. A plot of the averages is pictured in Figure 1.3. A relatively large value of cycles to failure represents a small amount of corrosion. As one might expect, an increase in humidity appears to make the corrosion worse. The use of the chemical corrosion coating procedure appears to reduce corrosion. In this experimental design illustration, the engineer has systematically selected the four treatment combinations. In order to connect this situation to concepts to which the reader has been exposed to this point, it should be assumed that the
Table 1.2: Data for Example 1.3

Coating               Humidity   Average Corrosion in Thousands of Cycles to Failure
Uncoated              20%        975
Uncoated              80%        350
Chemical Corrosion    20%        1750
Chemical Corrosion    80%        1550

Figure 1.3: Corrosion results for Example 1.3 (average corrosion plotted against humidity for the uncoated and chemically coated specimens).

conditions representing the four treatment combinations are four separate populations and that the two corrosion values observed for each population are important pieces of information. The importance of the average in capturing and summarizing certain features in the population will be highlighted in Section 1.3. While we might draw conclusions about the role of humidity and the impact of coating the specimens from the figure, we cannot truly evaluate the results from an analytical point of view without taking into account the variability around the average. Again, as we indicated earlier, if the two corrosion values for each treatment combination are close together, the picture in Figure 1.3 may be an accurate depiction. But if each corrosion value in the figure is an average of two values that are widely dispersed, then this variability may, indeed, truly “wash away” any information that appears to come through when one observes averages only. The foregoing example illustrates these concepts:

(1) random assignment of treatment combinations (coating, humidity) to experimental units (specimens)
(2) the use of sample averages (average corrosion values) in summarizing sample information
(3) the need for consideration of measures of variability in the analysis of any sample or sets of samples
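Reading the Table 1.2 averages into a small script makes the two effects visible numerically. This sketch is ours, not the text's; it computes the coating's benefit, in extra cycles to failure, at each humidity level.

```python
# Table 1.2: average corrosion in thousands of cycles to failure
# (a larger value means less corrosion).
cycles = {
    ("Uncoated", "20%"): 975,
    ("Uncoated", "80%"): 350,
    ("Chemical Corrosion", "20%"): 1750,
    ("Chemical Corrosion", "80%"): 1550,
}

# Benefit of the coating (extra thousands of cycles) at each humidity level.
benefit = {
    h: cycles[("Chemical Corrosion", h)] - cycles[("Uncoated", h)]
    for h in ("20%", "80%")
}
print(benefit)   # {'20%': 775, '80%': 1200}
```

Note that the coating's benefit is larger at 80% humidity than at 20%, a hint that coating and humidity may not act independently; designs that untangle such patterns are treated in Chapters 13 through 15.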
This example suggests the need for what follows in Sections 1.3 and 1.4, namely, descriptive statistics that indicate measures of center of location in a set of data, and those that measure variability.

1.3 Measures of Location: The Sample Mean and Median

Measures of location are designed to provide the analyst with some quantitative values of where the center, or some other location, of data is located. In Example 1.2, it appears as if the center of the nitrogen sample clearly exceeds that of the no-nitrogen sample. One obvious and very useful measure is the sample mean. The mean is simply a numerical average.

Definition 1.1: Suppose that the observations in a sample are x1, x2, . . . , xn. The sample mean, denoted by x̄, is

x̄ = (1/n) Σ_{i=1}^{n} xi = (x1 + x2 + · · · + xn)/n.

There are other measures of central tendency that are discussed in detail in future chapters. One important measure is the sample median. The purpose of the sample median is to reflect the central tendency of the sample in such a way that it is uninfluenced by extreme values or outliers.

Definition 1.2: Given that the observations in a sample are x1, x2, . . . , xn, arranged in increasing order of magnitude, the sample median is

x̃ = x_{(n+1)/2} if n is odd, and x̃ = (x_{n/2} + x_{n/2+1})/2 if n is even.

As an example, suppose the data set is the following: 1.7, 2.2, 3.9, 3.11, and 14.7. The sample mean and median are, respectively,

x̄ = 5.12, x̃ = 3.11,

since the sorted values are 1.7, 2.2, 3.11, 3.9, 14.7 and the middle observation is 3.11. Clearly, the mean is influenced considerably by the presence of the extreme observation, 14.7, whereas the median places emphasis on the true “center” of the data set. In the case of the two-sample data set of Example 1.2, the two measures of central tendency for the individual samples are

x̄ (no nitrogen) = 0.399 gram,
x̃ (no nitrogen) = (0.38 + 0.42)/2 = 0.400 gram,
x̄ (nitrogen) = 0.565 gram,
x̃ (nitrogen) = (0.49 + 0.52)/2 = 0.505 gram.
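The small five-observation example can be checked with Python's standard `statistics` module; the sketch below is ours and not part of the text.

```python
import statistics

data = [1.7, 2.2, 3.9, 3.11, 14.7]
xbar = statistics.mean(data)      # (1.7 + 2.2 + 3.9 + 3.11 + 14.7) / 5 = 5.122
xtilde = statistics.median(data)  # middle of the sorted values 1.7, 2.2, 3.11, 3.9, 14.7
print(round(xbar, 2), xtilde)     # 5.12 3.11
```

The extreme value 14.7 pulls the mean well above four of the five observations, while the median remains at the middle of the sorted list.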
Clearly there is a difference in concept between the mean and median. It may be of interest to the reader with an engineering background that the sample mean
is the centroid of the data in a sample. In a sense, it is the point at which a fulcrum can be placed to balance a system of “weights” which are the locations of the individual data. This is shown in Figure 1.4 with regard to the with-nitrogen sample.

Figure 1.4: Sample mean as a centroid of the with-nitrogen stem weights (the observations plotted on a line from 0.25 to 0.90, balancing at x̄ = 0.565).

In future chapters, the basis for the computation of x̄ is that of an estimate of the population mean. As we indicated earlier, the purpose of statistical inference is to draw conclusions about population characteristics or parameters, and estimation is a very important feature of statistical inference. The median and mean can be quite different from each other. Note, however, that in the case of the stem weight data the sample mean value for no-nitrogen is quite similar to the median value.

Other Measures of Location

There are several other methods of quantifying the center of location of the data in the sample. We will not deal with them at this point. For the most part, alternatives to the sample mean are designed to produce values that represent compromises between the mean and the median. Rarely do we make use of these other measures. However, it is instructive to discuss one class of estimators, namely the class of trimmed means. A trimmed mean is computed by “trimming away” a certain percent of both the largest and the smallest set of values. For example, the 10% trimmed mean is found by eliminating the largest 10% and smallest 10% and computing the average of the remaining values. In the case of the stem weight data, we would eliminate the largest and smallest values since the sample size is 10 for each sample.
So for the without-nitrogen group the 10% trimmed mean is given by

x̄tr(10) = (0.32 + 0.37 + 0.47 + 0.43 + 0.36 + 0.42 + 0.38 + 0.43)/8 = 0.39750,

and for the 10% trimmed mean for the with-nitrogen group we have

x̄tr(10) = (0.43 + 0.47 + 0.49 + 0.52 + 0.75 + 0.79 + 0.62 + 0.46)/8 = 0.56625.

Note that in this case, as expected, the trimmed means are close to both the mean and the median for the individual samples. The trimmed mean is, of course, more insensitive to outliers than the sample mean but not as insensitive as the median. On the other hand, the trimmed mean approach makes use of more information than the sample median. Note that the sample median is, indeed, a special case of the trimmed mean in which all of the sample data are eliminated apart from the middle one or two observations.
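A trimmed mean is straightforward to code. In the sketch below (ours; the helper name is arbitrary), a 10% trim of ten values drops one observation from each end, exactly as in the hand computation above.

```python
import statistics

def trimmed_mean(data, percent):
    """Drop the smallest and largest `percent` of the values, then average the rest."""
    k = round(len(data) * percent / 100)
    trimmed = sorted(data)[k:len(data) - k]
    return statistics.mean(trimmed)

# The eight without-nitrogen values that survive the 10% trim, as listed above:
surviving = [0.32, 0.37, 0.47, 0.43, 0.36, 0.42, 0.38, 0.43]
print(round(statistics.mean(surviving), 5))   # 0.3975, matching x̄tr(10)

# On ten values, a 10% trim removes one observation from each end,
# so an extreme value such as 100 is discarded before averaging:
print(trimmed_mean([1, 2, 3, 4, 5, 6, 7, 8, 9, 100], 10))   # (2 + ... + 9) / 8 = 5.5
```

Setting `percent` large enough to leave only the middle one or two observations recovers the sample median, illustrating the special-case remark above.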
Exercises

1.1 The following measurements were recorded for the drying time, in hours, of a certain brand of latex paint.

3.4 2.5 4.8 2.9 3.6 2.8 3.3 5.6 3.7 2.8 4.4 4.0 5.2 3.0 4.8

Assume that the measurements are a simple random sample.
(a) What is the sample size for the above sample?
(b) Calculate the sample mean for these data.
(c) Calculate the sample median.
(d) Plot the data by way of a dot plot.
(e) Compute the 20% trimmed mean for the above data set.
(f) Is the sample mean for these data more or less descriptive as a center of location than the trimmed mean?

1.2 According to the journal Chemical Engineering, an important property of a fiber is its water absorbency. A random sample of 20 pieces of cotton fiber was taken and the absorbency on each piece was measured. The following are the absorbency values:

18.71 21.41 20.72 21.81 19.29 22.43 20.17 23.71 19.44 20.50
18.92 20.33 23.00 22.85 19.25 21.77 22.11 19.77 18.04 21.12

(a) Calculate the sample mean and median for the above sample values.
(b) Compute the 10% trimmed mean.
(c) Do a dot plot of the absorbency data.
(d) Using only the values of the mean, median, and trimmed mean, do you have evidence of outliers in the data?

1.3 A certain polymer is used for evacuation systems for aircraft. It is important that the polymer be resistant to the aging process. Twenty specimens of the polymer were used in an experiment. Ten were assigned randomly to be exposed to an accelerated batch aging process that involved exposure to high temperatures for 10 days. Measurements of tensile strength of the specimens were made, and the following data were recorded on tensile strength in psi:

No aging: 227 222 218 217 225 218 216 229 228 221
Aging: 219 214 215 211 209 218 203 204 201 205

(a) Do a dot plot of the data.
(b) From your plot, does it appear as if the aging process has had an effect on the tensile strength of this polymer? Explain.
(c) Calculate the sample mean tensile strength of the two samples.
(d) Calculate the median for both. Discuss the similarity or lack of similarity between the mean and median of each group.

1.4 In a study conducted by the Department of Mechanical Engineering at Virginia Tech, the steel rods supplied by two different companies were compared. Ten sample springs were made out of the steel rods supplied by each company, and a measure of flexibility was recorded for each. The data are as follows:

Company A: 9.3 8.8 6.8 8.7 8.5 6.7 8.0 6.5 9.2 7.0
Company B: 11.0 9.8 9.9 10.2 10.1 9.7 11.0 11.1 10.2 9.6

(a) Calculate the sample mean and median for the data for the two companies.
(b) Plot the data for the two companies on the same line and give your impression regarding any apparent differences between the two companies.

1.5 Twenty adult males between the ages of 30 and 40 participated in a study to evaluate the effect of a specific health regimen involving diet and exercise on the blood cholesterol. Ten were randomly selected to be a control group, and ten others were assigned to take part in the regimen as the treatment group for a period of 6 months. The following data show the reduction in cholesterol experienced for the time period for the 20 subjects:

Control group: 7 3 −4 14 2 5 22 −7 9 5
Treatment group: −6 5 9 4 4 12 37 5 3 3

(a) Do a dot plot of the data for both groups on the same graph.
(b) Compute the mean, median, and 10% trimmed mean for both groups.
(c) Explain why the difference in means suggests one conclusion about the effect of the regimen, while the difference in medians or trimmed means suggests a different conclusion.

1.6 The tensile strength of silicone rubber is thought to be a function of curing temperature. A study was carried out in which samples of 12 specimens of the rubber were prepared using curing temperatures of 20°C and 45°C. The data below show the tensile strength values in megapascals.
20°C: 2.07 2.14 2.22 2.03 2.21 2.03 2.05 2.18 2.09 2.14 2.11 2.02
45°C: 2.52 2.15 2.49 2.03 2.37 2.05 1.99 2.42 2.08 2.42 2.29 2.01

(a) Show a dot plot of the data with both low and high temperature tensile strength values.
(b) Compute sample mean tensile strength for both samples.
(c) Does it appear as if curing temperature has an influence on tensile strength, based on the plot? Comment further.
(d) Does anything else appear to be influenced by an increase in curing temperature? Explain.

1.4 Measures of Variability

Sample variability plays an important role in data analysis. Process and product variability is a fact of life in engineering and scientific systems: The control or reduction of process variability is often a source of major difficulty. More and more process engineers and managers are learning that product quality and, as a result, profits derived from manufactured products are very much a function of process variability. As a result, much of Chapters 9 through 15 deals with data analysis and modeling procedures in which sample variability plays a major role. Even in small data analysis problems, the success of a particular statistical method may depend on the magnitude of the variability among the observations in the sample. Measures of location in a sample do not provide a proper summary of the nature of a data set. For instance, in Example 1.2 we cannot conclude that the use of nitrogen enhances growth without taking sample variability into account. While the details of the analysis of this type of data set are deferred to Chapter 9, it should be clear from Figure 1.1 that variability among the no-nitrogen observations and variability among the nitrogen observations are certainly of some consequence. In fact, it appears that the variability within the nitrogen sample is larger than that of the no-nitrogen sample.
Perhaps there is something about the inclusion of nitrogen that not only increases the stem weight (x̄ of 0.565 gram compared to an x̄ of 0.399 gram for the no-nitrogen sample) but also increases the variability in stem weight (i.e., renders the stem weight more inconsistent). As another example, contrast the two data sets below. Each contains two samples and the difference in the means is roughly the same for the two samples, but data set B seems to provide a much sharper contrast between the two populations from which the samples were taken. If the purpose of such an experiment is to detect differences between the two populations, the task is accomplished in the case of data set B. However, in data set A the large variability within the two samples creates difficulty. In fact, it is not clear that there is a distinction between the two populations.

Data set A: X X X X X X 0 X X 0 0 X X X 0 0 0 0 0 0 0 0
Data set B: X X X X X X X X X X X 0 0 0 0 0 0 0 0 0 0 0

(In each schematic the observations are plotted in order along a line, with x̄X and x̄0 marking the locations of the two sample means. In data set A the X and 0 samples overlap heavily; in data set B they separate cleanly.)
Sample Range and Sample Standard Deviation

Just as there are many measures of central tendency or location, there are many measures of spread or variability. Perhaps the simplest one is the sample range Xmax − Xmin. The range can be very useful and is discussed at length in Chapter 17 on statistical quality control. The sample measure of spread that is used most often is the sample standard deviation. We again let x1, x2, . . . , xn denote sample values.

Definition 1.3: The sample variance, denoted by s², is given by

s² = Σ_{i=1}^{n} (xi − x̄)² / (n − 1).

The sample standard deviation, denoted by s, is the positive square root of s², that is, s = √s².

It should be clear to the reader that the sample standard deviation is, in fact, a measure of variability. Large variability in a data set produces relatively large values of (xi − x̄)² and thus a large sample variance. The quantity n − 1 is often called the degrees of freedom associated with the variance estimate. In this simple example, the degrees of freedom depict the number of independent pieces of information available for computing variability. For example, suppose that we wish to compute the sample variance and standard deviation of the data set (5, 17, 6, 4). The sample average is x̄ = 8. The computation of the variance involves

(5 − 8)² + (17 − 8)² + (6 − 8)² + (4 − 8)² = (−3)² + 9² + (−2)² + (−4)².

The quantities inside the parentheses sum to zero. In general,

Σ_{i=1}^{n} (xi − x̄) = 0

(see Exercise 1.16 on page 31). Then the computation of a sample variance does not involve n independent squared deviations from the mean x̄. In fact, since the last value of xi − x̄ is determined by the initial n − 1 of them, we say that these are n − 1 “pieces of information” that produce s². Thus, there are n − 1 degrees of freedom rather than n degrees of freedom for computing a sample variance.
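The (5, 17, 6, 4) example can be reproduced directly. This sketch, not from the text, shows both the zero-sum property of the deviations and the division by n − 1:

```python
import statistics

data = [5, 17, 6, 4]
xbar = statistics.mean(data)                  # 8
deviations = [x - xbar for x in data]         # [-3, 9, -2, -4]; these sum to zero
s2 = sum(d * d for d in deviations) / (len(data) - 1)   # 110 / 3, using n - 1 = 3
print(sum(deviations) == 0, round(s2, 4))     # True 36.6667

# The library's own estimator uses the same n - 1 denominator:
print(round(statistics.variance(data), 4))    # 36.6667
```

Because the deviations must sum to zero, only three of the four are free to vary, which is exactly the degrees-of-freedom argument made above.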
Example 1.4: In an example discussed extensively in Chapter 10, an engineer is interested in testing the “bias” in a pH meter. Data are collected on the meter by measuring the pH of a neutral substance (pH = 7.0). A sample of size 10 is taken, with results given by

7.07 7.00 7.10 6.97 7.00 7.03 7.01 7.01 6.98 7.08.

The sample mean x̄ is given by

x̄ = (7.07 + 7.00 + 7.10 + · · · + 7.08)/10 = 7.0250.
The sample variance s² is given by

s² = (1/9)[(7.07 − 7.025)² + (7.00 − 7.025)² + (7.10 − 7.025)² + · · · + (7.08 − 7.025)²] = 0.001939.

As a result, the sample standard deviation is given by

s = √0.001939 = 0.044.

So the sample standard deviation is 0.0440 with n − 1 = 9 degrees of freedom.

Units for Standard Deviation and Variance

It should be apparent from Definition 1.3 that the variance is a measure of the average squared deviation from the mean x̄. We use the term average squared deviation even though the definition makes use of a division by degrees of freedom n − 1 rather than n. Of course, if n is large, the difference in the denominator is inconsequential. As a result, the sample variance possesses units that are the square of the units in the observed data whereas the sample standard deviation is found in linear units. As an example, consider the data of Example 1.2. The stem weights are measured in grams. As a result, the sample standard deviations are in grams and the variances are measured in grams². In fact, the individual standard deviations are 0.0728 gram for the no-nitrogen case and 0.1867 gram for the nitrogen group. Note that the standard deviation does indicate considerably larger variability in the nitrogen sample. This condition was displayed in Figure 1.1.

Which Variability Measure Is More Important?

As we indicated earlier, the sample range has applications in the area of statistical quality control. It may appear to the reader that the use of both the sample variance and the sample standard deviation is redundant. Both measures reflect the same concept in measuring variability, but the sample standard deviation measures variability in linear units whereas the sample variance is measured in squared units. Both play huge roles in the use of statistical methods.
Much of what is accomplished in the context of statistical inference involves drawing conclusions about characteristics of populations. Among these characteristics are constants which are called population parameters. Two important parameters are the population mean and the population variance. The sample variance plays an explicit role in the statistical methods used to draw inferences about the population variance. The sample standard deviation has an important role along with the sample mean in inferences that are made about the population mean. In general, the variance is considered more in inferential theory, while the standard deviation is used more in applications.
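The computations of Example 1.4 are equally quick to verify with the standard library; this sketch is ours, not the text's.

```python
import statistics

ph = [7.07, 7.00, 7.10, 6.97, 7.00, 7.03, 7.01, 7.01, 6.98, 7.08]
xbar = statistics.mean(ph)
s = statistics.stdev(ph)   # sample standard deviation, with n - 1 = 9 in the denominator
print(round(xbar, 4), round(s * s, 6), round(s, 3))   # 7.025 0.001939 0.044
```

The printed values match the hand computation in Example 1.4: x̄ = 7.0250, s² = 0.001939, and s = 0.044.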
Exercises

1.7 Consider the drying time data for Exercise 1.1 on page 13. Compute the sample variance and sample standard deviation.

1.8 Compute the sample variance and standard deviation for the water absorbency data of Exercise 1.2 on page 13.

1.9 Exercise 1.3 on page 13 showed tensile strength data for two samples, one in which specimens were exposed to an aging process and one in which there was no aging of the specimens.
(a) Calculate the sample variance as well as standard deviation in tensile strength for both samples.
(b) Does there appear to be any evidence that aging affects the variability in tensile strength? (See also the plot for Exercise 1.3 on page 13.)

1.10 For the data of Exercise 1.4 on page 13, compute both the mean and the variance in “flexibility” for both company A and company B. Does there appear to be a difference in flexibility between company A and company B?

1.11 Consider the data in Exercise 1.5 on page 13. Compute the sample variance and the sample standard deviation for both control and treatment groups.

1.12 For Exercise 1.6 on page 13, compute the sample standard deviation in tensile strength for the samples separately for the two temperatures. Does it appear as if an increase in temperature influences the variability in tensile strength? Explain.

1.5 Discrete and Continuous Data

Statistical inference through the analysis of observational studies or designed experiments is used in many scientific areas. The data gathered may be discrete or continuous, depending on the area of application. For example, a chemical engineer may be interested in conducting an experiment that will lead to conditions where yield is maximized. Here, of course, the yield may be in percent or grams/pound, measured on a continuum. On the other hand, a toxicologist conducting a combination drug experiment may encounter data that are binary in nature (i.e., the patient either responds or does not).
Great distinctions are made between discrete and continuous data in the probability theory that allows us to draw statistical inferences. Often applications of statistical inference are found when the data are count data. For example, an engineer may be interested in studying the number of radioactive particles passing through a counter in, say, 1 millisecond. Personnel responsible for the efficiency of a port facility may be interested in the properties of the number of oil tankers arriving each day at a certain port city. In Chapter 5, several distinct scenarios, leading to different ways of handling data, are discussed for situations with count data. Special attention even at this early stage of the textbook should be paid to some details associated with binary data. Applications requiring statistical analysis of binary data are voluminous. Often the measure that is used in the analysis is the sample proportion. Obviously the binary situation involves two categories. If there are n units involved in the data and x is defined as the number that fall into category 1, then n − x fall into category 2. Thus, x/n is the sample proportion in category 1, and 1 − x/n is the sample proportion in category 2. In the biomedical application, 50 patients may represent the sample units, and if 20 out of 50 experienced an improvement in a stomach ailment (common to all 50) after all were given the drug, then 20/50 = 0.4 is the sample proportion for which
the drug was a success and 1 − 0.4 = 0.6 is the sample proportion for which the drug was not successful. Actually the basic numerical measurement for binary data is generally denoted by either 0 or 1. For example, in our medical example, a successful result is denoted by a 1 and a nonsuccess by a 0. As a result, the sample proportion is actually a sample mean of the ones and zeros. For the successful category,

(x1 + x2 + · · · + x50)/50 = (1 + 1 + 0 + · · · + 0 + 1)/50 = 20/50 = 0.4.

What Kinds of Problems Are Solved in Binary Data Situations?

The kinds of problems facing scientists and engineers dealing in binary data are not a great deal unlike those seen where continuous measurements are of interest. However, different techniques are used since the statistical properties of sample proportions are quite different from those of the sample means that result from averages taken from continuous populations. Consider the example data in Exercise 1.6 on page 13. The statistical problem underlying this illustration focuses on whether an intervention, say, an increase in curing temperature, will alter the population mean tensile strength associated with the silicone rubber process. On the other hand, in a quality control area, suppose an automobile tire manufacturer reports that a shipment of 5000 tires selected randomly from the process results in 100 of them showing blemishes. Here the sample proportion is 100/5000 = 0.02. Following a change in the process designed to reduce blemishes, a second sample of 5000 is taken and 90 tires are blemished. The sample proportion has been reduced to 90/5000 = 0.018.
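The point that a sample proportion is just the sample mean of zeros and ones is easy to see in code; the sketch below is illustrative and not part of the text.

```python
import statistics

# The medical example: 50 patients, 20 successes coded 1, 30 nonsuccesses coded 0.
outcomes = [1] * 20 + [0] * 30
p_hat = statistics.mean(outcomes)   # the sample proportion is the mean of the 0/1 data
print(p_hat)                        # 0.4

# The tire example: proportions of blemished tires before and after the process change.
print(100 / 5000, 90 / 5000)        # 0.02 0.018
```

Whether the drop from 0.02 to 0.018 reflects a real improvement in the population proportion is exactly the inferential question raised next.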
The question arises, “Is the decrease in the sample proportion from 0.02 to 0.018 substantial enough to suggest a real improvement in the population proportion?” Both of these illustrations require the use of the statistical properties of sample averages—one from samples from a continuous population, and the other from samples from a discrete (binary) population. In both cases, the sample mean is an estimate of a population parameter, a population mean in the first illustration (i.e., mean tensile strength), and a population proportion in the second case (i.e., proportion of blemished tires in the population). So here we have sample estimates used to draw scientific conclusions regarding population parameters. As we indicated in Section 1.3, this is the general theme in many practical problems using statistical inference.

1.6 Statistical Modeling, Scientific Inspection, and Graphical Diagnostics

Often the end result of a statistical analysis is the estimation of parameters of a postulated model. This is natural for scientists and engineers since they often deal in modeling. A statistical model is not deterministic but, rather, must entail some probabilistic aspects. A model form is often the foundation of assumptions that are made by the analyst. For example, in Example 1.2 the scientist may wish to draw some level of distinction between the nitrogen and no-nitrogen populations through the sample information. The analysis may require a certain model for
the data, for example, that the two samples come from normal or Gaussian distributions. See Chapter 6 for a discussion of the normal distribution.

Obviously, the user of statistical methods cannot generate sufficient information or experimental data to characterize the population totally. But sets of data are often used to learn about certain properties of the population. Scientists and engineers are accustomed to dealing with data sets. The importance of characterizing or summarizing the nature of collections of data should be obvious. Often a summary of a collection of data via a graphical display can provide insight regarding the system from which the data were taken. For instance, in Sections 1.1 and 1.3, we have shown dot plots. In this section, the role of sampling and the display of data for enhancement of statistical inference is explored in detail. We merely introduce some simple but often effective displays that complement the study of statistical populations.

Scatter Plot

At times the model postulated may take on a somewhat complicated form. Consider, for example, a textile manufacturer who designs an experiment where cloth specimens that contain various percentages of cotton are produced. Consider the data in Table 1.3.

Table 1.3: Tensile Strength

Cotton Percentage   Tensile Strength
15                  7, 7, 9, 8, 10
20                  19, 20, 21, 20, 22
25                  21, 21, 17, 19, 20
30                  8, 7, 8, 9, 10

Five cloth specimens are manufactured for each of the four cotton percentages. In this case, both the model for the experiment and the type of analysis used should take into account the goal of the experiment and important input from the textile scientist. Some simple graphics can shed important light on the clear distinction between the samples. See Figure 1.5; the sample means and variability are depicted nicely in the scatter plot.
One possible goal of this experiment is simply to determine which cotton percentages are truly distinct from the others. In other words, as in the case of the nitrogen/no-nitrogen data, for which cotton percentages are there clear distinctions between the populations or, more specifically, between the population means? In this case, perhaps a reasonable model is that each sample comes from a normal distribution. Here the goal is very much like that of the nitrogen/no-nitrogen data except that more samples are involved. The formalism of the analysis involves notions of hypothesis testing discussed in Chapter 10. Incidentally, this formality is perhaps not necessary in light of the diagnostic plot. But does this describe the real goal of the experiment and hence the proper approach to data analysis? It is likely that the scientist anticipates the existence of a maximum population mean tensile strength in the range of cotton concentration in the experiment. Here the analysis of the data should revolve
around a different type of model, one that postulates a type of structure relating the population mean tensile strength to the cotton concentration. In other words, a model may be written

μt,c = β0 + β1C + β2C²,

where μt,c is the population mean tensile strength, which varies with the amount of cotton in the product C. The implication of this model is that for a fixed cotton level, there is a population of tensile strength measurements and the population mean is μt,c. This type of model, called a regression model, is discussed in Chapters 11 and 12. The functional form is chosen by the scientist. At times the data analysis may suggest that the model be changed. Then the data analyst “entertains” a model that may be altered after some analysis is done. The use of an empirical model is accompanied by estimation theory, where β0, β1, and β2 are estimated by the data. Further, statistical inference can then be used to determine model adequacy.

Figure 1.5: Scatter plot of tensile strength and cotton percentages.

Two points become evident from the two data illustrations here: (1) The type of model used to describe the data often depends on the goal of the experiment; and (2) the structure of the model should take advantage of nonstatistical scientific input. A selection of a model represents a fundamental assumption upon which the resulting statistical inference is based. It will become apparent throughout the book how important graphics can be. Often, plots can illustrate information that allows the results of the formal statistical inference to be better communicated to the scientist or engineer. At times, plots or exploratory data analysis can teach the analyst something not retrieved from the formal analysis. Almost any formal analysis requires assumptions that evolve from the model of the data.
Graphics can nicely highlight violation of assumptions that would otherwise go unnoticed. Throughout the book, graphics are used extensively to supplement formal data analysis. The following sections reveal some graphical tools that are useful in exploratory or descriptive data analysis.
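For readers who want to experiment, the quadratic regression model μt,c = β0 + β1C + β2C² described above can be fit to the Table 1.3 data by least squares. This is only an illustrative sketch (estimation and regression are developed formally in Chapters 11 and 12); NumPy's polyfit is used here for convenience:

```python
import numpy as np

# Table 1.3: five tensile strengths at each cotton percentage
cotton   = [15] * 5 + [20] * 5 + [25] * 5 + [30] * 5
strength = [7, 7, 9, 8, 10,
            19, 20, 21, 20, 22,
            21, 21, 17, 19, 20,
            8, 7, 8, 9, 10]

# Least-squares fit of strength = b0 + b1*C + b2*C^2
# (polyfit returns coefficients from highest degree down)
b2, b1, b0 = np.polyfit(cotton, strength, deg=2)

print(b2 < 0)          # True: negative curvature, consistent with a maximum
print(-b1 / (2 * b2))  # cotton level at the maximum of the fitted parabola
```

The negative coefficient on C² reflects the concave pattern visible in the scatter plot, with the fitted maximum falling between 20% and 25% cotton.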
Stem-and-Leaf Plot

Statistical data, generated in large masses, can be very useful for studying the behavior of the distribution if presented in a combined tabular and graphic display called a stem-and-leaf plot. To illustrate the construction of a stem-and-leaf plot, consider the data of Table 1.4, which specifies the “life” of 40 similar car batteries recorded to the nearest tenth of a year. The batteries are guaranteed to last 3 years. First, split each observation into two parts consisting of a stem and a leaf such that the stem represents the digit preceding the decimal and the leaf corresponds to the decimal part of the number. In other words, for the number 3.7, the digit 3 is designated the stem and the digit 7 is the leaf. The four stems 1, 2, 3, and 4 for our data are listed vertically on the left side in Table 1.5; the leaves are recorded on the right side opposite the appropriate stem value. Thus, the leaf 6 of the number 1.6 is recorded opposite the stem 1; the leaf 5 of the number 2.5 is recorded opposite the stem 2; and so forth. The number of leaves recorded opposite each stem is summarized under the frequency column.

Table 1.4: Car Battery Life

2.2  4.1  3.5  4.5  3.2  3.7  3.0  2.6
3.4  1.6  3.1  3.3  3.8  3.1  4.7  3.7
2.5  4.3  3.4  3.6  2.9  3.3  3.9  3.1
3.3  3.1  3.7  4.4  3.2  4.1  1.9  3.4
4.7  3.8  3.2  2.6  3.9  3.0  4.2  3.5

Table 1.5: Stem-and-Leaf Plot of Battery Life

Stem  Leaf                       Frequency
1     69                         2
2     25669                      5
3     0011112223334445567778899  25
4     11234577                   8

The stem-and-leaf plot of Table 1.5 contains only four stems and consequently does not provide an adequate picture of the distribution. To remedy this problem, we need to increase the number of stems in our plot.
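The construction just described is mechanical enough to script. A short sketch that rebuilds the four-stem plot of Table 1.5 from the Table 1.4 data (scaling each observation to integer tenths to sidestep floating-point surprises):

```python
from collections import defaultdict

# Battery lives from Table 1.4, recorded to the nearest tenth of a year
lives = [2.2, 4.1, 3.5, 4.5, 3.2, 3.7, 3.0, 2.6, 3.4, 1.6,
         3.1, 3.3, 3.8, 3.1, 4.7, 3.7, 2.5, 4.3, 3.4, 3.6,
         2.9, 3.3, 3.9, 3.1, 3.3, 3.1, 3.7, 4.4, 3.2, 4.1,
         1.9, 3.4, 4.7, 3.8, 3.2, 2.6, 3.9, 3.0, 4.2, 3.5]

plot = defaultdict(list)
for x in lives:
    tenths = int(round(x * 10))       # 3.7 -> 37
    stem, leaf = divmod(tenths, 10)   # 37 -> stem 3, leaf 7
    plot[stem].append(leaf)

for stem in sorted(plot):
    leaves = "".join(str(d) for d in sorted(plot[stem]))
    print(f"{stem} | {leaves}  ({len(leaves)})")
```

The printed frequencies (2, 5, 25, and 8) match the frequency column of Table 1.5.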
One simple way to accomplish this is to write each stem value twice and then record the leaves 0, 1, 2, 3, and 4 opposite the appropriate stem value where it appears for the first time, and the leaves 5, 6, 7, 8, and 9 opposite this same stem value where it appears for the second time. This modified double-stem-and-leaf plot is illustrated in Table 1.6, where the stems corresponding to leaves 0 through 4 have been coded by the symbol ⋆ and the stems corresponding to leaves 5 through 9 by the symbol ·. In any given problem, we must decide on the appropriate stem values. This decision is made somewhat arbitrarily, although we are guided by the size of our sample. Usually, we choose between 5 and 20 stems. The smaller the number of data available, the smaller is our choice for the number of stems. For example, if
the data consist of numbers from 1 to 21 representing the number of people in a cafeteria line on 40 randomly selected workdays and we choose a double-stem-and-leaf plot, the stems will be 0⋆, 0·, 1⋆, 1·, and 2⋆ so that the smallest observation 1 has stem 0⋆ and leaf 1, the number 18 has stem 1· and leaf 8, and the largest observation 21 has stem 2⋆ and leaf 1. On the other hand, if the data consist of numbers from $18,800 to $19,600 representing the best possible deals on 100 new automobiles from a certain dealership and we choose a single-stem-and-leaf plot, the stems will be 188, 189, 190, . . . , 196 and the leaves will now each contain two digits. A car that sold for $19,385 would have a stem value of 193 and the two-digit leaf 85. Multiple-digit leaves belonging to the same stem are usually separated by commas in the stem-and-leaf plot. Decimal points in the data are generally ignored when all the digits to the right of the decimal represent the leaf. Such was the case in Tables 1.5 and 1.6. However, if the data consist of numbers ranging from 21.8 to 74.9, we might choose the digits 2, 3, 4, 5, 6, and 7 as our stems so that a number such as 48.3 would have a stem value of 4 and a leaf of 8.3.

Table 1.6: Double-Stem-and-Leaf Plot of Battery Life

Stem  Leaf             Frequency
1·    69               2
2⋆    2                1
2·    5669             4
3⋆    001111222333444  15
3·    5567778899       10
4⋆    11234            5
4·    577              3

The stem-and-leaf plot represents an effective way to summarize data. Another way is through the use of the frequency distribution, where the data, grouped into different classes or intervals, can be constructed by counting the leaves belonging to each stem and noting that each stem defines a class interval.
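Counting observations per class interval is likewise easy to automate. The sketch below groups the Table 1.4 battery data into the seven class intervals defined by the double-stem plot, reproducing the frequencies 2, 1, 4, 15, 10, 5, and 3:

```python
# Frequency distribution for the Table 1.4 battery data, using the
# seven class intervals defined by the stems of Table 1.6.
lives = [2.2, 4.1, 3.5, 4.5, 3.2, 3.7, 3.0, 2.6, 3.4, 1.6,
         3.1, 3.3, 3.8, 3.1, 4.7, 3.7, 2.5, 4.3, 3.4, 3.6,
         2.9, 3.3, 3.9, 3.1, 3.3, 3.1, 3.7, 4.4, 3.2, 4.1,
         1.9, 3.4, 4.7, 3.8, 3.2, 2.6, 3.9, 3.0, 4.2, 3.5]

intervals = [(1.5, 1.9), (2.0, 2.4), (2.5, 2.9), (3.0, 3.4),
             (3.5, 3.9), (4.0, 4.4), (4.5, 4.9)]

n = len(lives)
counts = []
for lo, hi in intervals:
    f = sum(1 for x in lives if lo <= x <= hi)  # leaves in this class
    counts.append(f)
    print(f"{lo}-{hi}  midpoint {(lo + hi) / 2:.1f}  f={f:2d}  rel={f / n:.3f}")
```

Dividing each count by n = 40 gives the relative frequencies discussed in the next subsection.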
In Table 1.5, the stem 1 with 2 leaves defines the interval 1.0–1.9 containing 2 observations; the stem 2 with 5 leaves defines the interval 2.0–2.9 containing 5 observations; the stem 3 with 25 leaves defines the interval 3.0–3.9 with 25 observations; and the stem 4 with 8 leaves defines the interval 4.0–4.9 containing 8 observations. For the double-stem-and-leaf plot of Table 1.6, the stems define the seven class intervals 1.5–1.9, 2.0–2.4, 2.5–2.9, 3.0–3.4, 3.5–3.9, 4.0–4.4, and 4.5–4.9 with frequencies 2, 1, 4, 15, 10, 5, and 3, respectively.

Histogram

Dividing each class frequency by the total number of observations, we obtain the proportion of the set of observations in each of the classes. A table listing relative frequencies is called a relative frequency distribution. The relative frequency distribution for the data of Table 1.4, showing the midpoint of each class interval, is given in Table 1.7. The information provided by a relative frequency distribution in tabular form is easier to grasp if presented graphically. Using the midpoint of each interval and the
Table 1.7: Relative Frequency Distribution of Battery Life

Class Interval  Class Midpoint  Frequency, f  Relative Frequency
1.5–1.9         1.7             2             0.050
2.0–2.4         2.2             1             0.025
2.5–2.9         2.7             4             0.100
3.0–3.4         3.2             15            0.375
3.5–3.9         3.7             10            0.250
4.0–4.4         4.2             5             0.125
4.5–4.9         4.7             3             0.075

Figure 1.6: Relative frequency histogram.

corresponding relative frequency, we construct a relative frequency histogram (Figure 1.6).

Many continuous frequency distributions can be represented graphically by the characteristic bell-shaped curve of Figure 1.7. Graphical tools such as what we see in Figures 1.6 and 1.7 aid in the characterization of the nature of the population. In Chapters 5 and 6 we discuss a property of the population called its distribution. While a more rigorous definition of a distribution or probability distribution will be given later in the text, at this point one can view it as what would be seen in Figure 1.7 in the limit as the size of the sample becomes larger.

A distribution is said to be symmetric if it can be folded along a vertical axis so that the two sides coincide. A distribution that lacks symmetry with respect to a vertical axis is said to be skewed. The distribution illustrated in Figure 1.8(a) is said to be skewed to the right since it has a long right tail and a much shorter left tail. In Figure 1.8(b) we see that the distribution is symmetric, while in Figure 1.8(c) it is skewed to the left.

If we rotate a stem-and-leaf plot counterclockwise through an angle of 90°, we observe that the resulting columns of leaves form a picture that is similar to a histogram. Consequently, if our primary purpose in looking at the data is to determine the general shape or form of the distribution, it will seldom be necessary
Figure 1.7: Estimating frequency distribution.

Figure 1.8: Skewness of data.

to construct a relative frequency histogram.

Box-and-Whisker Plot or Box Plot

Another display that is helpful for reflecting properties of a sample is the box-and-whisker plot. This plot encloses the interquartile range of the data in a box that has the median displayed within. The interquartile range has as its extremes the 75th percentile (upper quartile) and the 25th percentile (lower quartile). In addition to the box, “whiskers” extend, showing extreme observations in the sample. For reasonably large samples, the display shows center of location, variability, and the degree of asymmetry.

In addition, a variation called a box plot can provide the viewer with information regarding which observations may be outliers. Outliers are observations that are considered to be unusually far from the bulk of the data. There are many statistical tests that are designed to detect outliers. Technically, one may view an outlier as being an observation that represents a “rare event” (there is a small probability of obtaining a value that far from the bulk of the data). The concept of outliers resurfaces in Chapter 12 in the context of regression analysis.
The visual information in the box-and-whisker plot or box plot is not intended to be a formal test for outliers. Rather, it is viewed as a diagnostic tool. While the determination of which observations are outliers varies with the type of software that is used, one common procedure is to use a multiple of the interquartile range. For example, if the distance from the box exceeds 1.5 times the interquartile range (in either direction), the observation may be labeled an outlier.

Example 1.5: Nicotine content was measured in a random sample of 40 cigarettes. The data are displayed in Table 1.8.

Table 1.8: Nicotine Data for Example 1.5

1.09  1.92  2.31  1.79  2.28  1.74  1.47  1.97
0.85  1.24  1.58  2.03  1.70  2.17  2.55  2.11
1.86  1.90  1.68  1.51  1.64  0.72  1.69  1.85
1.82  1.79  2.46  1.88  2.08  1.67  1.37  1.93
1.40  1.64  2.09  1.75  1.63  2.37  1.75  1.69

Figure 1.9: Box-and-whisker plot for Example 1.5.

Figure 1.9 shows the box-and-whisker plot of the data, depicting the observations 0.72 and 0.85 as mild outliers in the lower tail, whereas the observation 2.55 is a mild outlier in the upper tail. In this example, the interquartile range is 0.365, and 1.5 times the interquartile range is 0.5475. Figure 1.10, on the other hand, provides a stem-and-leaf plot.

Example 1.6: Consider the data in Table 1.9, consisting of 30 samples measuring the thickness of paint can “ears” (see the work by Hogg and Ledolter, 1992, in the Bibliography). Figure 1.11 depicts a box-and-whisker plot for this asymmetric set of data. Notice that the left block is considerably larger than the block on the right. The median is 35. The lower quartile is 31, while the upper quartile is 36. Notice also that the extreme observation on the right is farther away from the box than the extreme observation on the left. There are no outliers in this data set.
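The 1.5 × IQR labeling rule mentioned above can be sketched as a small function. As the text notes, quartile conventions differ across software, so borderline observations may or may not be flagged; this sketch uses linear interpolation between order statistics (one common convention) and is demonstrated on an illustrative made-up sample rather than the nicotine data:

```python
def percentile(data, p):
    """p-th percentile (0-100) by linear interpolation between
    order statistics; one of several common conventions."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (k - lo) * (s[hi] - s[lo])

def iqr_outliers(data, whisker=1.5):
    """Observations farther than whisker * IQR beyond the quartiles."""
    q1, q3 = percentile(data, 25), percentile(data, 75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - whisker * iqr, q3 + whisker * iqr
    return [x for x in data if x < lo_fence or x > hi_fence]

# Illustrative sample (not from the text): one value far from the bulk.
sample = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
print(iqr_outliers(sample))   # [100]
```

Here q1 = 3.25 and q3 = 7.75, so the fences sit at −3.5 and 14.5 and only the value 100 is flagged. Applied to real data such as Table 1.8, the exact fences (and hence borderline flags) depend on which quartile convention the software uses.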
The decimal point is 1 digit(s) to the left of the |
 7 | 2
 8 | 5
 9 |
10 | 9
11 |
12 | 4
13 | 7
14 | 07
15 | 18
16 | 3447899
17 | 045599
18 | 2568
19 | 0237
20 | 389
21 | 17
22 | 8
23 | 17
24 | 6
25 | 5

Figure 1.10: Stem-and-leaf plot for the nicotine data.

Table 1.9: Data for Example 1.6

Sample  Measurements      Sample  Measurements
1       29 36 39 34 34    16      35 30 35 29 37
2       29 29 28 32 31    17      40 31 38 35 31
3       34 34 39 38 37    18      35 36 30 33 32
4       35 37 33 38 41    19      35 34 35 30 36
5       30 29 31 38 29    20      35 35 31 38 36
6       34 31 37 39 36    21      32 36 36 32 36
7       30 35 33 40 36    22      36 37 32 34 34
8       28 28 31 34 30    23      29 34 33 37 35
9       32 36 38 38 35    24      36 36 35 37 37
10      35 30 37 35 31    25      36 30 35 33 31
11      35 30 35 38 35    26      35 30 29 38 35
12      38 34 35 35 31    27      35 36 30 34 36
13      34 35 33 30 34    28      35 30 36 29 35
14      40 35 34 33 35    29      38 36 35 31 31
15      34 35 38 35 30    30      30 34 40 28 30

There are additional ways that box-and-whisker plots and other graphical displays can aid the analyst. Multiple samples can be compared graphically. Plots of data can suggest relationships between variables. Graphs can aid in the detection of anomalies or outlying observations in samples. There are other types of graphical tools and plots that are used. These are discussed in Chapter 8 after we introduce additional theoretical details.
Figure 1.11: Box-and-whisker plot for thickness of paint can “ears.”

Other Distinguishing Features of a Sample

There are features of the distribution or sample other than measures of center of location and variability that further define its nature. For example, while the median divides the data (or distribution) into two parts, there are other measures that divide parts or pieces of the distribution that can be very useful. Separation is made into four parts by quartiles, with the third quartile separating the upper quarter of the data from the rest, the second quartile being the median, and the first quartile separating the lower quarter of the data from the rest. The distribution can be even more finely divided by computing percentiles of the distribution. These quantities give the analyst a sense of the so-called tails of the distribution (i.e., values that are relatively extreme, either small or large). For example, the 95th percentile separates the highest 5% from the bottom 95%. Similar definitions prevail for extremes on the lower side or lower tail of the distribution. The 1st percentile separates the bottom 1% from the rest of the distribution. The concept of percentiles will play a major role in much that will be covered in future chapters.

1.7 General Types of Statistical Studies: Designed Experiment, Observational Study, and Retrospective Study

In the foregoing sections we have emphasized the notion of sampling from a population and the use of statistical methods to learn or perhaps affirm important information about the population. The information sought and learned through the use of these statistical methods can often be influential in decision making and problem solving in many important scientific and engineering areas.
As an illustration, Example 1.3 describes a simple experiment in which the results may provide an aid in determining the kinds of conditions under which it is not advisable to use a particular aluminum alloy that may have a dangerous vulnerability to corrosion. The results may be of use not only to those who produce the alloy, but also to the customer who may consider using it. This illustration, as well as many more that appear in Chapters 13 through 15, highlights the concept of designing or controlling experimental conditions (combinations of coating conditions and humidity) of
interest to learn about some characteristic or measurement (level of corrosion) that results from these conditions. Statistical methods that make use of measures of central tendency in the corrosion measure, as well as measures of variability, are employed. As the reader will observe later in the text, these methods often lead to a statistical model like that discussed in Section 1.6. In this case, the model may be used to estimate (or predict) the corrosion measure as a function of humidity and the type of coating employed. Again, in developing this kind of model, descriptive statistics that highlight central tendency and variability become very useful.

The information supplied in Example 1.3 illustrates nicely the types of engineering questions asked and answered by the use of statistical methods that are employed through a designed experiment and presented in this text. They are

(i) What is the nature of the impact of relative humidity on the corrosion of the aluminum alloy within the range of relative humidity in this experiment?

(ii) Does the chemical corrosion coating reduce corrosion levels and can the effect be quantified in some fashion?

(iii) Is there interaction between coating type and relative humidity that impacts their influence on corrosion of the alloy? If so, what is its interpretation?

What Is Interaction?

The importance of questions (i) and (ii) should be clear to the reader, as they deal with issues important to both producers and users of the alloy. But what about question (iii)? The concept of interaction will be discussed at length in Chapters 14 and 15. Consider the plot in Figure 1.3. This is an illustration of the detection of interaction between two factors in a simple designed experiment. Note that the lines connecting the sample means are not parallel.
Parallelism would have indicated that the effect (seen as a result of the slope of the lines) of relative humidity is the same, namely a negative effect, for both an uncoated condition and the chemical corrosion coating. Recall that the negative slope implies that corrosion becomes more pronounced as humidity rises. Lack of parallelism implies an interaction between coating type and relative humidity. The nearly “flat” line for the corrosion coating as opposed to a steeper slope for the uncoated condition suggests that not only is the chemical corrosion coating beneficial (note the displacement between the lines), but the presence of the coating renders the effect of humidity negligible. Clearly all these questions are very important to the effect of the two individual factors and to the interpretation of the interaction, if it is present. Statistical models are extremely useful in answering questions such as those listed in (i), (ii), and (iii), where the data come from a designed experiment. But one does not always have the luxury or resources to employ a designed experiment. For example, there are many instances in which the conditions of interest to the scientist or engineer cannot be implemented simply because the important factors cannot be controlled. In Example 1.3, the relative humidity and coating type (or lack of coating) are quite easy to control. This of course is the defining feature of a designed experiment. In many fields, factors that need to be studied cannot be controlled for any one of various reasons. Tight control as in Example 1.3 allows the analyst to be confident that any differences found (for example, in corrosion levels)
are due to the factors under control. As a second illustration, consider Exercise 1.6 on page 13. Suppose in this case 24 specimens of silicone rubber are selected and 12 assigned to each of the curing temperature levels. The temperatures are controlled carefully, and thus this is an example of a designed experiment with a single factor being curing temperature. Differences found in the mean tensile strength would be assumed to be attributed to the different curing temperatures.

What If Factors Are Not Controlled?

Suppose there are no factors controlled and no random assignment of fixed treatments to experimental units and yet there is a need to glean information from a data set. As an illustration, consider a study in which interest centers around the relationship between blood cholesterol levels and the amount of sodium measured in the blood. A group of individuals were monitored over time for both blood cholesterol and sodium. Certainly some useful information can be gathered from such a data set. However, it should be clear that there certainly can be no strict control of blood sodium levels. Ideally, the subjects should be divided randomly into two groups, with one group assigned a specific high level of blood sodium and the other a specific low level of blood sodium. Obviously this cannot be done. Clearly changes in cholesterol can be experienced because of changes in one of a number of other factors that were not controlled. This kind of study, without factor control, is called an observational study. Much of the time it involves a situation in which subjects are observed across time.

Biological and biomedical studies are often by necessity observational studies. However, observational studies are not confined to those areas. For example, consider a study that is designed to determine the influence of ambient temperature on the electric power consumed by a chemical plant.
Clearly, levels of ambient temperature cannot be controlled, and thus the data structure can only be a monitoring of the data from the plant over time.

It should be apparent that the striking difference between a well-designed experiment and observational studies is the difficulty in determination of true cause and effect with the latter. Also, differences found in the fundamental response (e.g., corrosion levels, blood cholesterol, plant electric power consumption) may be due to other underlying factors that were not controlled. Ideally, in a designed experiment the nuisance factors would be equalized via the randomization process. Certainly changes in blood cholesterol could be due to fat intake, exercise activity, and so on. Electric power consumption could be affected by the amount of product produced or even the purity of the product produced.

Another often ignored disadvantage of an observational study when compared to carefully designed experiments is that, unlike the latter, observational studies are at the mercy of nature, environmental or other uncontrolled circumstances that impact the ranges of factors of interest. For example, in the biomedical study regarding the influence of blood sodium levels on blood cholesterol, it is possible that there is indeed a strong influence but the particular data set used did not involve enough observed variation in sodium levels because of the nature of the subjects chosen. Of course, in a designed experiment, the analyst chooses and controls ranges of factors.
A third type of statistical study which can be very useful but has clear disadvantages when compared to a designed experiment is a retrospective study. This type of study uses strictly historical data, data taken over a specific period of time. One obvious advantage of retrospective data is that there is reduced cost in collecting the data. However, as one might expect, there are clear disadvantages.

(i) Validity and reliability of historical data are often in doubt.

(ii) If time is an important aspect of the structure of the data, there may be data missing.

(iii) There may be errors in collection of the data that are not known.

(iv) Again, as in the case of observational data, there is no control on the ranges of the measured variables (the factors in a study). Indeed, the ranges found in historical data may not be relevant for current studies.

In Section 1.6, some attention was given to modeling of relationships among variables. We introduced the notion of regression analysis, which is covered in Chapters 11 and 12 and is illustrated as a form of data analysis for designed experiments discussed in Chapters 14 and 15. In Section 1.6, a model relating population mean tensile strength of cloth to percentages of cotton was used for illustration, where 20 specimens of cloth represented the experimental units. In that case, the data came from a simple designed experiment where the individual cotton percentages were selected by the scientist.

Often both observational data and retrospective data are used for the purpose of observing relationships among variables through model-building procedures discussed in Chapters 11 and 12. While the advantages of designed experiments certainly apply when the goal is statistical model building, there are many areas in which designing of experiments is not possible. Thus, observational or historical data must be used.
We refer here to a historical data set that is found in Exercise 12.5 on page 450. The goal is to build a model that will result in an equation or relationship that relates monthly electric power consumed to average ambient temperature x1, the number of days in the month x2, the average product purity x3, and the tons of product produced x4. The data are the past year's historical data.

Exercises

1.13 A manufacturer of electronic components is interested in determining the lifetime of a certain type of battery. A sample, in hours of life, is as follows:
123, 116, 122, 110, 175, 126, 125, 111, 118, 117.
(a) Find the sample mean and median.
(b) What feature in this data set is responsible for the substantial difference between the two?

1.14 A tire manufacturer wants to determine the inner diameter of a certain grade of tire. Ideally, the diameter would be 570 mm. The data are as follows:
572, 572, 573, 568, 569, 575, 565, 570.
(a) Find the sample mean and median.
(b) Find the sample variance, standard deviation, and range.
(c) Using the calculated statistics in parts (a) and (b), can you comment on the quality of the tires?

1.15 Five independent coin tosses result in HHHHH. It turns out that if the coin is fair the probability of this outcome is (1/2)^5 = 0.03125. Does this produce strong evidence that the coin is not fair? Comment and use the concept of P-value discussed in Section 1.1.
1.16 Show that the n pieces of information in Σ_{i=1}^{n} (xi − x̄)² are not independent; that is, show that Σ_{i=1}^{n} (xi − x̄) = 0.

1.17 A study of the effects of smoking on sleep patterns is conducted. The measure observed is the time, in minutes, that it takes to fall asleep. These data are obtained:

Smokers: 69.3 56.0 22.1 47.6 53.2 48.1 52.7 34.4 60.2 43.8 23.2 13.8
Nonsmokers: 28.6 25.1 26.4 34.9 29.8 28.4 38.5 30.2 30.6 31.8 41.6 21.1 36.0 37.9 13.9

(a) Find the sample mean for each group.
(b) Find the sample standard deviation for each group.
(c) Make a dot plot of the data sets A and B on the same line.
(d) Comment on what kind of impact smoking appears to have on the time required to fall asleep.

1.18 The following scores represent the final examination grades for an elementary statistics course:

23 60 79 32 57 74 52 70 82 36
80 77 81 95 41 65 92 85 55 76
52 10 64 75 78 25 80 98 81 67
41 71 83 54 64 72 88 62 74 43
60 78 89 76 84 48 84 90 15 79
34 67 17 82 69 74 63 80 85 61

(a) Construct a stem-and-leaf plot for the examination grades in which the stems are 1, 2, 3, . . . , 9.
(b) Construct a relative frequency histogram, draw an estimate of the graph of the distribution, and discuss the skewness of the distribution.
(c) Compute the sample mean, sample median, and sample standard deviation.

1.19 The following data represent the length of life in years, measured to the nearest tenth, of 30 similar fuel pumps:

2.0 3.0 0.3 3.3 1.3 0.4
0.2 6.0 5.5 6.5 0.2 2.3
1.5 4.0 5.9 1.8 4.7 0.7
4.5 0.3 1.5 0.5 2.5 5.0
1.0 6.0 5.6 6.0 1.2 0.2

(a) Construct a stem-and-leaf plot for the life in years of the fuel pumps, using the digit to the left of the decimal point as the stem for each observation.
(b) Set up a relative frequency distribution.
(c) Compute the sample mean, sample range, and sample standard deviation.
1.20 The following data represent the length of life, in seconds, of 50 fruit flies subject to a new spray in a controlled laboratory experiment:
17 20 10 9 23 13 12 19 18 24 12 14 6 9 13 6 7 10 13 7 16 18 8 13 3 32 9 7 10 11 13 7 18 7 10 4 27 19 16 8 7 10 5 14 15 10 9 6 7 15
(a) Construct a double-stem-and-leaf plot for the life span of the fruit flies using the stems 0⋆, 0·, 1⋆, 1·, 2⋆, 2·, and 3⋆ such that stems coded by the symbols ⋆ and · are associated, respectively, with leaves 0 through 4 and 5 through 9.
(b) Set up a relative frequency distribution.
(c) Construct a relative frequency histogram.
(d) Find the median.

1.21 The lengths of power failures, in minutes, are recorded in the following table.
22 18 135 15 90 78 69 98 102 83 55 28 121 120 13 22 124 112 70 66 74 89 103 24 21 112 21 40 98 87 132 115 21 28 43 37 50 96 118 158 74 78 83 93 95
(a) Find the sample mean and sample median of the power-failure times.
(b) Find the sample standard deviation of the power-failure times.

1.22 The following data are the measures of the diameters of 36 rivet heads in 1/100 of an inch.
6.72 6.77 6.82 6.70 6.78 6.70 6.62 6.75 6.66 6.66 6.64 6.76 6.73 6.80 6.72 6.76 6.76 6.68 6.66 6.62 6.72 6.76 6.70 6.78 6.76 6.67 6.70 6.72 6.74 6.81 6.79 6.78 6.66 6.76 6.76 6.72
(a) Compute the sample mean and sample standard deviation.
(b) Construct a relative frequency histogram of the data.
(c) Comment on whether or not there is any clear indication that the sample came from a population that has a bell-shaped distribution.

1.23 The hydrocarbon emissions at idling speed in parts per million (ppm) for automobiles of 1980 and 1990 model years are given for 20 randomly selected cars.
1980 models: 141 359 247 940 882 494 306 210 105 880 200 223 188 940 241 190 300 435 241 380
1990 models: 140 160 20 20 223 60 20 95 360 70 220 400 217 58 235 380 200 175 85 65
(a) Construct a dot plot as in Figure 1.1.
(b) Compute the sample means for the two years and superimpose the two means on the plots.
(c) Comment on what the dot plot indicates regarding whether or not the population emissions changed from 1980 to 1990. Use the concept of variability in your comments.

1.24 The following are historical data on staff salaries (dollars per pupil) for 30 schools sampled in the eastern part of the United States in the early 1970s.
3.79 2.99 2.77 2.91 3.10 1.84 2.52 3.22 2.45 2.14 2.67 2.52 2.71 2.75 3.57 3.85 3.36 2.05 2.89 2.83 3.13 2.44 2.10 3.71 3.14 3.54 2.37 2.68 3.51 3.37
(a) Compute the sample mean and sample standard deviation.
(b) Construct a relative frequency histogram of the data.
(c) Construct a stem-and-leaf display of the data.

1.25 The following data set is related to that in Exercise 1.24. It gives the percentages of the families that are in the upper income level, for the same individual schools in the same order as in Exercise 1.24.
72.2 31.9 26.5 29.1 27.3 8.6 22.3 26.5 20.4 12.8 25.1 19.2 24.1 58.2 68.1 89.2 55.1 9.4 14.5 13.9 20.7 17.9 8.5 55.4 38.1 54.2 21.5 26.2 59.1 43.3
(a) Calculate the sample mean.
(b) Calculate the sample median.
(c) Construct a relative frequency histogram of the data.
(d) Compute the 10% trimmed mean. Compare with the results in (a) and (b) and comment.

1.26 Suppose it is of interest to use the data sets in Exercises 1.24 and 1.25 to derive a model that would predict staff salaries as a function of percentage of families in a high income level for current school systems. Comment on any disadvantage in carrying out this type of analysis.
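Part (d) of Exercise 1.25 uses the 10% trimmed mean: discard the smallest 10% and the largest 10% of the sorted observations, then average what remains. A minimal Python sketch (the helper function `trimmed_mean` is our own, not from the text):

```python
def trimmed_mean(data, trim=0.10):
    """Drop the smallest and largest `trim` fraction of the sorted data,
    then average the remaining middle observations."""
    k = int(len(data) * trim)              # number trimmed from each end
    middle = sorted(data)[k:len(data) - k]
    return sum(middle) / len(middle)

# Upper-income percentages from Exercise 1.25 (30 schools)
pct = [72.2, 31.9, 26.5, 29.1, 27.3, 8.6, 22.3, 26.5, 20.4, 12.8,
       25.1, 19.2, 24.1, 58.2, 68.1, 89.2, 55.1, 9.4, 14.5, 13.9,
       20.7, 17.9, 8.5, 55.4, 38.1, 54.2, 21.5, 26.2, 59.1, 43.3]

# Trimming removes the extreme schools, so for right-skewed data like
# these the result lands between the ordinary mean and the median.
print(round(trimmed_mean(pct), 2))
```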
1.27 A study is done to determine the influence of the wear, y, of a bearing as a function of the load, x, on the bearing. A designed experiment is used for this study. Three levels of load were used, 700 lb, 1000 lb, and 1300 lb. Four specimens were used at each level, and the sample means were, respectively, 210, 325, and 375.
(a) Plot average wear against load.
(b) From the plot in (a), does it appear as if a relationship exists between wear and load?
(c) Suppose we look at the individual wear values for each of the four specimens at each load level (see the data that follow). Plot the wear results for all specimens against the three load values.
(d) From your plot in (c), does it appear as if a clear relationship exists? If your answer is different from that in (b), explain why.

   x     700   1000   1300
   y1    145    250    150
   y2    105    195    180
   y3    260    375    420
   y4    330    480    750
   ȳ     210    325    375

1.28 Many manufacturing companies in the United States and abroad use molded parts as components of a process. Shrinkage is often a major problem. Thus, a molded die for a part is built larger than nominal size to allow for part shrinkage. In an injection molding study it is known that the shrinkage is influenced by many factors, among which are the injection velocity in ft/sec and mold temperature in °C. The following two data sets show the results of a designed experiment in which injection velocity was held at two levels (low and high) and mold temperature was held constant at a low level. The shrinkage is measured in cm × 10⁴.
Shrinkage values at low injection velocity: 72.68 72.62 72.58 72.48 73.07 72.55 72.42 72.84 72.58 72.92
Shrinkage values at high injection velocity: 71.62 71.68 71.74 71.48 71.55 71.52 71.71 71.56 71.70 71.50
(a) Construct a dot plot of both data sets on the same graph. Indicate on the plot both shrinkage means, that for low injection velocity and high injection velocity.
(b) Based on the graphical results in (a), using the location of the two means and your sense of variability, what do you conclude regarding the effect of injection velocity on shrinkage at low mold temperature?

1.29 Use the data in Exercise 1.24 to construct a box plot.

1.30 Below are the lifetimes, in hours, of fifty 40-watt, 110-volt internally frosted incandescent lamps, taken from forced life tests:
919 1196 785 1126 936 918 1156 920 948 1067 1092 1162 1170 929 950 905 972 1035 1045 855 1195 1195 1340 1122 938 970 1237 956 1102 1157 978 832 1009 1157 1151 1009 765 958 902 1022 1333 811 1217 1085 896 958 1311 1037 702 923
Construct a box plot for these data.

1.31 Consider the situation of Exercise 1.28. But now use the following data set, in which shrinkage is measured once again at low injection velocity and high injection velocity. However, this time the mold temperature is raised to a high level and held constant.
Shrinkage values at low injection velocity: 76.20 76.09 75.98 76.15 76.17 75.94 76.12 76.18 76.25 75.82
Shrinkage values at high injection velocity: 93.25 93.19 92.87 93.29 93.37 92.98 93.47 93.75 93.89 91.62
(a) As in Exercise 1.28, construct a dot plot with both data sets on the same graph and identify both means (i.e., mean shrinkage for low injection velocity and for high injection velocity).
(b) As in Exercise 1.28, comment on the influence of injection velocity on shrinkage for high mold temperature. Take into account the position of the two means and the variability around each mean.
(c) Compare your conclusion in (b) with that in (b) of Exercise 1.28 in which mold temperature was held at a low level. Would you say that there is an interaction between injection velocity and mold temperature? Explain.

1.32 Use the results of Exercises 1.28 and 1.31 to create a plot that illustrates the interaction evident from the data. Use the plot in Figure 1.3 in Example 1.3 as a guide. Could the type of information found in Exercises 1.28 and 1.31 have been found in an observational study in which there was no control on injection velocity and mold temperature by the analyst? Explain why or why not.

1.33 Group Project: Collect the shoe size of everyone in the class.
Use the sample means and variances and the types of plots presented in this chapter to summarize any features that draw a distinction between the distributions of shoe sizes for males and females. Do the same for the height of everyone in the class.
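As a spot-check for Exercise 1.27 above, the per-load sample means quoted in the table can be reproduced directly from the individual specimen values (a minimal Python sketch):

```python
# Wear measurements from Exercise 1.27: four specimens at each load level
wear = {700: [145, 105, 260, 330],
        1000: [250, 195, 375, 480],
        1300: [150, 180, 420, 750]}

# Sample mean at each load level
means = {load: sum(ys) / len(ys) for load, ys in wear.items()}
print(means)  # {700: 210.0, 1000: 325.0, 1300: 375.0}
```

The large spread among specimens at each load, visible in the raw values, is what part (d) asks you to weigh against the apparent trend in the means.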
Chapter 2
Probability

2.1 Sample Space

In the study of statistics, we are concerned basically with the presentation and interpretation of chance outcomes that occur in a planned study or scientific investigation. For example, we may record the number of accidents that occur monthly at the intersection of Driftwood Lane and Royal Oak Drive, hoping to justify the installation of a traffic light; we might classify items coming off an assembly line as "defective" or "nondefective"; or we may be interested in the volume of gas released in a chemical reaction when the concentration of an acid is varied. Hence, the statistician is often dealing with either numerical data, representing counts or measurements, or categorical data, which can be classified according to some criterion.
We shall refer to any recording of information, whether it be numerical or categorical, as an observation. Thus, the numbers 2, 0, 1, and 2, representing the number of accidents that occurred for each month from January through April during the past year at the intersection of Driftwood Lane and Royal Oak Drive, constitute a set of observations. Similarly, the categorical data N, D, N, N, and D, representing the items found to be defective or nondefective when five items are inspected, are recorded as observations.
Statisticians use the word experiment to describe any process that generates a set of data. A simple example of a statistical experiment is the tossing of a coin. In this experiment, there are only two possible outcomes, heads or tails. Another experiment might be the launching of a missile and observing of its velocity at specified times. The opinions of voters concerning a new sales tax can also be considered as observations of an experiment. We are particularly interested in the observations obtained by repeating the experiment several times. In most cases, the outcomes will depend on chance and, therefore, cannot be predicted with certainty.
If a chemist runs an analysis several times under the same conditions, he or she will obtain different measurements, indicating an element of chance in the experimental procedure. Even when a coin is tossed repeatedly, we cannot be certain that a given toss will result in a head. However, we know the entire set of possibilities for each toss.
Given the discussion in Section 1.7, we should deal with the breadth of the term experiment. Three types of statistical studies were reviewed, and several examples were given of each. In each of the three cases, designed experiments, observational studies, and retrospective studies, the end result was a set of data that of course is
subject to uncertainty. Though only one of these has the word experiment in its description, the process of generating the data or the process of observing the data is part of an experiment. The corrosion study discussed in Section 1.2 certainly involves an experiment, with measures of corrosion representing the data. The example given in Section 1.7 in which blood cholesterol and sodium were observed on a group of individuals represented an observational study (as opposed to a designed experiment), and yet the process generated data and the outcome is subject to uncertainty. Thus, it is an experiment. A third example in Section 1.7 represented a retrospective study in which historical data on monthly electric power consumption and average monthly ambient temperature were observed. Even though the data may have been in the files for decades, the process is still referred to as an experiment.

Definition 2.1: The set of all possible outcomes of a statistical experiment is called the sample space and is represented by the symbol S.

Each outcome in a sample space is called an element or a member of the sample space, or simply a sample point. If the sample space has a finite number of elements, we may list the members separated by commas and enclosed in braces. Thus, the sample space S, of possible outcomes when a coin is flipped, may be written S = {H, T}, where H and T correspond to heads and tails, respectively.

Example 2.1: Consider the experiment of tossing a die. If we are interested in the number that shows on the top face, the sample space is S1 = {1, 2, 3, 4, 5, 6}. If we are interested only in whether the number is even or odd, the sample space is simply S2 = {even, odd}.

Example 2.1 illustrates the fact that more than one sample space can be used to describe the outcomes of an experiment. In this case, S1 provides more information than S2.
If we know which element in S1 occurs, we can tell which outcome in S2 occurs; however, a knowledge of what happens in S2 is of little help in determining which element in S1 occurs. In general, it is desirable to use the sample space that gives the most information concerning the outcomes of the experiment. In some experiments, it is helpful to list the elements of the sample space systematically by means of a tree diagram. Example 2.2: An experiment consists of flipping a coin and then flipping it a second time if a head occurs. If a tail occurs on the first flip, then a die is tossed once. To list the elements of the sample space providing the most information, we construct the tree diagram of Figure 2.1. The various paths along the branches of the tree give the distinct sample points. Starting with the top left branch and moving to the right along the first path, we get the sample point HH, indicating the possibility that heads occurs on two successive flips of the coin. Likewise, the sample point T3 indicates the possibility that the coin will show a tail followed by a 3 on the toss of the die. By proceeding along all paths, we see that the sample space is S = {HH, HT, T1, T2, T3, T4, T5, T6}.
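The tree-diagram enumeration in Example 2.2 can be mirrored programmatically; the sketch below (Python, purely illustrative) builds the same eight sample points by branching on the first outcome:

```python
# Example 2.2: flip a coin; on heads, flip again; on tails, toss a die once
sample_space = []
for first in ("H", "T"):
    if first == "H":
        for second in ("H", "T"):       # second coin flip
            sample_space.append(first + second)
    else:
        for face in range(1, 7):        # single die toss
            sample_space.append(first + str(face))

print(sample_space)
# ['HH', 'HT', 'T1', 'T2', 'T3', 'T4', 'T5', 'T6']
```

Each loop level plays the role of one stage of branching in the tree diagram of Figure 2.1.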
[Figure 2.1: Tree diagram for Example 2.2. The first flip gives H or T; H is followed by a second flip (H, T) and T by a die toss (1–6), yielding the sample points HH, HT, T1, T2, T3, T4, T5, T6.]

Many of the concepts in this chapter are best illustrated with examples involving the use of dice and cards. These are particularly important applications to use early in the learning process, to facilitate the flow of these new concepts into scientific and engineering examples such as the following.

Example 2.3: Suppose that three items are selected at random from a manufacturing process. Each item is inspected and classified defective, D, or nondefective, N. To list the elements of the sample space providing the most information, we construct the tree diagram of Figure 2.2. Now, the various paths along the branches of the tree give the distinct sample points. Starting with the first path, we get the sample point DDD, indicating the possibility that all three items inspected are defective. As we proceed along the other paths, we see that the sample space is S = {DDD, DDN, DND, DNN, NDD, NDN, NND, NNN}.

Sample spaces with a large or infinite number of sample points are best described by a statement or rule method. For example, if the possible outcomes of an experiment are the set of cities in the world with a population over 1 million, our sample space is written S = {x | x is a city with a population over 1 million}, which reads "S is the set of all x such that x is a city with a population over 1 million." The vertical bar is read "such that." Similarly, if S is the set of all points (x, y) on the boundary or the interior of a circle of radius 2 with center at the origin, we write the rule S = {(x, y) | x² + y² ≤ 4}.
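The sample space of Example 2.3 is a Cartesian product of three two-way classifications, so it can also be generated in one line (an illustrative Python sketch; `itertools.product` enumerates in the same order as the tree diagram):

```python
from itertools import product

# Example 2.3: each of three inspected items is defective (D) or nondefective (N)
sample_space = ["".join(outcome) for outcome in product("DN", repeat=3)]

print(sample_space)
# ['DDD', 'DDN', 'DND', 'DNN', 'NDD', 'NDN', 'NND', 'NNN']
```

The listing grows as 2ⁿ with the number of items inspected, which is why the rule method becomes preferable for large sample spaces.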
[Figure 2.2: Tree diagram for Example 2.3. Each of the three items branches into D or N, giving the sample points DDD, DDN, DND, DNN, NDD, NDN, NND, NNN.]

Whether we describe the sample space by the rule method or by listing the elements will depend on the specific problem at hand. The rule method has practical advantages, particularly for many experiments where listing becomes a tedious chore.
Consider the situation of Example 2.3 in which items from a manufacturing process are either D, defective, or N, nondefective. There are many important statistical procedures called sampling plans that determine whether or not a "lot" of items is considered satisfactory. One such plan involves sampling until k defectives are observed. Suppose the experiment is to sample items randomly until one defective item is observed. The sample space for this case is S = {D, ND, NND, NNND, . . . }.

2.2 Events

For any given experiment, we may be interested in the occurrence of certain events rather than in the occurrence of a specific element in the sample space. For instance, we may be interested in the event A that the outcome when a die is tossed is divisible by 3. This will occur if the outcome is an element of the subset A = {3, 6} of the sample space S1 in Example 2.1. As a further illustration, we may be interested in the event B that the number of defectives is greater than 1 in Example 2.3. This will occur if the outcome is an element of the subset B = {DDN, DND, NDD, DDD} of the sample space S.
To each event we assign a collection of sample points, which constitute a subset of the sample space. That subset represents all of the elements for which the event is true.
Definition 2.2: An event is a subset of a sample space.

Example 2.4: Given the sample space S = {t | t ≥ 0}, where t is the life in years of a certain electronic component, then the event A that the component fails before the end of the fifth year is the subset A = {t | 0 ≤ t < 5}.

It is conceivable that an event may be a subset that includes the entire sample space S or a subset of S called the null set and denoted by the symbol φ, which contains no elements at all. For instance, if we let A be the event of detecting a microscopic organism by the naked eye in a biological experiment, then A = φ. Also, if B = {x | x is an even factor of 7}, then B must be the null set, since the only possible factors of 7 are the odd numbers 1 and 7.
Consider an experiment where the smoking habits of the employees of a manufacturing firm are recorded. A possible sample space might classify an individual as a nonsmoker, a light smoker, a moderate smoker, or a heavy smoker. Let the subset of smokers be some event. Then all the nonsmokers correspond to a different event, also a subset of S, which is called the complement of the set of smokers.

Definition 2.3: The complement of an event A with respect to S is the subset of all elements of S that are not in A. We denote the complement of A by the symbol A′.

Example 2.5: Let R be the event that a red card is selected from an ordinary deck of 52 playing cards, and let S be the entire deck. Then R′ is the event that the card selected from the deck is not a red card but a black card.

Example 2.6: Consider the sample space S = {book, cell phone, mp3, paper, stationery, laptop}. Let A = {book, stationery, laptop, paper}. Then the complement of A is A′ = {cell phone, mp3}.

We now consider certain operations with events that will result in the formation of new events. These new events will be subsets of the same sample space as the given events. Suppose that A and B are two events associated with an experiment.
In other words, A and B are subsets of the same sample space S. For example, in the tossing of a die we might let A be the event that an even number occurs and B the event that a number greater than 3 shows. Then the subsets A = {2, 4, 6} and B = {4, 5, 6} are subsets of the same sample space S = {1, 2, 3, 4, 5, 6}. Note that both A and B will occur on a given toss if the outcome is an element of the subset {4, 6}, which is just the intersection of A and B. Definition 2.4: The intersection of two events A and B, denoted by the symbol A ∩ B, is the event containing all elements that are common to A and B. Example 2.7: Let E be the event that a person selected at random in a classroom is majoring in engineering, and let F be the event that the person is female. Then E ∩ F is the event of all female engineering students in the classroom.
Example 2.8: Let V = {a, e, i, o, u} and C = {l, r, s, t}; then it follows that V ∩ C = φ. That is, V and C have no elements in common and, therefore, cannot both simultaneously occur.

For certain statistical experiments it is by no means unusual to define two events, A and B, that cannot both occur simultaneously. The events A and B are then said to be mutually exclusive. Stated more formally, we have the following definition:

Definition 2.5: Two events A and B are mutually exclusive, or disjoint, if A ∩ B = φ, that is, if A and B have no elements in common.

Example 2.9: A cable television company offers programs on eight different channels, three of which are affiliated with ABC, two with NBC, and one with CBS. The other two are an educational channel and the ESPN sports channel. Suppose that a person subscribing to this service turns on a television set without first selecting the channel. Let A be the event that the program belongs to the NBC network and B the event that it belongs to the CBS network. Since a television program cannot belong to more than one network, the events A and B have no programs in common. Therefore, the intersection A ∩ B contains no programs, and consequently the events A and B are mutually exclusive.

Often one is interested in the occurrence of at least one of two events associated with an experiment. Thus, in the die-tossing experiment, if A = {2, 4, 6} and B = {4, 5, 6}, we might be interested in either A or B occurring or both A and B occurring. Such an event, called the union of A and B, will occur if the outcome is an element of the subset {2, 4, 5, 6}.

Definition 2.6: The union of the two events A and B, denoted by the symbol A ∪ B, is the event containing all the elements that belong to A or B or both.

Example 2.10: Let A = {a, b, c} and B = {b, c, d, e}; then A ∪ B = {a, b, c, d, e}.
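The intersection and union operations just defined map directly onto Python's built-in set type; an illustrative aside, with element values borrowed from Examples 2.8 and 2.10:

```python
# Example 2.10: union and intersection of two events
A = {"a", "b", "c"}
B = {"b", "c", "d", "e"}
print(sorted(A | B))  # union A ∪ B -> ['a', 'b', 'c', 'd', 'e']
print(sorted(A & B))  # intersection A ∩ B -> ['b', 'c']

# Example 2.8: mutually exclusive events have an empty intersection
V = {"a", "e", "i", "o", "u"}
C = {"l", "r", "s", "t"}
print(V & C == set())  # True, i.e., V ∩ C = φ
```

Representing events as sets this way also makes complements easy: given an explicit sample space S as a set, A′ is simply S - A.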
Example 2.11: Let P be the event that an employee selected at random from an oil drilling company smokes cigarettes. Let Q be the event that the employee selected drinks alcoholic beverages. Then the event P ∪ Q is the set of all employees who either drink or smoke or do both.

Example 2.12: If M = {x | 3 < x < 9} and N = {y | 5 < y < 12}, then M ∪ N = {z | 3 < z < 12}.

The relationship between events and the corresponding sample space can be illustrated graphically by means of Venn diagrams. In a Venn diagram we let the sample space be a rectangle and represent events by circles drawn inside the rectangle. Thus, in Figure 2.3, we see that
A ∩ B = regions 1 and 2,
B ∩ C = regions 1 and 3,
[Figure 2.3: Events represented by various regions. The circles A, B, and C divide the sample space S into numbered regions 1 through 7.]

A ∪ C = regions 1, 2, 3, 4, 5, and 7,
B′ ∩ A = regions 4 and 7,
A ∩ B ∩ C = region 1,
(A ∪ B) ∩ C′ = regions 2, 6, and 7,
and so forth.

[Figure 2.4: Events of the sample space S.]

In Figure 2.4, we see that events A, B, and C are all subsets of the sample space S. It is also clear that event B is a subset of event A; event B ∩ C has no elements and hence B and C are mutually exclusive; event A ∩ C has at least one element; and event A ∪ B = A. Figure 2.4 might, therefore, depict a situation where we select a card at random from an ordinary deck of 52 playing cards and observe whether the following events occur:
A: the card is red,
  • 63. / / 42 Chapter 2 Probability B: the card is the jack, queen, or king of diamonds, C: the card is an ace. Clearly, the event A ∩ C consists of only the two red aces. Several results that follow from the foregoing definitions, which may easily be verified by means of Venn diagrams, are as follows: 1. A ∩ φ = φ. 2. A ∪ φ = A. 3. A ∩ A = φ. 4. A ∪ A = S. 5. S = φ. 6. φ = S. 7. (A ) = A. 8. (A ∩ B) = A ∪ B . 9. (A ∪ B) = A ∩ B . Exercises 2.1 List the elements of each of the following sample spaces: (a) the set of integers between 1 and 50 divisible by 8; (b) the set S = {x | x2 + 4x − 5 = 0}; (c) the set of outcomes when a coin is tossed until a tail or three heads appear; (d) the set S = {x | x is a continent}; (e) the set S = {x | 2x − 4 ≥ 0 and x 1}. 2.2 Use the rule method to describe the sample space S consisting of all points in the first quadrant inside a circle of radius 3 with center at the origin. 2.3 Which of the following events are equal? (a) A = {1, 3}; (b) B = {x | x is a number on a die}; (c) C = {x | x2 − 4x + 3 = 0}; (d) D = {x | x is the number of heads when six coins are tossed}. 2.4 An experiment involves tossing a pair of dice, one green and one red, and recording the numbers that come up. If x equals the outcome on the green die and y the outcome on the red die, describe the sample space S (a) by listing the elements (x, y); (b) by using the rule method. 2.5 An experiment consists of tossing a die and then flipping a coin once if the number on the die is even. If the number on the die is odd, the coin is flipped twice. Using the notation 4H, for example, to denote the out- come that the die comes up 4 and then the coin comes up heads, and 3HT to denote the outcome that the die comes up 3 followed by a head and then a tail on the coin, construct a tree diagram to show the 18 elements of the sample space S. 2.6 Two jurors are selected from 4 alternates to serve at a murder trial. 
Using the notation A1A3, for example, to denote the simple event that alternates 1 and 3 are selected, list the 6 elements of the sample space S.

2.7 Four students are selected at random from a chemistry class and classified as male or female. List the elements of the sample space S1, using the letter M for male and F for female. Define a second sample space S2 where the elements represent the number of females selected.

2.8 For the sample space of Exercise 2.4,
(a) list the elements corresponding to the event A that the sum is greater than 8;
(b) list the elements corresponding to the event B that a 2 occurs on either die;
(c) list the elements corresponding to the event C that a number greater than 4 comes up on the green die;
(d) list the elements corresponding to the event A ∩ C;
(e) list the elements corresponding to the event A ∩ B;
(f) list the elements corresponding to the event B ∩ C;
(g) construct a Venn diagram to illustrate the intersections and unions of the events A, B, and C.

2.9 For the sample space of Exercise 2.5,
(a) list the elements corresponding to the event A that a number less than 3 occurs on the die;
(b) list the elements corresponding to the event B that two tails occur;
(c) list the elements corresponding to the event A′;
(d) list the elements corresponding to the event A′ ∩ B;
(e) list the elements corresponding to the event A ∪ B.

2.10 An engineering firm is hired to determine if certain waterways in Virginia are safe for fishing. Samples are taken from three rivers.
(a) List the elements of a sample space S, using the letters F for safe to fish and N for not safe to fish.
(b) List the elements of S corresponding to event E that at least two of the rivers are safe for fishing.
(c) Define an event that has as its elements the points {FFF, NFF, FFN, NFN}.

2.11 The resumés of two male applicants for a college teaching position in chemistry are placed in the same file as the resumés of two female applicants. Two positions become available, and the first, at the rank of assistant professor, is filled by selecting one of the four applicants at random. The second position, at the rank of instructor, is then filled by selecting at random one of the remaining three applicants. Using the notation M2F1, for example, to denote the simple event that the first position is filled by the second male applicant and the second position is then filled by the first female applicant,
(a) list the elements of a sample space S;
(b) list the elements of S corresponding to event A that the position of assistant professor is filled by a male applicant;
(c) list the elements of S corresponding to event B that exactly one of the two positions is filled by a male applicant;
(d) list the elements of S corresponding to event C that neither position is filled by a male applicant;
(e) list the elements of S corresponding to the event A ∩ B;
(f) list the elements of S corresponding to the event A ∪ C;
(g) construct a Venn diagram to illustrate the intersections and unions of the events A, B, and C.

2.12 Exercise and diet are being studied as possible substitutes for medication to lower blood pressure. Three groups of subjects will be used to study the effect of exercise.
Group 1 is sedentary, while group 2 walks and group 3 swims for 1 hour a day. Half of each of the three exercise groups will be on a salt-free diet. An additional group of subjects will not exercise or restrict their salt, but will take the standard medication. Use Z for sedentary, W for walker, S for swimmer, Y for salt, N for no salt, M for medication, and F for medication free.
(a) Show all of the elements of the sample space S.
(b) Given that A is the set of nonmedicated subjects and B is the set of walkers, list the elements of A ∪ B.
(c) List the elements of A ∩ B.

2.13 Construct a Venn diagram to illustrate the possible intersections and unions for the following events relative to the sample space consisting of all automobiles made in the United States.
F: Four door, S: Sun roof, P: Power steering.

2.14 If S = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} and A = {0, 2, 4, 6, 8}, B = {1, 3, 5, 7, 9}, C = {2, 3, 4, 5}, and D = {1, 6, 7}, list the elements of the sets corresponding to the following events:
(a) A ∪ C;
(b) A ∩ B;
(c) C′;
(d) (C′ ∩ D) ∪ B;
(e) (S ∩ C)′;
(f) A ∩ C ∩ D′.

2.15 Consider the sample space S = {copper, sodium, nitrogen, potassium, uranium, oxygen, zinc} and the events A = {copper, sodium, zinc}, B = {sodium, nitrogen, potassium}, C = {oxygen}. List the elements of the sets corresponding to the following events:
(a) A′;
(b) A ∪ C;
(c) (A ∩ B′) ∪ C′;
(d) B′ ∩ C′;
(e) A ∩ B ∩ C;
(f) (A′ ∪ B′) ∩ (A′ ∩ C).

2.16 If S = {x | 0 < x < 12}, M = {x | 1 < x < 9}, and N = {x | 0 < x < 5}, find
(a) M ∪ N;
(b) M ∩ N;
(c) M′ ∩ N′.

2.17 Let A, B, and C be events relative to the sample space S. Using Venn diagrams, shade the areas representing the following events:
(a) (A ∩ B)′;
(b) (A ∪ B)′;
(c) (A′ ∩ C) ∪ B.
2.18 Which of the following pairs of events are mutually exclusive?
(a) A golfer scoring the lowest 18-hole round in a 72-hole tournament and losing the tournament.
(b) A poker player getting a flush (all cards in the same suit) and 3 of a kind on the same 5-card hand.
(c) A mother giving birth to a baby girl and a set of twin daughters on the same day.
(d) A chess player losing the last game and winning the match.

2.19 Suppose that a family is leaving on a summer vacation in their camper and that M is the event that they will experience mechanical problems, T is the event that they will receive a ticket for committing a traffic violation, and V is the event that they will arrive at a campsite with no vacancies. Referring to the Venn diagram of Figure 2.5, state in words the events represented by the following regions:
(a) region 5;
(b) region 3;
(c) regions 1 and 2 together;
(d) regions 4 and 7 together;
(e) regions 3, 6, 7, and 8 together.

2.20 Referring to Exercise 2.19 and the Venn diagram of Figure 2.5, list the numbers of the regions that represent the following events:
(a) The family will experience no mechanical problems and will not receive a ticket for a traffic violation but will arrive at a campsite with no vacancies.
(b) The family will experience both mechanical problems and trouble in locating a campsite with a vacancy but will not receive a ticket for a traffic violation.
(c) The family will either have mechanical trouble or arrive at a campsite with no vacancies but will not receive a ticket for a traffic violation.
(d) The family will not arrive at a campsite with no vacancies.

[Figure 2.5: Venn diagram for Exercises 2.19 and 2.20, with the events M, T, and V dividing the sample space into regions 1 through 8.]

2.3 Counting Sample Points

One of the problems that the statistician must consider and attempt to evaluate is the element of chance associated with the occurrence of certain events when an experiment is performed.
These problems belong in the field of probability, a subject to be introduced in Section 2.4. In many cases, we shall be able to solve a probability problem by counting the number of points in the sample space without actually listing each element. The fundamental principle of counting, often referred to as the multiplication rule, is stated in Rule 2.1.
Rule 2.1: If an operation can be performed in n1 ways, and if for each of these ways a second operation can be performed in n2 ways, then the two operations can be performed together in n1n2 ways.

Example 2.13: How many sample points are there in the sample space when a pair of dice is thrown once?
Solution: The first die can land face-up in any one of n1 = 6 ways. For each of these 6 ways, the second die can also land face-up in n2 = 6 ways. Therefore, the pair of dice can land in n1n2 = (6)(6) = 36 possible ways.

Example 2.14: A developer of a new subdivision offers prospective home buyers a choice of Tudor, rustic, colonial, and traditional exterior styling in ranch, two-story, and split-level floor plans. In how many different ways can a buyer order one of these homes?

[Figure 2.6: Tree diagram for Example 2.14 — each of the four exterior styles (Tudor, rustic, colonial, traditional) branches into the three floor plans (ranch, two-story, split-level).]

Solution: Since n1 = 4 and n2 = 3, a buyer must choose from n1n2 = (4)(3) = 12 possible homes.

The answers to the two preceding examples can be verified by constructing tree diagrams and counting the various paths along the branches. For instance,
in Example 2.14 there will be n1 = 4 branches corresponding to the different exterior styles, and then there will be n2 = 3 branches extending from each of these 4 branches to represent the different floor plans. This tree diagram yields the n1n2 = 12 choices of homes given by the paths along the branches, as illustrated in Figure 2.6.

Example 2.15: If a 22-member club needs to elect a chair and a treasurer, in how many different ways can these two be elected?
Solution: For the chair position, there are 22 total possibilities. For each of those 22 possibilities, there are 21 possibilities to elect the treasurer. Using the multiplication rule, we obtain n1 × n2 = 22 × 21 = 462 different ways.

The multiplication rule, Rule 2.1, may be extended to cover any number of operations. Suppose, for instance, that a customer wishes to buy a new cell phone and can choose from n1 = 5 brands, n2 = 5 sets of capability, and n3 = 4 colors. These three classifications result in n1n2n3 = (5)(5)(4) = 100 different ways for a customer to order one of these phones. The generalized multiplication rule covering k operations is stated in the following.

Rule 2.2: If an operation can be performed in n1 ways, and if for each of these a second operation can be performed in n2 ways, and for each of the first two a third operation can be performed in n3 ways, and so forth, then the sequence of k operations can be performed in n1n2 · · · nk ways.

Example 2.16: Sam is going to assemble a computer by himself. He has the choice of chips from two brands, a hard drive from four, memory from three, and an accessory bundle from five local stores. How many different ways can Sam order the parts?
Solution: Since n1 = 2, n2 = 4, n3 = 3, and n4 = 5, there are n1 × n2 × n3 × n4 = 2 × 4 × 3 × 5 = 120 different ways to order the parts.

Example 2.17: How many even four-digit numbers can be formed from the digits 0, 1, 2, 5, 6, and 9 if each digit can be used only once?
Solution: Since the number must be even, we have only n1 = 3 choices for the units position. However, for a four-digit number the thousands position cannot be 0. Hence, we consider the units position in two parts, 0 or not 0. If the units position is 0 (i.e., n1 = 1), we have n2 = 5 choices for the thousands position, n3 = 4 for the hundreds position, and n4 = 3 for the tens position. Therefore, in this case we have a total of n1n2n3n4 = (1)(5)(4)(3) = 60 even four-digit numbers. On the other hand, if the units position is not 0 (i.e., n1 = 2), we have n2 = 4 choices for the thousands position, n3 = 4 for the hundreds position, and n4 = 3 for the tens position. In this situation, there are a total of n1n2n3n4 = (2)(4)(4)(3) = 96
  • 68. 2.3 Counting Sample Points 47 even four-digit numbers. Since the above two cases are mutually exclusive, the total number of even four-digit numbers can be calculated as 60 + 96 = 156. Frequently, we are interested in a sample space that contains as elements all possible orders or arrangements of a group of objects. For example, we may want to know how many different arrangements are possible for sitting 6 people around a table, or we may ask how many different orders are possible for drawing 2 lottery tickets from a total of 20. The different arrangements are called permutations. Definition 2.7: A permutation is an arrangement of all or part of a set of objects. Consider the three letters a, b, and c. The possible permutations are abc, acb, bac, bca, cab, and cba. Thus, we see that there are 6 distinct arrangements. Using Rule 2.2, we could arrive at the answer 6 without actually listing the different orders by the following arguments: There are n1 = 3 choices for the first position. No matter which letter is chosen, there are always n2 = 2 choices for the second position. No matter which two letters are chosen for the first two positions, there is only n3 = 1 choice for the last position, giving a total of n1n2n3 = (3)(2)(1) = 6 permutations by Rule 2.2. In general, n distinct objects can be arranged in n(n − 1)(n − 2) · · · (3)(2)(1) ways. There is a notation for such a number. Definition 2.8: For any non-negative integer n, n!, called “n factorial,” is defined as n! = n(n − 1) · · · (2)(1), with special case 0! = 1. Using the argument above, we arrive at the following theorem. Theorem 2.1: The number of permutations of n objects is n!. The number of permutations of the four letters a, b, c, and d will be 4! = 24. Now consider the number of permutations that are possible by taking two letters at a time from four. These would be ab, ac, ad, ba, bc, bd, ca, cb, cd, da, db, and dc. 
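The twelve two-letter arrangements just listed can also be generated mechanically; a short Python check using the standard library's permutation generator:

```python
from itertools import permutations

# Enumerate every arrangement of two letters taken from a, b, c, d.
pairs = [''.join(p) for p in permutations('abcd', 2)]
print(pairs)       # ab, ac, ad, ba, bc, bd, ca, cb, cd, da, db, dc
print(len(pairs))  # n1 * n2 = (4)(3) = 12, by Rule 2.1
```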
Using Rule 2.1 again, we have two positions to fill, with n1 = 4 choices for the first and then n2 = 3 choices for the second, for a total of n1n2 = (4)(3) = 12 permutations. In general, n distinct objects taken r at a time can be arranged in n(n − 1)(n − 2) · · · (n − r + 1) ways. We represent this product by the symbol nPr = n!/(n − r)!.
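The factorial form is straightforward to compute; a minimal sketch (the function name nPr is our own, and Python 3.8+ also ships the equivalent math.perm):

```python
from math import factorial, perm

def nPr(n, r):
    # n!/(n - r)!: permutations of n distinct objects taken r at a time
    return factorial(n) // factorial(n - r)

print(nPr(4, 2))   # 12, the two-letter arrangements of a, b, c, d
print(perm(4, 2))  # the same value from the standard library
```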
As a result, we have the theorem that follows.

Theorem 2.2: The number of permutations of n distinct objects taken r at a time is nPr = n!/(n − r)!.

Example 2.18: In one year, three awards (research, teaching, and service) will be given to a class of 25 graduate students in a statistics department. If each student can receive at most one award, how many possible selections are there?
Solution: Since the awards are distinguishable, it is a permutation problem. The total number of sample points is 25P3 = 25!/(25 − 3)! = 25!/22! = (25)(24)(23) = 13,800.

Example 2.19: A president and a treasurer are to be chosen from a student club consisting of 50 people. How many different choices of officers are possible if
(a) there are no restrictions;
(b) A will serve only if he is president;
(c) B and C will serve together or not at all;
(d) D and E will not serve together?
Solution: (a) The total number of choices of officers, without any restrictions, is 50P2 = 50!/48! = (50)(49) = 2450.
(b) Since A will serve only if he is president, we have two situations here: (i) A is selected as the president, which yields 49 possible outcomes for the treasurer's position, or (ii) officers are selected from the remaining 49 people without A, which has the number of choices 49P2 = (49)(48) = 2352. Therefore, the total number of choices is 49 + 2352 = 2401.
(c) The number of selections when B and C serve together is 2. The number of selections when both B and C are not chosen is 48P2 = 2256. Therefore, the total number of choices in this situation is 2 + 2256 = 2258.
(d) The number of selections when D serves as an officer but not E is (2)(48) = 96, where 2 is the number of positions D can take and 48 is the number of selections of the other officer from the remaining people in the club except E. The number of selections when E serves as an officer but not D is also (2)(48) = 96. The number of selections when both D and E are not chosen is 48P2 = 2256.
Therefore, the total number of choices is (2)(96) + 2256 = 2448. This problem also has another short solution: Since D and E can only serve together in 2 ways, the answer is 2450 − 2 = 2448.
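Counts like those in Example 2.19 are small enough to confirm by brute force. A sketch under our own labeling (members 0–49, with A–E taken to be the first five):

```python
from itertools import permutations

members = range(50)
A, B, C, D, E = 0, 1, 2, 3, 4            # arbitrary labels for the five named members
pairs = list(permutations(members, 2))   # every (president, treasurer) choice

print(len(pairs))                                   # (a) 50 * 49 = 2450
print(sum(1 for p, t in pairs if t != A))           # (b) A never serves as treasurer: 2401
together = sum(1 for p, t in pairs if {p, t} == {B, C})
neither = sum(1 for p, t in pairs if B not in (p, t) and C not in (p, t))
print(together + neither)                           # (c) 2 + 2256 = 2258
print(sum(1 for p, t in pairs if {p, t} != {D, E})) # (d) 2450 - 2 = 2448
```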
Permutations that occur by arranging objects in a circle are called circular permutations. Two circular permutations are not considered different unless corresponding objects in the two arrangements are preceded or followed by a different object as we proceed in a clockwise direction. For example, if 4 people are playing bridge, we do not have a new permutation if they all move one position in a clockwise direction. By considering one person in a fixed position and arranging the other three in 3! ways, we find that there are 6 distinct arrangements for the bridge game.

Theorem 2.3: The number of permutations of n objects arranged in a circle is (n − 1)!.

So far we have considered permutations of distinct objects. That is, all the objects were completely different or distinguishable. Obviously, if the letters b and c are both equal to x, then the 6 permutations of the letters a, b, and c become axx, axx, xax, xax, xxa, and xxa, of which only 3 are distinct. Therefore, with 3 letters, 2 being the same, we have 3!/2! = 3 distinct permutations. With 4 different letters a, b, c, and d, we have 24 distinct permutations. If we let a = b = x and c = d = y, we can list only the following distinct permutations: xxyy, xyxy, yxxy, yyxx, xyyx, and yxyx. Thus, we have 4!/(2! 2!) = 6 distinct permutations.

Theorem 2.4: The number of distinct permutations of n things of which n1 are of one kind, n2 of a second kind, . . . , nk of a kth kind is n!/(n1! n2! · · · nk!).

Example 2.20: In a college football training session, the defensive coordinator needs to have 10 players standing in a row. Among these 10 players, there are 1 freshman, 2 sophomores, 4 juniors, and 3 seniors. How many different ways can they be arranged in a row if only their class level will be distinguished?
Solution: Directly using Theorem 2.4, we find that the total number of arrangements is 10!/(1! 2! 4! 3!) = 12,600.
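Theorem 2.4 can be checked by generating all permutations and letting a set collapse the duplicates; a small Python sketch:

```python
from itertools import permutations
from math import factorial

# Arrangements of x, x, y, y: indistinguishable repeats collapse in a set.
distinct = {''.join(p) for p in permutations('xxyy')}
print(sorted(distinct))                               # the 6 distinct arrangements
print(factorial(4) // (factorial(2) * factorial(2)))  # 4!/(2! 2!) = 6

# Example 2.20: 10 players with class sizes 1, 2, 4, 3.
arrangements = factorial(10)
for k in (1, 2, 4, 3):
    arrangements //= factorial(k)                     # divide out each repeated kind
print(arrangements)                                   # 12,600
```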
Often we are concerned with the number of ways of partitioning a set of n objects into r subsets called cells. A partition has been achieved if the intersection of every possible pair of the r subsets is the empty set φ and if the union of all subsets gives the original set. The order of the elements within a cell is of no importance. Consider the set {a, e, i, o, u}. The possible partitions into two cells in which the first cell contains 4 elements and the second cell 1 element are {(a, e, i, o), (u)}, {(a, i, o, u), (e)}, {(e, i, o, u), (a)}, {(a, e, o, u), (i)}, {(a, e, i, u), (o)}. We see that there are 5 ways to partition a set of 5 elements into two subsets, or cells, containing 4 elements in the first cell and 1 element in the second.
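Those five partitions can be enumerated by choosing the four-element cell, since the one-element cell is then forced; a quick sketch:

```python
from itertools import combinations

vowels = set('aeiou')
# A (4, 1) partition is determined entirely by its 4-element cell.
partitions = [(set(c), vowels - set(c)) for c in combinations(sorted(vowels), 4)]
for cell1, cell2 in partitions:
    print(sorted(cell1), sorted(cell2))
print(len(partitions))   # 5
```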
The number of partitions for this illustration is denoted by the symbol C(5; 4, 1) = 5!/(4! 1!) = 5, where the number before the semicolon represents the total number of elements and the numbers after it represent the number of elements going into each cell. We state this more generally in Theorem 2.5.

Theorem 2.5: The number of ways of partitioning a set of n objects into r cells with n1 elements in the first cell, n2 elements in the second, and so forth, is C(n; n1, n2, . . . , nr) = n!/(n1! n2! · · · nr!), where n1 + n2 + · · · + nr = n.

Example 2.21: In how many ways can 7 graduate students be assigned to 1 triple and 2 double hotel rooms during a conference?
Solution: The total number of possible partitions would be C(7; 3, 2, 2) = 7!/(3! 2! 2!) = 210.

In many problems, we are interested in the number of ways of selecting r objects from n without regard to order. These selections are called combinations. A combination is actually a partition with two cells, the one cell containing the r objects selected and the other cell containing the (n − r) objects that are left. The number of such combinations, denoted by C(n; r, n − r), is usually shortened to C(n, r), since the number of elements in the second cell must be n − r.

Theorem 2.6: The number of combinations of n distinct objects taken r at a time is C(n, r) = n!/(r!(n − r)!).

Example 2.22: A young boy asks his mother to get 5 Game-BoyTM cartridges from his collection of 10 arcade and 5 sports games. How many ways are there that his mother can get 3 arcade and 2 sports games?
Solution: The number of ways of selecting 3 cartridges from 10 is C(10, 3) = 10!/(3! (10 − 3)!) = 120. The number of ways of selecting 2 cartridges from 5 is C(5, 2) = 5!/(2! 3!) = 10.
  • 72. / / Exercises 51 Using the multiplication rule (Rule 2.1) with n1 = 120 and n2 = 10, we have (120)(10) = 1200 ways. Example 2.23: How many different letter arrangements can be made from the letters in the word STATISTICS? Solution: Using the same argument as in the discussion for Theorem 2.6, in this example we can actually apply Theorem 2.5 to obtain 10 3, 3, 2, 1, 1 = 10! 3! 3! 2! 1! 1! = 50, 400. Here we have 10 total letters, with 2 letters (S, T) appearing 3 times each, letter I appearing twice, and letters A and C appearing once each. On the other hand, this result can be directly obtained by using Theorem 2.4. Exercises 2.21 Registrants at a large convention are offered 6 sightseeing tours on each of 3 days. In how many ways can a person arrange to go on a sightseeing tour planned by this convention? 2.22 In a medical study, patients are classified in 8 ways according to whether they have blood type AB+ , AB− , A+ , A− , B+ , B− , O+ , or O− , and also accord- ing to whether their blood pressure is low, normal, or high. Find the number of ways in which a patient can be classified. 2.23 If an experiment consists of throwing a die and then drawing a letter at random from the English alphabet, how many points are there in the sample space? 2.24 Students at a private liberal arts college are clas- sified as being freshmen, sophomores, juniors, or se- niors, and also according to whether they are male or female. Find the total number of possible classifica- tions for the students of that college. 2.25 A certain brand of shoes comes in 5 different styles, with each style available in 4 distinct colors. If the store wishes to display pairs of these shoes showing all of its various styles and colors, how many different pairs will the store have on display? 2.26 A California study concluded that following 7 simple health rules can extend a man’s life by 11 years on the average and a woman’s life by 7 years. 
These 7 rules are as follows: no smoking, get regular exer- cise, use alcohol only in moderation, get 7 to 8 hours of sleep, maintain proper weight, eat breakfast, and do not eat between meals. In how many ways can a person adopt 5 of these rules to follow (a) if the person presently violates all 7 rules? (b) if the person never drinks and always eats break- fast? 2.27 A developer of a new subdivision offers a prospective home buyer a choice of 4 designs, 3 differ- ent heating systems, a garage or carport, and a patio or screened porch. How many different plans are available to this buyer? 2.28 A drug for the relief of asthma can be purchased from 5 different manufacturers in liquid, tablet, or capsule form, all of which come in regular and extra strength. How many different ways can a doctor pre- scribe the drug for a patient suffering from asthma? 2.29 In a fuel economy study, each of 3 race cars is tested using 5 different brands of gasoline at 7 test sites located in different regions of the country. If 2 drivers are used in the study, and test runs are made once un- der each distinct set of conditions, how many test runs are needed? 2.30 In how many different ways can a true-false test consisting of 9 questions be answered? 2.31 A witness to a hit-and-run accident told the po- lice that the license number contained the letters RLH followed by 3 digits, the first of which was a 5. If the witness cannot recall the last 2 digits, but is cer- tain that all 3 digits are different, find the maximum number of automobile registrations that the police may have to check.
  • 73. 52 Chapter 2 Probability 2.32 (a) In how many ways can 6 people be lined up to get on a bus? (b) If 3 specific persons, among 6, insist on following each other, how many ways are possible? (c) If 2 specific persons, among 6, refuse to follow each other, how many ways are possible? 2.33 If a multiple-choice test consists of 5 questions, each with 4 possible answers of which only 1 is correct, (a) in how many different ways can a student check off one answer to each question? (b) in how many ways can a student check off one answer to each question and get all the answers wrong? 2.34 (a) How many distinct permutations can be made from the letters of the word COLUMNS? (b) How many of these permutations start with the let- ter M? 2.35 A contractor wishes to build 9 houses, each dif- ferent in design. In how many ways can he place these houses on a street if 6 lots are on one side of the street and 3 lots are on the opposite side? 2.36 (a) How many three-digit numbers can be formed from the digits 0, 1, 2, 3, 4, 5, and 6 if each digit can be used only once? (b) How many of these are odd numbers? (c) How many are greater than 330? 2.37 In how many ways can 4 boys and 5 girls sit in a row if the boys and girls must alternate? 2.38 Four married couples have bought 8 seats in the same row for a concert. In how many different ways can they be seated (a) with no restrictions? (b) if each couple is to sit together? (c) if all the men sit together to the right of all the women? 2.39 In a regional spelling bee, the 8 finalists consist of 3 boys and 5 girls. Find the number of sample points in the sample space S for the number of possible orders at the conclusion of the contest for (a) all 8 finalists; (b) the first 3 positions. 2.40 In how many ways can 5 starting positions on a basketball team be filled with 8 men who can play any of the positions? 
2.41 Find the number of ways that 6 teachers can be assigned to 4 sections of an introductory psychol- ogy course if no teacher is assigned to more than one section. 2.42 Three lottery tickets for first, second, and third prizes are drawn from a group of 40 tickets. Find the number of sample points in S for awarding the 3 prizes if each contestant holds only 1 ticket. 2.43 In how many ways can 5 different trees be planted in a circle? 2.44 In how many ways can a caravan of 8 covered wagons from Arizona be arranged in a circle? 2.45 How many distinct permutations can be made from the letters of the word INFINITY ? 2.46 In how many ways can 3 oaks, 4 pines, and 2 maples be arranged along a property line if one does not distinguish among trees of the same kind? 2.47 How many ways are there to select 3 candidates from 8 equally qualified recent graduates for openings in an accounting firm? 2.48 How many ways are there that no two students will have the same birth date in a class of size 60? 2.4 Probability of an Event Perhaps it was humankind’s unquenchable thirst for gambling that led to the early development of probability theory. In an effort to increase their winnings, gam- blers called upon mathematicians to provide optimum strategies for various games of chance. Some of the mathematicians providing these strategies were Pascal, Leibniz, Fermat, and James Bernoulli. As a result of this development of prob- ability theory, statistical inference, with all its predictions and generalizations, has branched out far beyond games of chance to encompass many other fields as- sociated with chance occurrences, such as politics, business, weather forecasting,
  • 74. 2.4 Probability of an Event 53 and scientific research. For these predictions and generalizations to be reasonably accurate, an understanding of basic probability theory is essential. What do we mean when we make the statement “John will probably win the tennis match,” or “I have a fifty-fifty chance of getting an even number when a die is tossed,” or “The university is not likely to win the football game tonight,” or “Most of our graduating class will likely be married within 3 years”? In each case, we are expressing an outcome of which we are not certain, but owing to past information or from an understanding of the structure of the experiment, we have some degree of confidence in the validity of the statement. Throughout the remainder of this chapter, we consider only those experiments for which the sample space contains a finite number of elements. The likelihood of the occurrence of an event resulting from such a statistical experiment is evaluated by means of a set of real numbers, called weights or probabilities, ranging from 0 to 1. To every point in the sample space we assign a probability such that the sum of all probabilities is 1. If we have reason to believe that a certain sample point is quite likely to occur when the experiment is conducted, the probability assigned should be close to 1. On the other hand, a probability closer to 0 is assigned to a sample point that is not likely to occur. In many experiments, such as tossing a coin or a die, all the sample points have the same chance of occurring and are assigned equal probabilities. For points outside the sample space, that is, for simple events that cannot possibly occur, we assign a probability of 0. To find the probability of an event A, we sum all the probabilities assigned to the sample points in A. This sum is called the probability of A and is denoted by P(A). Definition 2.9: The probability of an event A is the sum of the weights of all sample points in A. 
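Definition 2.9 translates directly into code once each sample point carries a weight. A minimal sketch (the loaded die below is our own invented example, with face i given weight proportional to i so that the weights sum to 1):

```python
from fractions import Fraction

def prob(event, weights):
    # Definition 2.9: P(A) is the sum of the weights of the sample points in A
    return sum(weights[point] for point in event)

# Hypothetical loaded die: face i has weight i/21, and 1 + 2 + ... + 6 = 21.
weights = {i: Fraction(i, 21) for i in range(1, 7)}
assert sum(weights.values()) == 1         # the weights must total 1

print(prob({2, 4, 6}, weights))           # P(even face) = 12/21 = 4/7
```

Using exact fractions keeps the arithmetic free of rounding, which matters when checking that probabilities sum to exactly 1.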
Therefore, 0 ≤ P(A) ≤ 1, P(φ) = 0, and P(S) = 1. Furthermore, if A1, A2, A3, . . . is a sequence of mutually exclusive events, then P(A1 ∪ A2 ∪ A3 ∪ · · · ) = P(A1) + P(A2) + P(A3) + · · · . Example 2.24: A coin is tossed twice. What is the probability that at least 1 head occurs? Solution: The sample space for this experiment is S = {HH, HT, TH, TT}. If the coin is balanced, each of these outcomes is equally likely to occur. Therefore, we assign a probability of ω to each sample point. Then 4ω = 1, or ω = 1/4. If A represents the event of at least 1 head occurring, then A = {HH, HT, TH} and P(A) = 1 4 + 1 4 + 1 4 = 3 4 . Example 2.25: A die is loaded in such a way that an even number is twice as likely to occur as an odd number. If E is the event that a number less than 4 occurs on a single toss of the die, find P(E).
  • 75. 54 Chapter 2 Probability Solution: The sample space is S = {1, 2, 3, 4, 5, 6}. We assign a probability of w to each odd number and a probability of 2w to each even number. Since the sum of the probabilities must be 1, we have 9w = 1 or w = 1/9. Hence, probabilities of 1/9 and 2/9 are assigned to each odd and even number, respectively. Therefore, E = {1, 2, 3} and P(E) = 1 9 + 2 9 + 1 9 = 4 9 . Example 2.26: In Example 2.25, let A be the event that an even number turns up and let B be the event that a number divisible by 3 occurs. Find P(A ∪ B) and P(A ∩ B). Solution: For the events A = {2, 4, 6} and B = {3, 6}, we have A ∪ B = {2, 3, 4, 6} and A ∩ B = {6}. By assigning a probability of 1/9 to each odd number and 2/9 to each even number, we have P(A ∪ B) = 2 9 + 1 9 + 2 9 + 2 9 = 7 9 and P(A ∩ B) = 2 9 . If the sample space for an experiment contains N elements, all of which are equally likely to occur, we assign a probability equal to 1/N to each of the N points. The probability of any event A containing n of these N sample points is then the ratio of the number of elements in A to the number of elements in S. Rule 2.3: If an experiment can result in any one of N different equally likely outcomes, and if exactly n of these outcomes correspond to event A, then the probability of event A is P(A) = n N . Example 2.27: A statistics class for engineers consists of 25 industrial, 10 mechanical, 10 electrical, and 8 civil engineering students. If a person is randomly selected by the instruc- tor to answer a question, find the probability that the student chosen is (a) an industrial engineering major and (b) a civil engineering or an electrical engineering major. Solution: Denote by I, M, E, and C the students majoring in industrial, mechanical, electri- cal, and civil engineering, respectively. The total number of students in the class is 53, all of whom are equally likely to be selected. 
(a) Since 25 of the 53 students are majoring in industrial engineering, the prob- ability of event I, selecting an industrial engineering major at random, is P(I) = 25 53 . (b) Since 18 of the 53 students are civil or electrical engineering majors, it follows that P(C ∪ E) = 18 53 .
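Rule 2.3 amounts to counting favourable outcomes among equally likely ones; a short sketch of Example 2.27 (the list layout and one-letter labels are ours):

```python
from fractions import Fraction

# The class from Example 2.27: 25 industrial, 10 mechanical, 10 electrical, 8 civil.
majors = ['I'] * 25 + ['M'] * 10 + ['E'] * 10 + ['C'] * 8
N = len(majors)                               # 53 equally likely selections

P_I = Fraction(majors.count('I'), N)          # industrial engineering major
P_C_or_E = Fraction(majors.count('C') + majors.count('E'), N)
print(P_I)       # 25/53
print(P_C_or_E)  # 18/53
```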
  • 76. 2.4 Probability of an Event 55 Example 2.28: In a poker hand consisting of 5 cards, find the probability of holding 2 aces and 3 jacks. Solution: The number of ways of being dealt 2 aces from 4 cards is 4 2 = 4! 2! 2! = 6, and the number of ways of being dealt 3 jacks from 4 cards is 4 3 = 4! 3! 1! = 4. By the multiplication rule (Rule 2.1), there are n = (6)(4) = 24 hands with 2 aces and 3 jacks. The total number of 5-card poker hands, all of which are equally likely, is N = 52 5 = 52! 5! 47! = 2,598,960. Therefore, the probability of getting 2 aces and 3 jacks in a 5-card poker hand is P(C) = 24 2, 598, 960 = 0.9 × 10−5 . If the outcomes of an experiment are not equally likely to occur, the probabil- ities must be assigned on the basis of prior knowledge or experimental evidence. For example, if a coin is not balanced, we could estimate the probabilities of heads and tails by tossing the coin a large number of times and recording the outcomes. According to the relative frequency definition of probability, the true probabil- ities would be the fractions of heads and tails that occur in the long run. Another intuitive way of understanding probability is the indifference approach. For in- stance, if you have a die that you believe is balanced, then using this indifference approach, you determine that the probability that each of the six sides will show up after a throw is 1/6. To find a numerical value that represents adequately the probability of winning at tennis, we must depend on our past performance at the game as well as that of the opponent and, to some extent, our belief in our ability to win. Similarly, to find the probability that a horse will win a race, we must arrive at a probability based on the previous records of all the horses entered in the race as well as the records of the jockeys riding the horses. Intuition would undoubtedly also play a part in determining the size of the bet that we might be willing to wager. 
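The counts in Example 2.28 come straight from binomial coefficients; with Python 3.8+'s math.comb:

```python
from math import comb

favourable = comb(4, 2) * comb(4, 3)   # choose 2 of the 4 aces and 3 of the 4 jacks
total = comb(52, 5)                    # all equally likely 5-card hands
print(favourable, total)               # 24 and 2,598,960
print(favourable / total)              # roughly 0.9 x 10^-5
```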
The use of intuition, personal beliefs, and other indirect information in arriving at probabilities is referred to as the subjective definition of probability. In most of the applications of probability in this book, the relative frequency interpretation of probability is the operative one. Its foundation is the statistical experiment rather than subjectivity, and it is best viewed as the limiting relative frequency. As a result, many applications of probability in science and engineer- ing must be based on experiments that can be repeated. Less objective notions of probability are encountered when we assign probabilities based on prior informa- tion and opinions, as in “There is a good chance that the Giants will lose the Super
Bowl.” When opinions and prior information differ from individual to individual, subjective probability becomes the relevant resource. In Bayesian statistics (see Chapter 18), a more subjective interpretation of probability will be used, based on an elicitation of prior probability information.

2.5 Additive Rules

Often it is easiest to calculate the probability of some event from known probabilities of other events. This may well be true if the event in question can be represented as the union of two other events or as the complement of some event. Several important laws that frequently simplify the computation of probabilities follow. The first, called the additive rule, applies to unions of events.

Theorem 2.7: If A and B are two events, then P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

[Figure 2.7: Venn diagram illustrating the additive rule of probability.]

Proof: Consider the Venn diagram in Figure 2.7. P(A ∪ B) is the sum of the probabilities of the sample points in A ∪ B. Now P(A) + P(B) is the sum of all the probabilities in A plus the sum of all the probabilities in B. Therefore, we have added the probabilities in (A ∩ B) twice. Since these probabilities add up to P(A ∩ B), we must subtract this probability once to obtain the sum of the probabilities in A ∪ B.

Corollary 2.1: If A and B are mutually exclusive, then P(A ∪ B) = P(A) + P(B).

Corollary 2.1 is an immediate result of Theorem 2.7, since if A and B are mutually exclusive, A ∩ B = φ and then P(A ∩ B) = P(φ) = 0. In general, we can write Corollary 2.2.
  • 78. 2.5 Additive Rules 57 Corollary 2.2: If A1, A2, . . . , An are mutually exclusive, then P(A1 ∪ A2 ∪ · · · ∪ An) = P(A1) + P(A2) + · · · + P(An). A collection of events {A1, A2, . . . , An} of a sample space S is called a partition of S if A1, A2, . . . , An are mutually exclusive and A1 ∪ A2 ∪ · · · ∪ An = S. Thus, we have Corollary 2.3: If A1, A2, . . . , An is a partition of sample space S, then P(A1 ∪ A2 ∪ · · · ∪ An) = P(A1) + P(A2) + · · · + P(An) = P(S) = 1. As one might expect, Theorem 2.7 extends in an analogous fashion. Theorem 2.8: For three events A, B, and C, P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C). Example 2.29: John is going to graduate from an industrial engineering department in a university by the end of the semester. After being interviewed at two companies he likes, he assesses that his probability of getting an offer from company A is 0.8, and his probability of getting an offer from company B is 0.6. If he believes that the probability that he will get offers from both companies is 0.5, what is the probability that he will get at least one offer from these two companies? Solution: Using the additive rule, we have P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.8 + 0.6 − 0.5 = 0.9. Example 2.30: What is the probability of getting a total of 7 or 11 when a pair of fair dice is tossed? Solution: Let A be the event that 7 occurs and B the event that 11 comes up. Now, a total of 7 occurs for 6 of the 36 sample points, and a total of 11 occurs for only 2 of the sample points. Since all sample points are equally likely, we have P(A) = 1/6 and P(B) = 1/18. The events A and B are mutually exclusive, since a total of 7 and 11 cannot both occur on the same toss. Therefore, P(A ∪ B) = P(A) + P(B) = 1 6 + 1 18 = 2 9 . This result could also have been obtained by counting the total number of points for the event A ∪ B, namely 8, and writing P(A ∪ B) = n N = 8 36 = 2 9 .
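Example 2.30 can be confirmed by listing all 36 rolls of the pair of dice; a brief sketch:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes
A = {r for r in rolls if sum(r) == 7}          # total of 7
B = {r for r in rolls if sum(r) == 11}         # total of 11

def P(event):
    return Fraction(len(event), len(rolls))

assert not (A & B)        # mutually exclusive, so the additive rule is a plain sum
print(P(A) + P(B))        # 1/6 + 1/18 = 2/9
print(P(A | B))           # direct count: 8/36 = 2/9
```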
  • 79. 58 Chapter 2 Probability Theorem 2.7 and its three corollaries should help the reader gain more insight into probability and its interpretation. Corollaries 2.1 and 2.2 suggest the very intuitive result dealing with the probability of occurrence of at least one of a number of events, no two of which can occur simultaneously. The probability that at least one occurs is the sum of the probabilities of occurrence of the individual events. The third corollary simply states that the highest value of a probability (unity) is assigned to the entire sample space S. Example 2.31: If the probabilities are, respectively, 0.09, 0.15, 0.21, and 0.23 that a person pur- chasing a new automobile will choose the color green, white, red, or blue, what is the probability that a given buyer will purchase a new automobile that comes in one of those colors? Solution: Let G, W, R, and B be the events that a buyer selects, respectively, a green, white, red, or blue automobile. Since these four events are mutually exclusive, the probability is P(G ∪ W ∪ R ∪ B) = P(G) + P(W) + P(R) + P(B) = 0.09 + 0.15 + 0.21 + 0.23 = 0.68. Often it is more difficult to calculate the probability that an event occurs than it is to calculate the probability that the event does not occur. Should this be the case for some event A, we simply find P(A ) first and then, using Theorem 2.7, find P(A) by subtraction. Theorem 2.9: If A and A are complementary events, then P(A) + P(A ) = 1. Proof: Since A ∪ A = S and the sets A and A are disjoint, 1 = P(S) = P(A ∪ A ) = P(A) + P(A ). Example 2.32: If the probabilities that an automobile mechanic will service 3, 4, 5, 6, 7, or 8 or more cars on any given workday are, respectively, 0.12, 0.19, 0.28, 0.24, 0.10, and 0.07, what is the probability that he will service at least 5 cars on his next day at work? Solution: Let E be the event that at least 5 cars are serviced. Now, P(E) = 1 − P(E ), where E is the event that fewer than 5 cars are serviced. 
Since P(E′) = 0.12 + 0.19 = 0.31, it follows from Theorem 2.9 that
P(E) = 1 − 0.31 = 0.69.

Example 2.33: Suppose the manufacturer's specifications for the length of a certain type of computer cable are 2000 ± 10 millimeters. In this industry, it is known that small cable is just as likely to be defective (not meeting specifications) as large cable. That is,
the probability of randomly producing a cable with length exceeding 2010 millimeters is equal to the probability of producing a cable with length smaller than 1990 millimeters. The probability that the production procedure meets specifications is known to be 0.99.
(a) What is the probability that a cable selected randomly is too large?
(b) What is the probability that a randomly selected cable is larger than 1990 millimeters?

Solution: Let M be the event that a cable meets specifications. Let S and L be the events that the cable is too small and too large, respectively. Then
(a) P(M) = 0.99 and P(S) = P(L) = (1 − 0.99)/2 = 0.005.
(b) Denoting by X the length of a randomly selected cable, we have
P(1990 ≤ X ≤ 2010) = P(M) = 0.99.
Since P(X ≥ 2010) = P(L) = 0.005,
P(X ≥ 1990) = P(M) + P(L) = 0.995.
This also can be solved by using Theorem 2.9:
P(X ≥ 1990) + P(X < 1990) = 1.
Thus, P(X ≥ 1990) = 1 − P(S) = 1 − 0.005 = 0.995.

Exercises

2.49 Find the errors in each of the following statements:
(a) The probabilities that an automobile salesperson will sell 0, 1, 2, or 3 cars on any given day in February are, respectively, 0.19, 0.38, 0.29, and 0.15.
(b) The probability that it will rain tomorrow is 0.40, and the probability that it will not rain tomorrow is 0.52.
(c) The probabilities that a printer will make 0, 1, 2, 3, or 4 or more mistakes in setting a document are, respectively, 0.19, 0.34, −0.25, 0.43, and 0.29.
(d) On a single draw from a deck of playing cards, the probability of selecting a heart is 1/4, the probability of selecting a black card is 1/2, and the probability of selecting both a heart and a black card is 1/8.

2.50 Assuming that all elements of S in Exercise 2.8 on page 42 are equally likely to occur, find
(a) the probability of event A;
(b) the probability of event C;
(c) the probability of event A ∩ C.

2.51 A box contains 500 envelopes, of which 75 contain $100 in cash, 150 contain $25, and 275 contain $10.
An envelope may be purchased for $25. What is the sample space for the different amounts of money? Assign probabilities to the sample points and then find the probability that the first envelope purchased con- tains less than $100. 2.52 Suppose that in a senior college class of 500 stu- dents it is found that 210 smoke, 258 drink alcoholic beverages, 216 eat between meals, 122 smoke and drink alcoholic beverages, 83 eat between meals and drink alcoholic beverages, 97 smoke and eat between meals, and 52 engage in all three of these bad health practices. If a member of this senior class is selected at random, find the probability that the student (a) smokes but does not drink alcoholic beverages; (b) eats between meals and drinks alcoholic beverages but does not smoke; (c) neither smokes nor eats between meals. 2.53 The probability that an American industry will locate in Shanghai, China, is 0.7, the probability that
it will locate in Beijing, China, is 0.4, and the probability that it will locate in either Shanghai or Beijing or both is 0.8. What is the probability that the industry will locate
(a) in both cities?
(b) in neither city?

2.54 From past experience, a stockbroker believes that under present economic conditions a customer will invest in tax-free bonds with a probability of 0.6, will invest in mutual funds with a probability of 0.3, and will invest in both tax-free bonds and mutual funds with a probability of 0.15. At this time, find the probability that a customer will invest
(a) in either tax-free bonds or mutual funds;
(b) in neither tax-free bonds nor mutual funds.

2.55 If each coded item in a catalog begins with 3 distinct letters followed by 4 distinct nonzero digits, find the probability of randomly selecting one of these coded items with the first letter a vowel and the last digit even.

2.56 An automobile manufacturer is concerned about a possible recall of its best-selling four-door sedan. If there were a recall, there is a probability of 0.25 of a defect in the brake system, 0.18 of a defect in the transmission, 0.17 of a defect in the fuel system, and 0.40 of a defect in some other area.
(a) What is the probability that the defect is the brakes or the fueling system if the probability of defects in both systems simultaneously is 0.15?
(b) What is the probability that there are no defects in either the brakes or the fueling system?

2.57 If a letter is chosen at random from the English alphabet, find the probability that the letter
(a) is a vowel exclusive of y;
(b) is listed somewhere ahead of the letter j;
(c) is listed somewhere after the letter g.

2.58 A pair of fair dice is tossed. Find the probability of getting
(a) a total of 8;
(b) at most a total of 5.

2.59 In a poker hand consisting of 5 cards, find the probability of holding
(a) 3 aces;
(b) 4 hearts and 1 club.
2.60 If 3 books are picked at random from a shelf con- taining 5 novels, 3 books of poems, and a dictionary, what is the probability that (a) the dictionary is selected? (b) 2 novels and 1 book of poems are selected? 2.61 In a high school graduating class of 100 stu- dents, 54 studied mathematics, 69 studied history, and 35 studied both mathematics and history. If one of these students is selected at random, find the proba- bility that (a) the student took mathematics or history; (b) the student did not take either of these subjects; (c) the student took history but not mathematics. 2.62 Dom’s Pizza Company uses taste testing and statistical analysis of the data prior to marketing any new product. Consider a study involving three types of crusts (thin, thin with garlic and oregano, and thin with bits of cheese). Dom’s is also studying three sauces (standard, a new sauce with more garlic, and a new sauce with fresh basil). (a) How many combinations of crust and sauce are in- volved? (b) What is the probability that a judge will get a plain thin crust with a standard sauce for his first taste test? 2.63 According to Consumer Digest (July/August 1996), the probable location of personal computers (PC) in the home is as follows: Adult bedroom: 0.03 Child bedroom: 0.15 Other bedroom: 0.14 Office or den: 0.40 Other rooms: 0.28 (a) What is the probability that a PC is in a bedroom? (b) What is the probability that it is not in a bedroom? (c) Suppose a household is selected at random from households with a PC; in what room would you expect to find a PC? 2.64 Interest centers around the life of an electronic component. Suppose it is known that the probabil- ity that the component survives for more than 6000 hours is 0.42. Suppose also that the probability that the component survives no longer than 4000 hours is 0.04. (a) What is the probability that the life of the compo- nent is less than or equal to 6000 hours? 
(b) What is the probability that the life is greater than 4000 hours?
2.65 Consider the situation of Exercise 2.64. Let A be the event that the component fails a particular test and B be the event that the component displays strain but does not actually fail. Event A occurs with probability 0.20, and event B occurs with probability 0.35.
(a) What is the probability that the component does not fail the test?
(b) What is the probability that the component works perfectly well (i.e., neither displays strain nor fails the test)?
(c) What is the probability that the component either fails or shows strain in the test?

2.66 Factory workers are constantly encouraged to practice zero tolerance when it comes to accidents in factories. Accidents can occur because the working environment or conditions themselves are unsafe. On the other hand, accidents can occur due to carelessness or so-called human error. In addition, the worker's shift, 7:00 A.M.–3:00 P.M. (day shift), 3:00 P.M.–11:00 P.M. (evening shift), or 11:00 P.M.–7:00 A.M. (graveyard shift), may be a factor. During the last year, 300 accidents have occurred. The percentages of the accidents for the condition combinations are as follows:

Shift       Unsafe Conditions   Human Error
Day                5%               32%
Evening            6%               25%
Graveyard          2%               30%

If an accident report is selected randomly from the 300 reports,
(a) what is the probability that the accident occurred on the graveyard shift?
(b) what is the probability that the accident occurred due to human error?
(c) what is the probability that the accident occurred due to unsafe conditions?
(d) what is the probability that the accident occurred on either the evening or the graveyard shift?

2.67 Consider the situation of Example 2.32 on page 58.
(a) What is the probability that no more than 4 cars will be serviced by the mechanic?
(b) What is the probability that he will service fewer than 8 cars?
(c) What is the probability that he will service either 3 or 4 cars?
2.68 Interest centers around the nature of an oven purchased at a particular department store. It can be either a gas or an electric oven. Consider the decisions made by six distinct customers.
(a) Suppose that the probability is 0.40 that at most two of these individuals purchase an electric oven. What is the probability that at least three purchase the electric oven?
(b) Suppose it is known that the probability that all six purchase the electric oven is 0.007 while 0.104 is the probability that all six purchase the gas oven. What is the probability that at least one of each type is purchased?

2.69 It is common in many industrial areas to use a filling machine to fill boxes full of product. This occurs in the food industry as well as other areas in which the product is used in the home, for example, detergent. These machines are not perfect, and indeed they may fill to specification (event A), underfill (event B), or overfill (event C). Generally, the practice of underfilling is that which one hopes to avoid. Let P(B) = 0.001 while P(A) = 0.990.
(a) Give P(C).
(b) What is the probability that the machine does not underfill?
(c) What is the probability that the machine either overfills or underfills?

2.70 Consider the situation of Exercise 2.69. Suppose 50,000 boxes of detergent are produced per week and suppose also that those underfilled are "sent back," with customers requesting reimbursement of the purchase price. Suppose also that the cost of production is known to be $4.00 per box while the purchase price is $4.50 per box.
(a) What is the weekly profit under the condition of no defective boxes?
(b) What is the loss in profit expected due to underfilling?

2.71 As the situation of Exercise 2.69 might suggest, statistical procedures are often used for control of quality (i.e., industrial quality control). At times, the weight of a product is an important variable to control.
Specifications are given for the weight of a certain packaged product, and a package is rejected if it is either too light or too heavy. Historical data suggest that 0.95 is the probability that the product meets weight specifications whereas 0.002 is the probability that the product is too light. For each single packaged product, the manufacturer invests $20.00 in production and the purchase price for the consumer is $25.00.
(a) What is the probability that a package chosen randomly from the production line is too heavy?
(b) For each 10,000 packages sold, what profit is received by the manufacturer if all packages meet weight specification?
(c) Assuming that all defective packages are rejected
and rendered worthless, how much is the profit reduced on 10,000 packages due to failure to meet weight specification?

2.72 Prove that P(A′ ∩ B′) = 1 + P(A ∩ B) − P(A) − P(B).

2.6 Conditional Probability, Independence, and the Product Rule

One very important concept in probability theory is conditional probability. In some applications, the practitioner is interested in the probability structure under certain restrictions. For instance, in epidemiology, rather than studying the chance that a person from the general population has diabetes, it might be of more interest to know this probability for a distinct group such as Asian women in the age range of 35 to 50 or Hispanic men in the age range of 40 to 60. This type of probability is called a conditional probability.

Conditional Probability

The probability of an event B occurring when it is known that some event A has occurred is called a conditional probability and is denoted by P(B|A). The symbol P(B|A) is usually read "the probability that B occurs given that A occurs" or simply "the probability of B, given A."

Consider the event B of getting a perfect square when a die is tossed. The die is constructed so that the even numbers are twice as likely to occur as the odd numbers. Based on the sample space S = {1, 2, 3, 4, 5, 6}, with probabilities of 1/9 and 2/9 assigned, respectively, to the odd and even numbers, the probability of B occurring is 1/3. Now suppose that it is known that the toss of the die resulted in a number greater than 3. We are now dealing with a reduced sample space A = {4, 5, 6}, which is a subset of S. To find the probability that B occurs, relative to the space A, we must first assign new probabilities to the elements of A proportional to their original probabilities such that their sum is 1. Assigning a probability of w to the odd number in A and a probability of 2w to the two even numbers, we have 5w = 1, or w = 1/5.
Relative to the space A, we find that B contains the single element 4. Denoting this event by the symbol B|A, we write B|A = {4}, and hence
P(B|A) = 2/5.
This example illustrates that events may have different probabilities when considered relative to different sample spaces. We can also write
P(B|A) = 2/5 = (2/9)/(5/9) = P(A ∩ B)/P(A),
where P(A ∩ B) and P(A) are found from the original sample space S. In other words, a conditional probability relative to a subspace A of S may be calculated directly from the probabilities assigned to the elements of the original sample space S.
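The reduced-sample-space calculation above can be replayed numerically. This Python sketch (an illustration, not part of the text) assigns the weights 1/9 and 2/9 from the die-tossing example and recovers P(B|A) = 2/5 directly as P(A ∩ B)/P(A) over the original sample space S:

```python
from fractions import Fraction

# Weighted die of the text: odd faces have probability 1/9, even faces 2/9.
weights = {face: Fraction(2 if face % 2 == 0 else 1, 9) for face in range(1, 7)}

B = {1, 4}     # perfect squares on a die
A = {4, 5, 6}  # "result greater than 3"

def prob(event):
    """Probability of an event (a set of faces) under the weighted die."""
    return sum(weights[face] for face in event)

assert prob(B) == Fraction(1, 3)            # unconditional probability of B

# Conditional probability computed from the original sample space S.
p_B_given_A = prob(A & B) / prob(A)
assert p_B_given_A == Fraction(2, 5)
```

Note that no reweighting step is needed in code: dividing P(A ∩ B) by P(A) performs exactly the renormalization that the text carries out with w = 1/5.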
Definition 2.10: The conditional probability of B, given A, denoted by P(B|A), is defined by
P(B|A) = P(A ∩ B)/P(A), provided P(A) > 0.

As an additional illustration, suppose that our sample space S is the population of adults in a small town who have completed the requirements for a college degree. We shall categorize them according to gender and employment status. The data are given in Table 2.1.

Table 2.1: Categorization of the Adults in a Small Town

          Employed   Unemployed   Total
Male         460          40        500
Female       140         260        400
Total        600         300        900

One of these individuals is to be selected at random for a tour throughout the country to publicize the advantages of establishing new industries in the town. We shall be concerned with the following events:
M: a man is chosen,
E: the one chosen is employed.
Using the reduced sample space E, we find that
P(M|E) = 460/600 = 23/30.
Let n(A) denote the number of elements in any set A. Using this notation, since each adult has an equal chance of being selected, we can write
P(M|E) = n(E ∩ M)/n(E) = [n(E ∩ M)/n(S)] / [n(E)/n(S)] = P(E ∩ M)/P(E),
where P(E ∩ M) and P(E) are found from the original sample space S. To verify this result, note that
P(E) = 600/900 = 2/3 and P(E ∩ M) = 460/900 = 23/45.
Hence,
P(M|E) = (23/45)/(2/3) = 23/30,
as before.

Example 2.34: The probability that a regularly scheduled flight departs on time is P(D) = 0.83; the probability that it arrives on time is P(A) = 0.82; and the probability that it departs and arrives on time is P(D ∩ A) = 0.78. Find the probability that a plane
(a) arrives on time, given that it departed on time, and
(b) departed on time, given that it has arrived on time.

Solution: Using Definition 2.10, we have the following.
(a) The probability that a plane arrives on time, given that it departed on time, is
P(A|D) = P(D ∩ A)/P(D) = 0.78/0.83 = 0.94.
(b) The probability that a plane departed on time, given that it has arrived on time, is
P(D|A) = P(D ∩ A)/P(A) = 0.78/0.82 = 0.95.

The notion of conditional probability provides the capability of reevaluating the idea of probability of an event in light of additional information, that is, when it is known that another event has occurred. The probability P(A|B) is an updating of P(A) based on the knowledge that event B has occurred. In Example 2.34, it is important to know the probability that the flight arrives on time. Suppose one is given the information that the flight did not depart on time. Armed with this additional information, one can calculate the more pertinent probability P(A|D′), that is, the probability that it arrives on time, given that it did not depart on time. In many situations, the conclusions drawn from observing the more important conditional probability change the picture entirely. In this example, the computation of P(A|D′) is
P(A|D′) = P(A ∩ D′)/P(D′) = (0.82 − 0.78)/0.17 = 0.24.
As a result, the probability of an on-time arrival is diminished severely in the presence of the additional information.

Example 2.35: The concept of conditional probability has countless uses in both industrial and biomedical applications. Consider an industrial process in the textile industry in which strips of a particular type of cloth are being produced. These strips can be defective in two ways, length and nature of texture. For the case of the latter, the process of identification is very complicated.
It is known from historical information on the process that 10% of strips fail the length test, 5% fail the texture test, and only 0.8% fail both tests. If a strip is selected randomly from the process and a quick measurement identifies it as failing the length test, what is the probability that it is texture defective?

Solution: Consider the events
L: length defective, T: texture defective.
Given that the strip is length defective, the probability that this strip is texture defective is given by
P(T|L) = P(T ∩ L)/P(L) = 0.008/0.1 = 0.08.
Thus, knowing the conditional probability provides considerably more information than merely knowing P(T).
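Definition 2.10 applied to Example 2.35 is a one-line computation. The sketch below (illustrative Python, not part of the text) encodes the historical rates as exact fractions so the quotient comes out exactly:

```python
from fractions import Fraction

# Historical rates from Example 2.35, written as exact probabilities.
p_L = Fraction(10, 100)    # strip fails the length test
p_T = Fraction(5, 100)     # strip fails the texture test
p_T_and_L = Fraction(8, 1000)  # strip fails both tests (0.8%)

# Definition 2.10: P(T|L) = P(T ∩ L) / P(L).
p_T_given_L = p_T_and_L / p_L
assert p_T_given_L == Fraction(8, 100)  # 0.08, versus P(T) = 0.05
```

Conditioning on the length failure raises the texture-defect probability from 0.05 to 0.08, which is the "considerably more information" the text refers to.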
Independent Events

In the die-tossing experiment discussed on page 62, we note that P(B|A) = 2/5 whereas P(B) = 1/3. That is, P(B|A) ≠ P(B), indicating that B depends on A. Now consider an experiment in which 2 cards are drawn in succession from an ordinary deck, with replacement. The events are defined as
A: the first card is an ace,
B: the second card is a spade.
Since the first card is replaced, our sample space for both the first and the second draw consists of 52 cards, containing 4 aces and 13 spades. Hence,
P(B|A) = 13/52 = 1/4 and P(B) = 13/52 = 1/4.
That is, P(B|A) = P(B). When this is true, the events A and B are said to be independent.

Although conditional probability allows for an alteration of the probability of an event in the light of additional material, it also enables us to understand better the very important concept of independence or, in the present context, independent events. In the airport illustration in Example 2.34, P(A|D) differs from P(A). This suggests that the occurrence of D influenced A, and this is certainly expected in this illustration. However, consider the situation where we have events A and B and
P(A|B) = P(A).
In other words, the occurrence of B had no impact on the odds of occurrence of A. Here the occurrence of A is independent of the occurrence of B. The importance of the concept of independence cannot be overemphasized. It plays a vital role in material in virtually all chapters in this book and in all areas of applied statistics.

Definition 2.11: Two events A and B are independent if and only if
P(B|A) = P(B) or P(A|B) = P(A),
assuming the existence of the conditional probabilities. Otherwise, A and B are dependent.

The condition P(B|A) = P(B) implies that P(A|B) = P(A), and conversely. For the card-drawing experiments, where we showed that P(B|A) = P(B) = 1/4, we also can see that P(A|B) = P(A) = 1/13.
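The with-replacement card experiment can be verified by enumerating all ordered pairs of draws. This Python sketch (illustrative only, not part of the text) confirms P(B|A) = P(B) and, equivalently, the product form P(A ∩ B) = P(A)P(B) that Theorem 2.11 states later:

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = [(r, s) for r, s in product(ranks, suits)]  # 52 cards

# Drawing with replacement: every ordered pair of cards is equally likely.
pairs = list(product(deck, repeat=2))  # 52 * 52 = 2704 outcomes

def prob(event):
    """Probability of an event (a predicate on ordered pairs) by counting."""
    return Fraction(sum(1 for p in pairs if event(p)), len(pairs))

p_A = prob(lambda p: p[0][0] == "A")        # first card is an ace
p_B = prob(lambda p: p[1][1] == "spades")   # second card is a spade
p_AB = prob(lambda p: p[0][0] == "A" and p[1][1] == "spades")

assert p_AB == p_A * p_B          # independence: P(A ∩ B) = P(A)P(B)
assert p_AB / p_A == p_B          # hence P(B|A) = P(B) = 1/4
```

Repeating the enumeration without replacement (excluding pairs that reuse a card) would break both assertions, mirroring the dependent fuse-box example that follows.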
The Product Rule, or the Multiplicative Rule Multiplying the formula in Definition 2.10 by P(A), we obtain the following im- portant multiplicative rule (or product rule), which enables us to calculate
the probability that two events will both occur.

Theorem 2.10: If in an experiment the events A and B can both occur, then
P(A ∩ B) = P(A)P(B|A), provided P(A) > 0.

Thus, the probability that both A and B occur is equal to the probability that A occurs multiplied by the conditional probability that B occurs, given that A occurs. Since the events A ∩ B and B ∩ A are equivalent, it follows from Theorem 2.10 that we can also write
P(A ∩ B) = P(B ∩ A) = P(B)P(A|B).
In other words, it does not matter which event is referred to as A and which event is referred to as B.

Example 2.36: Suppose that we have a fuse box containing 20 fuses, of which 5 are defective. If 2 fuses are selected at random and removed from the box in succession without replacing the first, what is the probability that both fuses are defective?

Solution: We shall let A be the event that the first fuse is defective and B the event that the second fuse is defective; then we interpret A ∩ B as the event that A occurs and then B occurs after A has occurred. The probability of first removing a defective fuse is 1/4; then the probability of removing a second defective fuse from the remaining 4 is 4/19. Hence,
P(A ∩ B) = (1/4)(4/19) = 1/19.

Example 2.37: One bag contains 4 white balls and 3 black balls, and a second bag contains 3 white balls and 5 black balls. One ball is drawn from the first bag and placed unseen in the second bag. What is the probability that a ball now drawn from the second bag is black?

Solution: Let B1, B2, and W1 represent, respectively, the drawing of a black ball from bag 1, a black ball from bag 2, and a white ball from bag 1. We are interested in the union of the mutually exclusive events B1 ∩ B2 and W1 ∩ B2. The various possibilities and their probabilities are illustrated in Figure 2.8. Now
P[(B1 ∩ B2) or (W1 ∩ B2)] = P(B1 ∩ B2) + P(W1 ∩ B2)
= P(B1)P(B2|B1) + P(W1)P(B2|W1)
= (3/7)(6/9) + (4/7)(5/9) = 38/63.
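Both Example 2.36 and Example 2.37 are direct applications of Theorem 2.10. The Python sketch below (illustrative only, not part of the text) reproduces the two answers with exact fractions:

```python
from fractions import Fraction

# Example 2.36: fuse box with 20 fuses, 5 defective, drawn without replacement.
p_first_defective = Fraction(5, 20)        # P(A) = 1/4
p_second_given_first = Fraction(4, 19)     # P(B|A): 4 defectives left among 19

p_both_defective = p_first_defective * p_second_given_first  # Theorem 2.10
assert p_both_defective == Fraction(1, 19)

# Example 2.37: sum over the mutually exclusive first-draw branches of the tree.
p_black_from_bag2 = (Fraction(3, 7) * Fraction(6, 9)    # black, then black
                     + Fraction(4, 7) * Fraction(5, 9)) # white, then black
assert p_black_from_bag2 == Fraction(38, 63)
```

Each branch of the tree in Figure 2.8 is one product P(first draw)P(second draw | first draw); summing the branches that end in a black second draw is exactly the computation above.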
If, in Example 2.36, the first fuse is replaced and the fuses thoroughly rearranged before the second is removed, then the probability of a defective fuse on the second selection is still 1/4; that is, P(B|A) = P(B) and the events A and B are independent. When this is true, we can substitute P(B) for P(B|A) in Theorem 2.10 to obtain the following special multiplicative rule.
[Figure 2.8: Tree diagram for Example 2.37, showing first-draw branches B (3/7) and W (4/7), the resulting bag 2 compositions (3W, 6B and 4W, 5B), and the second-draw probabilities for each branch.]

Theorem 2.11: Two events A and B are independent if and only if
P(A ∩ B) = P(A)P(B).
Therefore, to obtain the probability that two independent events will both occur, we simply find the product of their individual probabilities.

Example 2.38: A small town has one fire engine and one ambulance available for emergencies. The probability that the fire engine is available when needed is 0.98, and the probability that the ambulance is available when called is 0.92. In the event of an injury resulting from a burning building, find the probability that both the ambulance and the fire engine will be available, assuming they operate independently.

Solution: Let A and B represent the respective events that the fire engine and the ambulance are available. Then
P(A ∩ B) = P(A)P(B) = (0.98)(0.92) = 0.9016.

Example 2.39: An electrical system consists of four components as illustrated in Figure 2.9. The system works if components A and B work and either of the components C or D works. The reliability (probability of working) of each component is also shown in Figure 2.9. Find the probability that (a) the entire system works and (b) the component C does not work, given that the entire system works. Assume that the four components work independently.

Solution: In this configuration of the system, A, B, and the subsystem consisting of C and D constitute a series circuit system, whereas the subsystem of C and D itself is a parallel circuit system.
(a) Clearly the probability that the entire system works can be calculated as
follows:
P[A ∩ B ∩ (C ∪ D)] = P(A)P(B)P(C ∪ D) = P(A)P(B)[1 − P(C′ ∩ D′)]
= P(A)P(B)[1 − P(C′)P(D′)]
= (0.9)(0.9)[1 − (1 − 0.8)(1 − 0.8)] = 0.7776.
The equalities above hold because of the independence among the four components.
(b) To calculate the conditional probability in this case, notice that
P = P(the system works but C does not work) / P(the system works)
= P(A ∩ B ∩ C′ ∩ D) / P(the system works)
= (0.9)(0.9)(1 − 0.8)(0.8) / 0.7776 = 0.1667.

[Figure 2.9: An electrical system for Example 2.39; components A and B, each with reliability 0.9, in series with the parallel subsystem of C and D, each with reliability 0.8.]

The multiplicative rule can be extended to more than two-event situations.

Theorem 2.12: If, in an experiment, the events A1, A2, . . . , Ak can occur, then
P(A1 ∩ A2 ∩ · · · ∩ Ak) = P(A1)P(A2|A1)P(A3|A1 ∩ A2) · · · P(Ak|A1 ∩ A2 ∩ · · · ∩ Ak−1).
If the events A1, A2, . . . , Ak are independent, then
P(A1 ∩ A2 ∩ · · · ∩ Ak) = P(A1)P(A2) · · · P(Ak).

Example 2.40: Three cards are drawn in succession, without replacement, from an ordinary deck of playing cards. Find the probability that the event A1 ∩ A2 ∩ A3 occurs, where A1 is the event that the first card is a red ace, A2 is the event that the second card is a 10 or a jack, and A3 is the event that the third card is greater than 3 but less than 7.

Solution: First we define the events
A1: the first card is a red ace,
A2: the second card is a 10 or a jack,
A3: the third card is greater than 3 but less than 7.
Now
P(A1) = 2/52, P(A2|A1) = 8/51, P(A3|A1 ∩ A2) = 12/50,
and hence, by Theorem 2.12,
P(A1 ∩ A2 ∩ A3) = P(A1)P(A2|A1)P(A3|A1 ∩ A2) = (2/52)(8/51)(12/50) = 8/5525.

The property of independence stated in Theorem 2.11 can be extended to deal with more than two events. Consider, for example, the case of three events A, B, and C. It is not sufficient to only have P(A ∩ B ∩ C) = P(A)P(B)P(C) as a definition of independence among the three. Suppose A = B and C = φ, the null set. Although A ∩ B ∩ C = φ, which results in P(A ∩ B ∩ C) = 0 = P(A)P(B)P(C), events A and B are not independent. Hence, we have the following definition.

Definition 2.12: A collection of events A = {A1, . . . , An} are mutually independent if for any subset Ai1 , . . . , Aik of A, for k ≤ n, we have
P(Ai1 ∩ · · · ∩ Aik ) = P(Ai1 ) · · · P(Aik ).

Exercises

2.73 If R is the event that a convict committed armed robbery and D is the event that the convict pushed dope, state in words what probabilities are expressed by
(a) P(R|D);
(b) P(D′|R);
(c) P(R′|D′).

2.74 A class in advanced physics is composed of 10 juniors, 30 seniors, and 10 graduate students. The final grades show that 3 of the juniors, 10 of the seniors, and 5 of the graduate students received an A for the course. If a student is chosen at random from this class and is found to have earned an A, what is the probability that he or she is a senior?

2.75 A random sample of 200 adults are classified below by sex and their level of education attained.

Education    Male   Female
Elementary    38      45
Secondary     28      50
College       22      17

If a person is picked at random from this group, find the probability that
(a) the person is a male, given that the person has a secondary education;
(b) the person does not have a college degree, given that the person is a female.
2.76 In an experiment to study the relationship of hypertension and smoking habits, the following data are collected for 180 individuals:

      Nonsmokers   Moderate Smokers   Heavy Smokers
H         21              36                30
NH        48              26                19

where H and NH in the table stand for Hypertension and Nonhypertension, respectively. If one of these individuals is selected at random, find the probability that the person is
(a) experiencing hypertension, given that the person is a heavy smoker;
(b) a nonsmoker, given that the person is experiencing no hypertension.

2.77 In the senior year of a high school graduating class of 100 students, 42 studied mathematics, 68 studied psychology, 54 studied history, 22 studied both mathematics and history, 25 studied both mathematics and psychology, 7 studied history but neither mathematics nor psychology, 10 studied all three subjects, and 8 did not take any of the three. Randomly select
a student from the class and find the probabilities of the following events.
(a) A person enrolled in psychology takes all three subjects.
(b) A person not taking psychology is taking both history and mathematics.

2.78 A manufacturer of a flu vaccine is concerned about the quality of its flu serum. Batches of serum are processed by three different departments having rejection rates of 0.10, 0.08, and 0.12, respectively. The inspections by the three departments are sequential and independent.
(a) What is the probability that a batch of serum survives the first departmental inspection but is rejected by the second department?
(b) What is the probability that a batch of serum is rejected by the third department?

2.79 In USA Today (Sept. 5, 1996), the results of a survey involving the use of sleepwear while traveling were listed as follows:

             Male    Female   Total
Underwear    0.220   0.024    0.244
Nightgown    0.002   0.180    0.182
Nothing      0.160   0.018    0.178
Pajamas      0.102   0.073    0.175
T-shirt      0.046   0.088    0.134
Other        0.084   0.003    0.087

(a) What is the probability that a traveler is a female who sleeps in the nude?
(b) What is the probability that a traveler is male?
(c) Assuming the traveler is male, what is the probability that he sleeps in pajamas?
(d) What is the probability that a traveler is male if the traveler sleeps in pajamas or a T-shirt?

2.80 The probability that an automobile being filled with gasoline also needs an oil change is 0.25; the probability that it needs a new oil filter is 0.40; and the probability that both the oil and the filter need changing is 0.14.
(a) If the oil has to be changed, what is the probability that a new oil filter is needed?
(b) If a new oil filter is needed, what is the probability that the oil has to be changed?

2.81 The probability that a married man watches a certain television show is 0.4, and the probability that a married woman watches the show is 0.5.
The probability that a man watches the show, given that his wife does, is 0.7. Find the probability that
(a) a married couple watches the show;
(b) a wife watches the show, given that her husband does;
(c) at least one member of a married couple will watch the show.

2.82 For married couples living in a certain suburb, the probability that the husband will vote on a bond referendum is 0.21, the probability that the wife will vote on the referendum is 0.28, and the probability that both the husband and the wife will vote is 0.15. What is the probability that
(a) at least one member of a married couple will vote?
(b) a wife will vote, given that her husband will vote?
(c) a husband will vote, given that his wife will not vote?

2.83 The probability that a vehicle entering the Luray Caverns has Canadian license plates is 0.12; the probability that it is a camper is 0.28; and the probability that it is a camper with Canadian license plates is 0.09. What is the probability that
(a) a camper entering the Luray Caverns has Canadian license plates?
(b) a vehicle with Canadian license plates entering the Luray Caverns is a camper?
(c) a vehicle entering the Luray Caverns does not have Canadian plates or is not a camper?

2.84 The probability that the head of a household is home when a telemarketing representative calls is 0.4. Given that the head of the house is home, the probability that goods will be bought from the company is 0.3. Find the probability that the head of the house is home and goods are bought from the company.

2.85 The probability that a doctor correctly diagnoses a particular illness is 0.7. Given that the doctor makes an incorrect diagnosis, the probability that the patient files a lawsuit is 0.9. What is the probability that the doctor makes an incorrect diagnosis and the patient sues?

2.86 In 1970, 11% of Americans completed four years of college; 43% of them were women.
In 1990, 22% of Americans completed four years of college; 53% of them were women (Time, Jan. 19, 1996).
(a) Given that a person completed four years of college in 1970, what is the probability that the person was a woman?
(b) What is the probability that a woman finished four years of college in 1990?
(c) What is the probability that a man had not finished college in 1990?
Exercises

2.87 A real estate agent has 8 master keys to open several new homes. Only 1 master key will open any given house. If 40% of these homes are usually left unlocked, what is the probability that the real estate agent can get into a specific home if the agent selects 3 master keys at random before leaving the office?

2.88 Before the distribution of certain statistical software, every fourth compact disk (CD) is tested for accuracy. The testing process consists of running four independent programs and checking the results. The failure rates for the four testing programs are, respectively, 0.01, 0.03, 0.02, and 0.01.
(a) What is the probability that a CD was tested and failed any test?
(b) Given that a CD was tested, what is the probability that it failed program 2 or 3?
(c) In a sample of 100, how many CDs would you expect to be rejected?
(d) Given that a CD was defective, what is the probability that it was tested?

2.89 A town has two fire engines operating independently. The probability that a specific engine is available when needed is 0.96.
(a) What is the probability that neither is available when needed?
(b) What is the probability that a fire engine is available when needed?

2.90 Pollution of the rivers in the United States has been a problem for many years. Consider the following events:
A: the river is polluted,
B: a sample of water tested detects pollution,
C: fishing is permitted.
Assume P(A) = 0.3, P(B|A) = 0.75, P(B|A′) = 0.20, P(C|A ∩ B) = 0.20, P(C|A′ ∩ B) = 0.15, P(C|A ∩ B′) = 0.80, and P(C|A′ ∩ B′) = 0.90.
(a) Find P(A ∩ B ∩ C).
(b) Find P(B′ ∩ C).
(c) Find P(C).
(d) Find the probability that the river is polluted, given that fishing is permitted and the sample tested did not detect pollution.
2.91 Find the probability of randomly selecting 4 good quarts of milk in succession from a cooler containing 20 quarts of which 5 have spoiled, by using
(a) the first formula of Theorem 2.12 on page 68;
(b) the formulas of Theorem 2.6 and Rule 2.3 on pages 50 and 54, respectively.

2.92 Suppose the diagram of an electrical system is as given in Figure 2.10. What is the probability that the system works? Assume the components fail independently.

2.93 A circuit system is given in Figure 2.11. Assume the components fail independently.
(a) What is the probability that the entire system works?
(b) Given that the system works, what is the probability that the component A is not working?

2.94 In the situation of Exercise 2.93, it is known that the system does not work. What is the probability that the component A also does not work?

[Figure 2.10: Diagram for Exercise 2.92; component reliabilities D 0.9, A 0.95, B 0.7, C 0.8.]

[Figure 2.11: Diagram for Exercise 2.93; component reliabilities A 0.7, B 0.7, C 0.8, D 0.8, E 0.8.]
2.7 Bayes’ Rule

Bayesian statistics is a collection of tools used in a special form of statistical inference that applies to the analysis of experimental data in many practical situations in science and engineering. Bayes’ rule is one of the most important rules in probability theory. It is the foundation of Bayesian inference, which will be discussed in Chapter 18.

Total Probability

Let us now return to the illustration of Section 2.6, where an individual is being selected at random from the adults of a small town to tour the country and publicize the advantages of establishing new industries in the town. Suppose that we are now given the additional information that 36 of those employed and 12 of those unemployed are members of the Rotary Club. We wish to find the probability of the event A that the individual selected is a member of the Rotary Club. Referring to Figure 2.12, we can write A as the union of the two mutually exclusive events E ∩ A and E′ ∩ A. Hence, A = (E ∩ A) ∪ (E′ ∩ A), and by Corollary 2.1 of Theorem 2.7, and then Theorem 2.10, we can write

P(A) = P[(E ∩ A) ∪ (E′ ∩ A)] = P(E ∩ A) + P(E′ ∩ A) = P(E)P(A|E) + P(E′)P(A|E′).

[Figure 2.12: Venn diagram for the events A, E, and E′.]

The data of Section 2.6, together with the additional data given above for the set A, enable us to compute

P(E) = 600/900 = 2/3, P(A|E) = 36/600 = 3/50,

and

P(E′) = 1/3, P(A|E′) = 12/300 = 1/25.

If we display these probabilities by means of the tree diagram of Figure 2.13, where the first branch yields the probability P(E)P(A|E) and the second branch yields
the probability P(E′)P(A|E′), it follows that

P(A) = (2/3)(3/50) + (1/3)(1/25) = 4/75.

[Figure 2.13: Tree diagram for the data on page 63, using additional information on page 72; the branches carry P(E) = 2/3 with P(A|E) = 3/50, and P(E′) = 1/3 with P(A|E′) = 1/25.]

A generalization of the foregoing illustration to the case where the sample space is partitioned into k subsets is covered by the following theorem, sometimes called the theorem of total probability or the rule of elimination.

Theorem 2.13: If the events B1, B2, . . . , Bk constitute a partition of the sample space S such that P(Bi) ≠ 0 for i = 1, 2, . . . , k, then for any event A of S,

P(A) = Σ_{i=1}^{k} P(Bi ∩ A) = Σ_{i=1}^{k} P(Bi)P(A|Bi).

[Figure 2.14: Partitioning the sample space S into B1, B2, B3, B4, B5, . . . .]
Proof: Consider the Venn diagram of Figure 2.14. The event A is seen to be the union of the mutually exclusive events B1 ∩ A, B2 ∩ A, . . . , Bk ∩ A; that is,

A = (B1 ∩ A) ∪ (B2 ∩ A) ∪ · · · ∪ (Bk ∩ A).

Using Corollary 2.2 of Theorem 2.7 and Theorem 2.10, we have

P(A) = P[(B1 ∩ A) ∪ (B2 ∩ A) ∪ · · · ∪ (Bk ∩ A)]
     = P(B1 ∩ A) + P(B2 ∩ A) + · · · + P(Bk ∩ A)
     = Σ_{i=1}^{k} P(Bi ∩ A) = Σ_{i=1}^{k} P(Bi)P(A|Bi).

Example 2.41: In a certain assembly plant, three machines, B1, B2, and B3, make 30%, 45%, and 25%, respectively, of the products. It is known from past experience that 2%, 3%, and 2% of the products made by each machine, respectively, are defective. Now, suppose that a finished product is randomly selected. What is the probability that it is defective?

Solution: Consider the following events:
A: the product is defective,
B1: the product is made by machine B1,
B2: the product is made by machine B2,
B3: the product is made by machine B3.
Applying the rule of elimination, we can write

P(A) = P(B1)P(A|B1) + P(B2)P(A|B2) + P(B3)P(A|B3).

Referring to the tree diagram of Figure 2.15, we find that the three branches give the probabilities

P(B1)P(A|B1) = (0.30)(0.02) = 0.006,
P(B2)P(A|B2) = (0.45)(0.03) = 0.0135,
P(B3)P(A|B3) = (0.25)(0.02) = 0.005,

and hence P(A) = 0.006 + 0.0135 + 0.005 = 0.0245.
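The rule-of-elimination calculation in Example 2.41 is just a weighted sum, which the following Python sketch makes explicit (the function and variable names here are our own, not the text's):

```python
# Theorem of total probability: P(A) = sum_i P(B_i) * P(A|B_i)
def total_probability(priors, conditionals):
    """priors: P(B_i) over a partition; conditionals: P(A|B_i)."""
    return sum(p * c for p, c in zip(priors, conditionals))

# Example 2.41: machine shares and per-machine defect rates
priors = [0.30, 0.45, 0.25]        # P(B1), P(B2), P(B3)
defect_rates = [0.02, 0.03, 0.02]  # P(A|B1), P(A|B2), P(A|B3)

p_defective = total_probability(priors, defect_rates)
print(round(p_defective, 4))  # 0.0245
```

Each term of the sum corresponds to one branch of the tree diagram in Figure 2.15.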
[Figure 2.15: Tree diagram for Example 2.41; branch probabilities P(B1) = 0.3, P(B2) = 0.45, P(B3) = 0.25 and P(A|B1) = 0.02, P(A|B2) = 0.03, P(A|B3) = 0.02.]

Bayes’ Rule

Instead of asking for P(A) in Example 2.41, by the rule of elimination, suppose that we now consider the problem of finding the conditional probability P(Bi|A). In other words, suppose that a product was randomly selected and it is defective. What is the probability that this product was made by machine Bi? Questions of this type can be answered by using the following theorem, called Bayes’ rule:

Theorem 2.14 (Bayes’ Rule): If the events B1, B2, . . . , Bk constitute a partition of the sample space S such that P(Bi) ≠ 0 for i = 1, 2, . . . , k, then for any event A in S such that P(A) ≠ 0,

P(Br|A) = P(Br ∩ A) / Σ_{i=1}^{k} P(Bi ∩ A) = P(Br)P(A|Br) / Σ_{i=1}^{k} P(Bi)P(A|Bi)   for r = 1, 2, . . . , k.

Proof: By the definition of conditional probability,

P(Br|A) = P(Br ∩ A) / P(A),

and then using Theorem 2.13 in the denominator, we have

P(Br|A) = P(Br ∩ A) / Σ_{i=1}^{k} P(Bi ∩ A) = P(Br)P(A|Br) / Σ_{i=1}^{k} P(Bi)P(A|Bi),

which completes the proof.

Example 2.42: With reference to Example 2.41, if a product was chosen randomly and found to be defective, what is the probability that it was made by machine B3?

Solution: Using Bayes’ rule, we write

P(B3|A) = P(B3)P(A|B3) / [P(B1)P(A|B1) + P(B2)P(A|B2) + P(B3)P(A|B3)],
and then substituting the probabilities calculated in Example 2.41, we have

P(B3|A) = 0.005 / (0.006 + 0.0135 + 0.005) = 0.005/0.0245 = 10/49.

In view of the fact that a defective product was selected, this result suggests that it probably was not made by machine B3.

Example 2.43: A manufacturing firm employs three analytical plans for the design and development of a particular product. For cost reasons, all three are used at varying times. In fact, plans 1, 2, and 3 are used for 30%, 20%, and 50% of the products, respectively. The defect rate differs among the three procedures as follows:

P(D|P1) = 0.01, P(D|P2) = 0.03, P(D|P3) = 0.02,

where P(D|Pj) is the probability of a defective product, given plan j. If a random product was observed and found to be defective, which plan was most likely used and thus responsible?

Solution: From the statement of the problem, P(P1) = 0.30, P(P2) = 0.20, and P(P3) = 0.50; we must find P(Pj|D) for j = 1, 2, 3. Bayes’ rule (Theorem 2.14) gives

P(P1|D) = P(P1)P(D|P1) / [P(P1)P(D|P1) + P(P2)P(D|P2) + P(P3)P(D|P3)]
        = (0.30)(0.01) / [(0.30)(0.01) + (0.20)(0.03) + (0.50)(0.02)]
        = 0.003/0.019 = 0.158.

Similarly,

P(P2|D) = (0.20)(0.03)/0.019 = 0.316 and P(P3|D) = (0.50)(0.02)/0.019 = 0.526.

The conditional probability of plan 3, given a defect, is the largest of the three; thus a defective for a random product is most likely the result of the use of plan 3.

A statistical methodology based on Bayes’ rule, called the Bayesian approach, has attracted a lot of attention in applications. An introduction to the Bayesian method will be discussed in Chapter 18.

Exercises

2.95 In a certain region of the country it is known from past experience that the probability of selecting an adult over 40 years of age with cancer is 0.05.
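Examples 2.42 and 2.43 both reduce to the same posterior computation, sketched below in Python (the helper name is ours, not the text's):

```python
# Bayes' rule: P(B_r|A) = P(B_r)P(A|B_r) / sum_i P(B_i)P(A|B_i)
def posteriors(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # P(A), by the theorem of total probability
    return [j / total for j in joint]

# Example 2.43: plan usage shares and per-plan defect rates
post = posteriors([0.30, 0.20, 0.50], [0.01, 0.03, 0.02])
print([round(p, 3) for p in post])  # [0.158, 0.316, 0.526]

# Example 2.42: probability a defective came from machine B3 (= 10/49)
print(round(posteriors([0.30, 0.45, 0.25], [0.02, 0.03, 0.02])[2], 4))  # 0.2041
```

Note that the prior on plan 3 (0.50) outweighs plan 2's higher defect rate, which is exactly why plan 3 ends up with the largest posterior.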
If the probability of a doctor correctly diagnosing a person with cancer as having the disease is 0.78 and the probability of incorrectly diagnosing a person without cancer as having the disease is 0.06, what is the probability that an adult over 40 years of age is diagnosed as having cancer?

2.96 Police plan to enforce speed limits by using radar traps at four different locations within the city limits. The radar traps at each of the locations L1, L2, L3, and L4 will be operated 40%, 30%, 20%, and 30% of
the time. If a person who is speeding on her way to work has probabilities of 0.2, 0.1, 0.5, and 0.2, respectively, of passing through these locations, what is the probability that she will receive a speeding ticket?

2.97 Referring to Exercise 2.95, what is the probability that a person diagnosed as having cancer actually has the disease?

2.98 If the person in Exercise 2.96 received a speeding ticket on her way to work, what is the probability that she passed through the radar trap located at L2?

2.99 Suppose that the four inspectors at a film factory are supposed to stamp the expiration date on each package of film at the end of the assembly line. John, who stamps 20% of the packages, fails to stamp the expiration date once in every 200 packages; Tom, who stamps 60% of the packages, fails to stamp the expiration date once in every 100 packages; Jeff, who stamps 15% of the packages, fails to stamp the expiration date once in every 90 packages; and Pat, who stamps 5% of the packages, fails to stamp the expiration date once in every 200 packages. If a customer complains that her package of film does not show the expiration date, what is the probability that it was inspected by John?

2.100 A regional telephone company operates three identical relay stations at different locations. During a one-year period, the number of malfunctions reported by each station and the causes are shown below.

Station                              A  B  C
Problems with electricity supplied   2  1  1
Computer malfunction                 4  3  2
Malfunctioning electrical equipment  5  4  2
Caused by other human errors         7  7  5

Suppose that a malfunction was reported and it was found to be caused by other human errors. What is the probability that it came from station C?

2.101 A paint-store chain produces and sells latex and semigloss paint. Based on long-range sales, the probability that a customer will purchase latex paint is 0.75. Of those that purchase latex paint, 60% also purchase rollers.
But only 30% of semigloss paint buyers purchase rollers. A randomly selected buyer purchases a roller and a can of paint. What is the probability that the paint is latex?

2.102 Denote by A, B, and C the events that a grand prize is behind doors A, B, and C, respectively. Suppose you randomly picked a door, say A. The game host opened a door, say B, and showed there was no prize behind it. Now the host offers you the option of either staying at the door that you picked (A) or switching to the remaining unopened door (C). Use probability to explain whether you should switch or not.

Review Exercises

2.103 A truth serum has the property that 90% of the guilty suspects are properly judged while, of course, 10% of the guilty suspects are improperly found innocent. On the other hand, innocent suspects are misjudged 1% of the time. If the suspect was selected from a group of suspects of which only 5% have ever committed a crime, and the serum indicates that he is guilty, what is the probability that he is innocent?

2.104 An allergist claims that 50% of the patients she tests are allergic to some type of weed. What is the probability that
(a) exactly 3 of her next 4 patients are allergic to weeds?
(b) none of her next 4 patients is allergic to weeds?

2.105 By comparing appropriate regions of Venn diagrams, verify that
(a) (A ∩ B) ∪ (A ∩ B′) = A;
(b) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).

2.106 The probabilities that a service station will pump gas into 0, 1, 2, 3, 4, or 5 or more cars during a certain 30-minute period are 0.03, 0.18, 0.24, 0.28, 0.10, and 0.17, respectively. Find the probability that in this 30-minute period
(a) more than 2 cars receive gas;
(b) at most 4 cars receive gas;
(c) 4 or more cars receive gas.

2.107 How many bridge hands are possible containing 4 spades, 6 diamonds, 1 club, and 2 hearts?
2.108 If the probability is 0.1 that a person will make a mistake on his or her state income tax return, find the probability that
(a) four totally unrelated persons each make a mistake;
(b) Mr. Jones and Ms. Clark both make mistakes, and Mr. Roberts and Ms. Williams do not make a mistake.
2.109 A large industrial firm uses three local motels to provide overnight accommodations for its clients. From past experience it is known that 20% of the clients are assigned rooms at the Ramada Inn, 50% at the Sheraton, and 30% at the Lakeview Motor Lodge. If the plumbing is faulty in 5% of the rooms at the Ramada Inn, in 4% of the rooms at the Sheraton, and in 8% of the rooms at the Lakeview Motor Lodge, what is the probability that
(a) a client will be assigned a room with faulty plumbing?
(b) a person with a room having faulty plumbing was assigned accommodations at the Lakeview Motor Lodge?

2.110 The probability that a patient recovers from a delicate heart operation is 0.8. What is the probability that
(a) exactly 2 of the next 3 patients who have this operation survive?
(b) all of the next 3 patients who have this operation survive?

2.111 In a certain federal prison, it is known that 2/3 of the inmates are under 25 years of age. It is also known that 3/5 of the inmates are male and that 5/8 of the inmates are female or 25 years of age or older. What is the probability that a prisoner selected at random from this prison is female and at least 25 years old?

2.112 From 4 red, 5 green, and 6 yellow apples, how many selections of 9 apples are possible if 3 of each color are to be selected?

2.113 From a box containing 6 black balls and 4 green balls, 3 balls are drawn in succession, each ball being replaced in the box before the next draw is made. What is the probability that
(a) all 3 are the same color?
(b) each color is represented?

2.114 A shipment of 12 television sets contains 3 defective sets. In how many ways can a hotel purchase 5 of these sets and receive at least 2 of the defective sets?

2.115 A certain federal agency employs three consulting firms (A, B, and C) with probabilities 0.40, 0.35, and 0.25, respectively.
From past experience it is known that the probabilities of cost overruns for the firms are 0.05, 0.03, and 0.15, respectively. Suppose a cost overrun is experienced by the agency.
(a) What is the probability that the consulting firm involved is company C?
(b) What is the probability that it is company A?

2.116 A manufacturer is studying the effects of cooking temperature, cooking time, and type of cooking oil for making potato chips. Three different temperatures, 4 different cooking times, and 3 different oils are to be used.
(a) What is the total number of combinations to be studied?
(b) How many combinations will be used for each type of oil?
(c) Discuss why permutations are not an issue in this exercise.

2.117 Consider the situation in Exercise 2.116, and suppose that the manufacturer can try only two combinations in a day.
(a) What is the probability that any given set of two runs is chosen?
(b) What is the probability that the highest temperature is used in either of these two combinations?

2.118 A certain form of cancer is known to be found in women over 60 with probability 0.07. A blood test exists for the detection of the disease, but the test is not infallible. In fact, it is known that 10% of the time the test gives a false negative (i.e., the test incorrectly gives a negative result) and 5% of the time the test gives a false positive (i.e., incorrectly gives a positive result). If a woman over 60 is known to have taken the test and received a favorable (i.e., negative) result, what is the probability that she has the disease?

2.119 A producer of a certain type of electronic component ships to suppliers in lots of twenty. Suppose that 60% of all such lots contain no defective components, 30% contain one defective component, and 10% contain two defective components. A lot is picked, two components from the lot are randomly selected and tested, and neither is defective.
(a) What is the probability that zero defective components exist in the lot?
(b) What is the probability that one defective exists in the lot?
(c) What is the probability that two defectives exist in the lot?

2.120 A rare disease exists with which only 1 in 500 is affected. A test for the disease exists, but of course it is not infallible. A correct positive result (patient actually has the disease) occurs 95% of the time, while a false positive result (patient does not have the disease) occurs 1% of the time. If a randomly selected individual is tested and the result is positive, what is the probability that the individual has the disease?

2.121 A construction company employs two sales engineers. Engineer 1 does the work of estimating cost for 70% of jobs bid by the company. Engineer 2 does the work for 30% of jobs bid by the company. It is known that the error rate for engineer 1 is such that 0.02 is the probability of an error when he does the work, whereas the probability of an error in the work of engineer 2 is 0.04. Suppose a bid arrives and a serious error occurs in estimating cost. Which engineer would you guess did the work? Explain and show all work.

2.122 In the field of quality control, the science of statistics is often used to determine if a process is “out of control.” Suppose the process is, indeed, out of control and 20% of items produced are defective.
(a) If three items arrive off the process line in succession, what is the probability that all three are defective?
(b) If four items arrive in succession, what is the probability that three are defective?

2.123 An industrial plant is conducting a study to determine how quickly injured workers are back on the job following injury. Records show that 10% of all injured workers are admitted to the hospital for treatment and 15% are back on the job the next day. In addition, studies show that 2% are both admitted for hospital treatment and back on the job the next day. If a worker is injured, what is the probability that the worker will either be admitted to a hospital or be back on the job the next day or both?

2.124 A firm is accustomed to training operators who do certain tasks on a production line. Those operators who attend the training course are known to be able to meet their production quotas 90% of the time. New operators who do not take the training course only meet their quotas 65% of the time.
Fifty percent of new operators attend the course. Given that a new operator meets her production quota, what is the probability that she attended the program?

2.125 A survey of those using a particular statistical software system indicated that 10% were dissatisfied. Half of those dissatisfied purchased the system from vendor A. It is also known that 20% of those surveyed purchased from vendor A. Given that the software was purchased from vendor A, what is the probability that that particular user is dissatisfied?

2.126 During bad economic times, industrial workers are dismissed and are often replaced by machines. The history of 100 workers whose loss of employment is attributable to technological advances is reviewed. For each of these individuals, it is determined if he or she was given an alternative job within the same company, found a job with another company in the same field, found a job in a new field, or has been unemployed for 1 year. In addition, the union status of each worker is recorded. The following table summarizes the results.

                          Union  Nonunion
Same Company                 40        15
New Company (same field)     13        10
New Field                     4        11
Unemployed                    2         5

(a) If the selected worker found a job with a new company in the same field, what is the probability that the worker is a union member?
(b) If the worker is a union member, what is the probability that the worker has been unemployed for a year?

2.127 There is a 50-50 chance that the queen carries the gene of hemophilia. If she is a carrier, then each prince has a 50-50 chance of having hemophilia independently. If the queen is not a carrier, the prince will not have the disease. Suppose the queen has had three princes without the disease. What is the probability the queen is a carrier?

2.128 Group Project: Give each student a bag of chocolate M&Ms. Divide the students into groups of 5 or 6. Calculate the relative frequency distribution for color of M&Ms for each group.
(a) What is your estimated probability of randomly picking a yellow? a red?
(b) Redo the calculations for the whole classroom. Did the estimates change?
(c) Do you believe there is an equal number of each color in a process batch? Discuss.

2.8 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

This chapter contains the fundamental definitions, rules, and theorems that provide a foundation that renders probability an important tool for evaluating
scientific and engineering systems. The evaluations are often in the form of probability computations, as is illustrated in examples and exercises. Concepts such as independence, conditional probability, Bayes’ rule, and others tend to mesh nicely to solve practical problems in which the bottom line is to produce a probability value. Illustrations in exercises are abundant. See, for example, Exercises 2.100 and 2.101. In these and many other exercises, an evaluation of a scientific system is being made judiciously from a probability calculation, using rules and definitions discussed in the chapter.

Now, how does the material in this chapter relate to that in other chapters? It is best to answer this question by looking ahead to Chapter 3. Chapter 3 also deals with the type of problems in which it is important to calculate probabilities. We illustrate how system performance depends on the value of one or more probabilities. Once again, conditional probability and independence play a role. However, new concepts arise which allow more structure based on the notion of a random variable and its probability distribution. Recall that the idea of frequency distributions was discussed briefly in Chapter 1. The probability distribution displays, in equation form or graphically, the total information necessary to describe a probability structure. For example, in Review Exercise 2.122 the random variable of interest is the number of defective items, a discrete measurement. Thus, the probability distribution would reveal the probability structure for the number of defective items out of the number selected from the process. As the reader moves into Chapter 3 and beyond, it will become apparent that assumptions will be required in order to determine and thus make use of probability distributions for solving scientific problems.
Chapter 3
Random Variables and Probability Distributions

3.1 Concept of a Random Variable

Statistics is concerned with making inferences about populations and population characteristics. Experiments are conducted with results that are subject to chance. The testing of a number of electronic components is an example of a statistical experiment, a term that is used to describe any process by which several chance observations are generated. It is often important to allocate a numerical description to the outcome. For example, the sample space giving a detailed description of each possible outcome when three electronic components are tested may be written

S = {NNN, NND, NDN, DNN, NDD, DND, DDN, DDD},

where N denotes nondefective and D denotes defective. One is naturally concerned with the number of defectives that occur. Thus, each point in the sample space will be assigned a numerical value of 0, 1, 2, or 3. These values are, of course, random quantities determined by the outcome of the experiment. They may be viewed as values assumed by the random variable X, the number of defective items when three electronic components are tested.

Definition 3.1: A random variable is a function that associates a real number with each element in the sample space.

We shall use a capital letter, say X, to denote a random variable and its corresponding small letter, x in this case, for one of its values. In the electronic component testing illustration above, we notice that the random variable X assumes the value 2 for all elements in the subset

E = {DDN, DND, NDD}

of the sample space S. That is, each possible value of X represents an event that is a subset of the sample space for the given experiment.
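The three-component illustration above can be checked mechanically. The short Python sketch below (variable names are our own) enumerates the sample space and shows that the event {X = 2} is exactly the subset E listed in the text:

```python
from itertools import product

# Sample space for testing three components (N = nondefective, D = defective),
# and the random variable X = number of defectives in each outcome.
sample_space = ["".join(o) for o in product("ND", repeat=3)]
X = {outcome: outcome.count("D") for outcome in sample_space}

# The event {X = 2} is a subset of the sample space:
E = sorted(o for o, x in X.items() if x == 2)
print(E)  # ['DDN', 'DND', 'NDD']
```

Since each possible value of X corresponds to such a subset, probabilities for X can be read directly off probabilities of events in S.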
Example 3.1: Two balls are drawn in succession without replacement from an urn containing 4 red balls and 3 black balls. The possible outcomes and the values y of the random variable Y, where Y is the number of red balls, are

Sample Space   y
RR             2
RB             1
BR             1
BB             0

Example 3.2: A stockroom clerk returns three safety helmets at random to three steel mill employees who had previously checked them. If Smith, Jones, and Brown, in that order, receive one of the three hats, list the sample points for the possible orders of returning the helmets, and find the value m of the random variable M that represents the number of correct matches.

Solution: If S, J, and B stand for Smith’s, Jones’s, and Brown’s helmets, respectively, then the possible arrangements in which the helmets may be returned and the number of correct matches are

Sample Space   m
SJB            3
SBJ            1
BJS            1
JSB            1
JBS            0
BSJ            0

In each of the two preceding examples, the sample space contains a finite number of elements. On the other hand, when a die is thrown until a 5 occurs, we obtain a sample space with an unending sequence of elements,

S = {F, NF, NNF, NNNF, . . . },

where F and N represent, respectively, the occurrence and nonoccurrence of a 5. But even in this experiment, the number of elements can be equated to the number of whole numbers so that there is a first element, a second element, a third element, and so on, and in this sense can be counted.

There are cases where the random variable is categorical in nature. Variables, often called dummy variables, are used. A good illustration is the case in which the random variable is binary in nature, as shown in the following example.

Example 3.3: Consider the simple condition in which components are arriving from the production line and they are stipulated to be defective or not defective. Define the random variable X by

X = 1, if the component is defective,
X = 0, if the component is not defective.
Clearly the assignment of 1 or 0 is arbitrary though quite convenient. This will become clear in later chapters. The random variable for which 0 and 1 are chosen to describe the two possible values is called a Bernoulli random variable.

Further illustrations of random variables are revealed in the following examples.

Example 3.4: Statisticians use sampling plans to either accept or reject batches or lots of material. Suppose one of these sampling plans involves sampling independently 10 items from a lot of 100 items in which 12 are defective. Let X be the random variable defined as the number of items found defective in the sample of 10. In this case, the random variable takes on the values 0, 1, 2, . . . , 9, 10.

Example 3.5: Suppose a sampling plan involves sampling items from a process until a defective is observed. The evaluation of the process will depend on how many consecutive items are observed. In that regard, let X be a random variable defined by the number of items observed before a defective is found. With N a nondefective and D a defective, sample spaces are S = {D} given X = 1, S = {ND} given X = 2, S = {NND} given X = 3, and so on.

Example 3.6: Interest centers around the proportion of people who respond to a certain mail order solicitation. Let X be that proportion. X is a random variable that takes on all values x for which 0 ≤ x ≤ 1.

Example 3.7: Let X be the random variable defined by the waiting time, in hours, between successive speeders spotted by a radar unit. The random variable X takes on all values x for which x ≥ 0.

Definition 3.2: If a sample space contains a finite number of possibilities or an unending sequence with as many elements as there are whole numbers, it is called a discrete sample space.

The outcomes of some statistical experiments may be neither finite nor countable.
Such is the case, for example, when one conducts an investigation measuring the distances that a certain make of automobile will travel over a prescribed test course on 5 liters of gasoline. Assuming distance to be a variable measured to any degree of accuracy, then clearly we have an infinite number of possible distances in the sample space that cannot be equated to the number of whole numbers. Or, if one were to record the length of time for a chemical reaction to take place, once again the possible time intervals making up our sample space would be infinite in number and uncountable. We see now that all sample spaces need not be discrete.

Definition 3.3: If a sample space contains an infinite number of possibilities equal to the number of points on a line segment, it is called a continuous sample space.

A random variable is called a discrete random variable if its set of possible outcomes is countable. The random variables in Examples 3.1 to 3.5 are discrete random variables. But a random variable whose set of possible values is an entire interval of numbers is not discrete. When a random variable can take on values
on a continuous scale, it is called a continuous random variable. Often the possible values of a continuous random variable are precisely the same values that are contained in the continuous sample space. Obviously, the random variables described in Examples 3.6 and 3.7 are continuous random variables.

In most practical problems, continuous random variables represent measured data, such as all possible heights, weights, temperatures, distances, or life periods, whereas discrete random variables represent count data, such as the number of defectives in a sample of k items or the number of highway fatalities per year in a given state. Note that the random variables Y and M of Examples 3.1 and 3.2 both represent count data, Y the number of red balls and M the number of correct hat matches.

3.2 Discrete Probability Distributions

A discrete random variable assumes each of its values with a certain probability. In the case of tossing a coin three times, the variable X, representing the number of heads, assumes the value 2 with probability 3/8, since 3 of the 8 equally likely sample points result in two heads and one tail. If one assumes equal weights for the simple events in Example 3.2, the probability that no employee gets back the right helmet, that is, the probability that M assumes the value 0, is 1/3. The possible values m of M and their probabilities are

m          0     1     3
P(M = m)  1/3   1/2   1/6

Note that the values of m exhaust all possible cases and hence the probabilities add to 1. Frequently, it is convenient to represent all the probabilities of a random variable X by a formula. Such a formula would necessarily be a function of the numerical values x that we shall denote by f(x), g(x), r(x), and so forth. Therefore, we write f(x) = P(X = x); that is, f(3) = P(X = 3).
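The helmet-matching probabilities above can be checked by brute-force enumeration. The following sketch (not from the text; the variable names are our own) lists all 3! equally likely assignments of helmets to the three employees and tallies the matches:

```python
from itertools import permutations
from fractions import Fraction

# Each of the three employees grabs one of the three helmets at random;
# M counts how many employees get their own helmet back.
counts = {}
perms = list(permutations(range(3)))
for p in perms:
    m = sum(1 for i, h in enumerate(p) if i == h)  # number of correct matches
    counts[m] = counts.get(m, 0) + 1

# pmf of M as exact fractions over the 6 equally likely permutations
pmf = {m: Fraction(c, len(perms)) for m, c in sorted(counts.items())}
print(pmf)  # M = 0, 1, 3 occur with probabilities 1/3, 1/2, 1/6
```

Note that M = 2 is impossible: if two employees have their own helmets, the third must as well, which is exactly why the table above lists only the values 0, 1, and 3.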
The set of ordered pairs (x, f(x)) is called the probability function, probability mass function, or probability distribution of the discrete random variable X.

Definition 3.4: The set of ordered pairs (x, f(x)) is a probability function, probability mass function, or probability distribution of the discrete random variable X if, for each possible outcome x,

1. f(x) ≥ 0,
2. Σ_x f(x) = 1,
3. P(X = x) = f(x).

Example 3.8: A shipment of 20 similar laptop computers to a retail outlet contains 3 that are defective. If a school makes a random purchase of 2 of these computers, find the probability distribution for the number of defectives.

Solution: Let X be a random variable whose values x are the possible numbers of defective computers purchased by the school. Then x can only take the numbers 0, 1, and
2. Now

f(0) = P(X = 0) = C(3, 0)C(17, 2)/C(20, 2) = 68/95,
f(1) = P(X = 1) = C(3, 1)C(17, 1)/C(20, 2) = 51/190,
f(2) = P(X = 2) = C(3, 2)C(17, 0)/C(20, 2) = 3/190,

where C(n, k) denotes the number of combinations of n items taken k at a time. Thus, the probability distribution of X is

x       0       1       2
f(x)  68/95  51/190  3/190

Example 3.9: If a car agency sells 50% of its inventory of a certain foreign car equipped with side airbags, find a formula for the probability distribution of the number of cars with side airbags among the next 4 cars sold by the agency.

Solution: Since the probability of selling an automobile with side airbags is 0.5, the 2⁴ = 16 points in the sample space are equally likely to occur. Therefore, the denominator for all probabilities, and also for our function, is 16. To obtain the number of ways of selling 3 cars with side airbags, we need to consider the number of ways of partitioning 4 outcomes into two cells, with 3 cars with side airbags assigned to one cell and the model without side airbags assigned to the other. This can be done in C(4, 3) = 4 ways. In general, the event of selling x models with side airbags and 4 − x models without side airbags can occur in C(4, x) ways, where x can be 0, 1, 2, 3, or 4. Thus, the probability distribution f(x) = P(X = x) is

f(x) = (1/16) C(4, x), for x = 0, 1, 2, 3, 4.

There are many problems where we may wish to compute the probability that the observed value of a random variable X will be less than or equal to some real number x. Writing F(x) = P(X ≤ x) for every real number x, we define F(x) to be the cumulative distribution function of the random variable X.

Definition 3.5: The cumulative distribution function F(x) of a discrete random variable X with probability distribution f(x) is

F(x) = P(X ≤ x) = Σ_{t ≤ x} f(t), for −∞ < x < ∞.

For the random variable M, the number of correct matches in Example 3.2, we have

F(2) = P(M ≤ 2) = f(0) + f(1) = 1/3 + 1/2 = 5/6.
The cumulative distribution function of M is

F(m) =
  0,    for m < 0,
  1/3,  for 0 ≤ m < 1,
  5/6,  for 1 ≤ m < 3,
  1,    for m ≥ 3.
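A minimal sketch of this step-function behavior (our own helper, not from the text): the CDF accumulates the pmf of M and is defined for every real number, not only the values 0, 1, and 3 that M can assume.

```python
from fractions import Fraction

# pmf of M, the number of correct helmet matches (values from the text)
pmf = {0: Fraction(1, 3), 1: Fraction(1, 2), 3: Fraction(1, 6)}

def F(m):
    """Cumulative distribution function F(m) = P(M <= m)."""
    return sum(p for value, p in pmf.items() if value <= m)

# F is a monotone nondecreasing step function defined for all real m
print(F(-0.5), F(0), F(2), F(3.7))  # 0 1/3 5/6 1
```

Evaluating F between support points (e.g., F(2) = 5/6) reproduces the plateau values in the piecewise expression above.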
One should pay particular notice to the fact that the cumulative distribution function is a monotone nondecreasing function defined not only for the values assumed by the given random variable but for all real numbers.

Example 3.10: Find the cumulative distribution function of the random variable X in Example 3.9. Using F(x), verify that f(2) = 3/8.

Solution: Direct calculations of the probability distribution of Example 3.9 give f(0) = 1/16, f(1) = 1/4, f(2) = 3/8, f(3) = 1/4, and f(4) = 1/16. Therefore,

F(0) = f(0) = 1/16,
F(1) = f(0) + f(1) = 5/16,
F(2) = f(0) + f(1) + f(2) = 11/16,
F(3) = f(0) + f(1) + f(2) + f(3) = 15/16,
F(4) = f(0) + f(1) + f(2) + f(3) + f(4) = 1.

Hence,

F(x) =
  0,      for x < 0,
  1/16,   for 0 ≤ x < 1,
  5/16,   for 1 ≤ x < 2,
  11/16,  for 2 ≤ x < 3,
  15/16,  for 3 ≤ x < 4,
  1,      for x ≥ 4.

Now

f(2) = F(2) − F(1) = 11/16 − 5/16 = 3/8.

It is often helpful to look at a probability distribution in graphic form. One might plot the points (x, f(x)) of Example 3.9 to obtain Figure 3.1. By joining the points to the x axis either with a dashed or with a solid line, we obtain a probability mass function plot. Figure 3.1 makes it easy to see what values of X are most likely to occur, and it also indicates a perfectly symmetric situation in this case. Instead of plotting the points (x, f(x)), we more frequently construct rectangles, as in Figure 3.2. Here the rectangles are constructed so that their bases of equal width are centered at each value x and their heights are equal to the corresponding probabilities given by f(x). The bases are constructed so as to leave no space between the rectangles. Figure 3.2 is called a probability histogram. Since each base in Figure 3.2 has unit width, P(X = x) is equal to the area of the rectangle centered at x.
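The bookkeeping of Example 3.10 can be verified directly. In this sketch (names are ours), the pmf f(x) = C(4, x)/16 from Example 3.9 is accumulated into the CDF, and f(2) is recovered as the jump F(2) − F(1):

```python
from fractions import Fraction
from math import comb

# pmf of Example 3.9: f(x) = C(4, x) / 16, for x = 0, 1, 2, 3, 4
f = {x: Fraction(comb(4, x), 16) for x in range(5)}

# cumulative distribution function at the support points
F = {x: sum(f[t] for t in range(x + 1)) for x in range(5)}

print(F)            # accumulates to 1/16, 5/16, 11/16, 15/16, 1
print(F[2] - F[1])  # the jump at x = 2 recovers f(2) = 3/8
assert F[2] - F[1] == f[2]
```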
Even if the bases were not of unit width, we could adjust the heights of the rectangles to give areas that would still equal the probabilities of X assuming any of its values x. This concept of using areas to represent
probabilities is necessary for our consideration of the probability distribution of a continuous random variable.

[Figure 3.1: Probability mass function plot. Figure 3.2: Probability histogram.]

The graph of the cumulative distribution function of Example 3.9, which appears as a step function in Figure 3.3, is obtained by plotting the points (x, F(x)). Certain probability distributions are applicable to more than one physical situation. The probability distribution of Example 3.9, for example, also applies to the random variable Y, where Y is the number of heads when a coin is tossed 4 times, or to the random variable W, where W is the number of red cards that occur when 4 cards are drawn at random from a deck in succession with each card replaced and the deck shuffled before the next drawing. Special discrete distributions that can be applied to many different experimental situations will be considered in Chapter 5.

[Figure 3.3: Discrete cumulative distribution function.]

3.3 Continuous Probability Distributions

A continuous random variable has a probability of 0 of assuming exactly any of its values. Consequently, its probability distribution cannot be given in tabular form.
At first this may seem startling, but it becomes more plausible when we consider a particular example. Let us discuss a random variable whose values are the heights of all people over 21 years of age. Between any two values, say 163.5 and 164.5 centimeters, or even 163.99 and 164.01 centimeters, there are an infinite number of heights, one of which is 164 centimeters. The probability of selecting a person at random who is exactly 164 centimeters tall and not one of the infinitely large set of heights so close to 164 centimeters that you cannot humanly measure the difference is remote, and thus we assign a probability of 0 to the event. This is not the case, however, if we talk about the probability of selecting a person who is at least 163 centimeters but not more than 165 centimeters tall. Now we are dealing with an interval rather than a point value of our random variable.

We shall concern ourselves with computing probabilities for various intervals of continuous random variables such as P(a < X < b), P(W ≥ c), and so forth. Note that when X is continuous,

P(a < X ≤ b) = P(a < X < b) + P(X = b) = P(a < X < b).

That is, it does not matter whether we include an endpoint of the interval or not. This is not true, though, when X is discrete.

Although the probability distribution of a continuous random variable cannot be presented in tabular form, it can be stated as a formula. Such a formula would necessarily be a function of the numerical values of the continuous random variable X and as such will be represented by the functional notation f(x). In dealing with continuous variables, f(x) is usually called the probability density function, or simply the density function, of X. Since X is defined over a continuous sample space, it is possible for f(x) to have a finite number of discontinuities.
However, most density functions that have practical applications in the analysis of statistical data are continuous and their graphs may take any of several forms, some of which are shown in Figure 3.4. Because areas will be used to represent probabilities and probabilities are positive numerical values, the density function must lie entirely above the x axis.

[Figure 3.4: Typical density functions, panels (a)–(d).]

A probability density function is constructed so that the area under its curve
bounded by the x axis is equal to 1 when computed over the range of X for which f(x) is defined. Should this range of X be a finite interval, it is always possible to extend the interval to include the entire set of real numbers by defining f(x) to be zero at all points in the extended portions of the interval. In Figure 3.5, the probability that X assumes a value between a and b is equal to the shaded area under the density function between the ordinates at x = a and x = b, and from integral calculus is given by

P(a < X < b) = ∫_a^b f(x) dx.

[Figure 3.5: P(a < X < b).]

Definition 3.6: The function f(x) is a probability density function (pdf) for the continuous random variable X, defined over the set of real numbers, if

1. f(x) ≥ 0, for all x ∈ R,
2. ∫_{−∞}^{∞} f(x) dx = 1,
3. P(a < X < b) = ∫_a^b f(x) dx.

Example 3.11: Suppose that the error in the reaction temperature, in °C, for a controlled laboratory experiment is a continuous random variable X having the probability density function

f(x) = x²/3 for −1 < x < 2, and f(x) = 0 elsewhere.

(a) Verify that f(x) is a density function.
(b) Find P(0 < X ≤ 1).

Solution: We use Definition 3.6.

(a) Obviously, f(x) ≥ 0. To verify condition 2 in Definition 3.6, we have

∫_{−∞}^{∞} f(x) dx = ∫_{−1}^{2} (x²/3) dx = x³/9 |_{−1}^{2} = 8/9 + 1/9 = 1.
(b) Using formula 3 in Definition 3.6, we obtain

P(0 < X ≤ 1) = ∫_0^1 (x²/3) dx = x³/9 |_0^1 = 1/9.

Definition 3.7: The cumulative distribution function F(x) of a continuous random variable X with density function f(x) is

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt, for −∞ < x < ∞.

As an immediate consequence of Definition 3.7, one can write the two results

P(a < X < b) = F(b) − F(a) and f(x) = dF(x)/dx,

if the derivative exists.

Example 3.12: For the density function of Example 3.11, find F(x), and use it to evaluate P(0 < X ≤ 1).

Solution: For −1 < x < 2,

F(x) = ∫_{−∞}^{x} f(t) dt = ∫_{−1}^{x} (t²/3) dt = t³/9 |_{−1}^{x} = (x³ + 1)/9.

Therefore,

F(x) =
  0,           x < −1,
  (x³ + 1)/9,  −1 ≤ x < 2,
  1,           x ≥ 2.

The cumulative distribution function F(x) is expressed in Figure 3.6. Now

P(0 < X ≤ 1) = F(1) − F(0) = 2/9 − 1/9 = 1/9,

which agrees with the result obtained by using the density function in Example 3.11.

Example 3.13: The Department of Energy (DOE) puts projects out on bid and generally estimates what a reasonable bid should be. Call the estimate b. The DOE has determined that the density function of the winning (low) bid is

f(y) = 5/(8b) for (2/5)b ≤ y ≤ 2b, and f(y) = 0 elsewhere.

Find F(y) and use it to determine the probability that the winning bid is less than the DOE's preliminary estimate b.

Solution: For 2b/5 ≤ y ≤ 2b,

F(y) = ∫_{2b/5}^{y} 5/(8b) dt = 5t/(8b) |_{2b/5}^{y} = 5y/(8b) − 1/4.
[Figure 3.6: Continuous cumulative distribution function.]

Thus,

F(y) =
  0,              y < (2/5)b,
  5y/(8b) − 1/4,  (2/5)b ≤ y < 2b,
  1,              y ≥ 2b.

To determine the probability that the winning bid is less than the preliminary bid estimate b, we have

P(Y ≤ b) = F(b) = 5/8 − 1/4 = 3/8.

Exercises

3.1 Classify the following random variables as discrete or continuous:
X: the number of automobile accidents per year in Virginia.
Y: the length of time to play 18 holes of golf.
M: the amount of milk produced yearly by a particular cow.
N: the number of eggs laid each month by a hen.
P: the number of building permits issued each month in a certain city.
Q: the weight of grain produced per acre.

3.2 An overseas shipment of 5 foreign automobiles contains 2 that have slight paint blemishes. If an agency receives 3 of these automobiles at random, list the elements of the sample space S, using the letters B and N for blemished and nonblemished, respectively; then to each sample point assign a value x of the random variable X representing the number of automobiles with paint blemishes purchased by the agency.

3.3 Let W be a random variable giving the number of heads minus the number of tails in three tosses of a coin. List the elements of the sample space S for the three tosses of the coin and to each sample point assign a value w of W.

3.4 A coin is flipped until 3 heads in succession occur. List only those elements of the sample space that require 6 or less tosses. Is this a discrete sample space? Explain.

3.5 Determine the value c so that each of the following functions can serve as a probability distribution of the discrete random variable X:
(a) f(x) = c(x² + 4), for x = 0, 1, 2, 3;
(b) f(x) = c C(2, x) C(3, 3 − x), for x = 0, 1, 2.
3.6 The shelf life, in days, for bottles of a certain prescribed medicine is a random variable having the density function

f(x) = 20,000/(x + 100)³ for x > 0, and f(x) = 0 elsewhere.

Find the probability that a bottle of this medicine will have a shelf life of
(a) at least 200 days;
(b) anywhere from 80 to 120 days.

3.7 The total number of hours, measured in units of 100 hours, that a family runs a vacuum cleaner over a period of one year is a continuous random variable X that has the density function

f(x) = x for 0 < x < 1, f(x) = 2 − x for 1 ≤ x < 2, and f(x) = 0 elsewhere.

Find the probability that over a period of one year, a family runs their vacuum cleaner
(a) less than 120 hours;
(b) between 50 and 100 hours.

3.8 Find the probability distribution of the random variable W in Exercise 3.3, assuming that the coin is biased so that a head is twice as likely to occur as a tail.

3.9 The proportion of people who respond to a certain mail-order solicitation is a continuous random variable X that has the density function

f(x) = 2(x + 2)/5 for 0 < x < 1, and f(x) = 0 elsewhere.

(a) Show that P(0 < X < 1) = 1.
(b) Find the probability that more than 1/4 but fewer than 1/2 of the people contacted will respond to this type of solicitation.

3.10 Find a formula for the probability distribution of the random variable X representing the outcome when a single die is rolled once.

3.11 A shipment of 7 television sets contains 2 defective sets. A hotel makes a random purchase of 3 of the sets. If x is the number of defective sets purchased by the hotel, find the probability distribution of X. Express the results graphically as a probability histogram.

3.12 An investment firm offers its customers municipal bonds that mature after varying numbers of years.
Given that the cumulative distribution function of T, the number of years to maturity for a randomly selected bond, is

F(t) =
  0,    t < 1,
  1/4,  1 ≤ t < 3,
  1/2,  3 ≤ t < 5,
  3/4,  5 ≤ t < 7,
  1,    t ≥ 7,

find
(a) P(T = 5);
(b) P(T > 3);
(c) P(1.4 < T < 6);
(d) P(T ≤ 5 | T ≥ 2).

3.13 The probability distribution of X, the number of imperfections per 10 meters of a synthetic fabric in continuous rolls of uniform width, is given by

x      0     1     2     3     4
f(x)  0.41  0.37  0.16  0.05  0.01

Construct the cumulative distribution function of X.

3.14 The waiting time, in hours, between successive speeders spotted by a radar unit is a continuous random variable with cumulative distribution function

F(x) = 0 for x < 0, and F(x) = 1 − e^(−8x) for x ≥ 0.

Find the probability of waiting less than 12 minutes between successive speeders
(a) using the cumulative distribution function of X;
(b) using the probability density function of X.

3.15 Find the cumulative distribution function of the random variable X representing the number of defectives in Exercise 3.11. Then using F(x), find
(a) P(X = 1);
(b) P(0 < X ≤ 2).

3.16 Construct a graph of the cumulative distribution function of Exercise 3.15.

3.17 A continuous random variable X that can assume values between x = 1 and x = 3 has a density function given by f(x) = 1/2.
(a) Show that the area under the curve is equal to 1.
(b) Find P(2 < X < 2.5).
(c) Find P(X ≤ 1.6).
3.18 A continuous random variable X that can assume values between x = 2 and x = 5 has a density function given by f(x) = 2(1 + x)/27. Find
(a) P(X < 4);
(b) P(3 ≤ X < 4).

3.19 For the density function of Exercise 3.17, find F(x). Use it to evaluate P(2 < X < 2.5).

3.20 For the density function of Exercise 3.18, find F(x), and use it to evaluate P(3 ≤ X < 4).

3.21 Consider the density function

f(x) = k√x for 0 < x < 1, and f(x) = 0 elsewhere.

(a) Evaluate k.
(b) Find F(x) and use it to evaluate P(0.3 < X < 0.6).

3.22 Three cards are drawn in succession from a deck without replacement. Find the probability distribution for the number of spades.

3.23 Find the cumulative distribution function of the random variable W in Exercise 3.8. Using F(w), find
(a) P(W > 0);
(b) P(−1 ≤ W < 3).

3.24 Find the probability distribution for the number of jazz CDs when 4 CDs are selected at random from a collection consisting of 5 jazz CDs, 2 classical CDs, and 3 rock CDs. Express your results by means of a formula.

3.25 From a box containing 4 dimes and 2 nickels, 3 coins are selected at random without replacement. Find the probability distribution for the total T of the 3 coins. Express the probability distribution graphically as a probability histogram.

3.26 From a box containing 4 black balls and 2 green balls, 3 balls are drawn in succession, each ball being replaced in the box before the next draw is made. Find the probability distribution for the number of green balls.

3.27 The time to failure in hours of an important piece of electronic equipment used in a manufactured DVD player has the density function

f(x) = (1/2000) exp(−x/2000) for x ≥ 0, and f(x) = 0 for x < 0.

(a) Find F(x).
(b) Determine the probability that the component (and thus the DVD player) lasts more than 1000 hours before the component needs to be replaced.
(c) Determine the probability that the component fails before 2000 hours.
3.28 A cereal manufacturer is aware that the weight of the product in the box varies slightly from box to box. In fact, considerable historical data have allowed the determination of the density function that describes the probability structure for the weight (in ounces). Letting X be the random variable weight, in ounces, the density function can be described as

f(x) = 2/5 for 23.75 ≤ x ≤ 26.25, and f(x) = 0 elsewhere.

(a) Verify that this is a valid density function.
(b) Determine the probability that the weight is smaller than 24 ounces.
(c) The company desires that the weight exceeding 26 ounces be an extremely rare occurrence. What is the probability that this rare occurrence does actually occur?

3.29 An important factor in solid missile fuel is the particle size distribution. Significant problems occur if the particle sizes are too large. From production data in the past, it has been determined that the particle size (in micrometers) distribution is characterized by

f(x) = 3x⁻⁴ for x > 1, and f(x) = 0 elsewhere.

(a) Verify that this is a valid density function.
(b) Evaluate F(x).
(c) What is the probability that a random particle from the manufactured fuel exceeds 4 micrometers?

3.30 Measurements of scientific systems are always subject to variation, some more than others. There are many structures for measurement error, and statisticians spend a great deal of time modeling these errors. Suppose the measurement error X of a certain physical quantity is decided by the density function

f(x) = k(3 − x²) for −1 ≤ x ≤ 1, and f(x) = 0 elsewhere.

(a) Determine k that renders f(x) a valid density function.
(b) Find the probability that a random error in measurement is less than 1/2.
(c) For this particular measurement, it is undesirable if the magnitude of the error (i.e., |x|) exceeds 0.8. What is the probability that this occurs?
3.31 Based on extensive testing, it is determined by the manufacturer of a washing machine that the time Y (in years) before a major repair is required is characterized by the probability density function

f(y) = (1/4)e^(−y/4) for y ≥ 0, and f(y) = 0 elsewhere.

(a) Critics would certainly consider the product a bargain if it is unlikely to require a major repair before the sixth year. Comment on this by determining P(Y > 6).
(b) What is the probability that a major repair occurs in the first year?

3.32 The proportion of the budget for a certain type of industrial company that is allotted to environmental and pollution control is coming under scrutiny. A data collection project determines that the distribution of these proportions is given by

f(y) = 5(1 − y)⁴ for 0 ≤ y ≤ 1, and f(y) = 0 elsewhere.

(a) Verify that the above is a valid density function.
(b) What is the probability that a company chosen at random expends less than 10% of its budget on environmental and pollution controls?
(c) What is the probability that a company selected at random spends more than 50% of its budget on environmental and pollution controls?

3.33 Suppose a certain type of small data processing firm is so specialized that some have difficulty making a profit in their first year of operation. The probability density function that characterizes the proportion Y that make a profit is given by

f(y) = ky⁴(1 − y)³ for 0 ≤ y ≤ 1, and f(y) = 0 elsewhere.

(a) What is the value of k that renders the above a valid density function?
(b) Find the probability that at most 50% of the firms make a profit in the first year.
(c) Find the probability that at least 80% of the firms make a profit in the first year.

3.34 Magnetron tubes are produced on an automated assembly line. A sampling plan is used periodically to assess quality of the lengths of the tubes. This measurement is subject to uncertainty.
It is thought that the probability that a random tube meets length specification is 0.99. A sampling plan is used in which the lengths of 5 random tubes are measured.

(a) Show that the probability function of Y, the number out of 5 that meet length specification, is given by the following discrete probability function:

f(y) = [5!/(y!(5 − y)!)] (0.99)^y (0.01)^(5−y), for y = 0, 1, 2, 3, 4, 5.

(b) Suppose random selections are made off the line and 3 are outside specifications. Use f(y) above either to support or to refute the conjecture that the probability is 0.99 that a single tube meets specifications.

3.35 Suppose it is known from large amounts of historical data that X, the number of cars that arrive at a specific intersection during a 20-second time period, is characterized by the following discrete probability function:

f(x) = e^(−6) 6^x / x!, for x = 0, 1, 2, . . . .

(a) Find the probability that in a specific 20-second time period, more than 8 cars arrive at the intersection.
(b) Find the probability that only 2 cars arrive.

3.36 On a laboratory assignment, if the equipment is working, the density function of the observed outcome, X, is

f(x) = 2(1 − x) for 0 < x < 1, and f(x) = 0 otherwise.

(a) Calculate P(X ≤ 1/3).
(b) What is the probability that X will exceed 0.5?
(c) Given that X ≥ 0.5, what is the probability that X will be less than 0.75?

3.4 Joint Probability Distributions

Our study of random variables and their probability distributions in the preceding sections is restricted to one-dimensional sample spaces, in that we recorded outcomes of an experiment as values assumed by a single random variable. There will be situations, however, where we may find it desirable to record the simultaneous outcomes of several random variables. For example, we might measure the amount of precipitate P and volume V of gas released from a controlled chemical experiment, giving rise to a two-dimensional sample space consisting of the outcomes (p, v), or we might be interested in the hardness H and tensile strength T of cold-drawn copper, resulting in the outcomes (h, t). In a study to determine the likelihood of success in college based on high school data, we might use a three-dimensional sample space and record for each individual his or her aptitude test score, high school class rank, and grade-point average at the end of freshman year in college.

If X and Y are two discrete random variables, the probability distribution for their simultaneous occurrence can be represented by a function with values f(x, y) for any pair of values (x, y) within the range of the random variables X and Y. It is customary to refer to this function as the joint probability distribution of X and Y. Hence, in the discrete case,

f(x, y) = P(X = x, Y = y);

that is, the values f(x, y) give the probability that outcomes x and y occur at the same time. For example, if an 18-wheeler is to have its tires serviced and X represents the number of miles these tires have been driven and Y represents the number of tires that need to be replaced, then f(30000, 5) is the probability that the tires are used over 30,000 miles and the truck needs 5 new tires.

Definition 3.8: The function f(x, y) is a joint probability distribution or probability mass function of the discrete random variables X and Y if

1. f(x, y) ≥ 0 for all (x, y),
2. Σ_x Σ_y f(x, y) = 1,
3. P(X = x, Y = y) = f(x, y).

For any region A in the xy plane, P[(X, Y) ∈ A] = Σ Σ_A f(x, y).

Example 3.14: Two ballpoint pens are selected at random from a box that contains 3 blue pens, 2 red pens, and 3 green pens.
If X is the number of blue pens selected and Y is the number of red pens selected, find
(a) the joint probability function f(x, y),
(b) P[(X, Y) ∈ A], where A is the region {(x, y) | x + y ≤ 1}.

Solution: The possible pairs of values (x, y) are (0, 0), (0, 1), (1, 0), (1, 1), (0, 2), and (2, 0).

(a) Now, f(0, 1), for example, represents the probability that one red and one green pen are selected. The total number of equally likely ways of selecting any 2 pens from the 8 is C(8, 2) = 28. The number of ways of selecting 1 red from 2 red pens and 1 green from 3 green pens is C(2, 1)C(3, 1) = 6. Hence, f(0, 1) = 6/28 = 3/14. Similar calculations yield the probabilities for the other cases, which are presented in Table 3.1. Note that the probabilities sum to 1. In Chapter
5, it will become clear that the joint probability distribution of Table 3.1 can be represented by the formula

f(x, y) = C(3, x) C(2, y) C(3, 2 − x − y) / C(8, 2),

for x = 0, 1, 2; y = 0, 1, 2; and 0 ≤ x + y ≤ 2.

(b) The probability that (X, Y) fall in the region A is

P[(X, Y) ∈ A] = P(X + Y ≤ 1) = f(0, 0) + f(0, 1) + f(1, 0) = 3/28 + 3/14 + 9/28 = 9/14.

Table 3.1: Joint Probability Distribution for Example 3.14

f(x, y)         x = 0   x = 1   x = 2   Row Totals
  y = 0          3/28    9/28    3/28    15/28
  y = 1          3/14    3/14    0       3/7
  y = 2          1/28    0       0       1/28
Column Totals    5/14   15/28    3/28    1

When X and Y are continuous random variables, the joint density function f(x, y) is a surface lying above the xy plane, and P[(X, Y) ∈ A], where A is any region in the xy plane, is equal to the volume of the right cylinder bounded by the base A and the surface.

Definition 3.9: The function f(x, y) is a joint density function of the continuous random variables X and Y if

1. f(x, y) ≥ 0, for all (x, y),
2. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1,
3. P[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy, for any region A in the xy plane.

Example 3.15: A privately owned business operates both a drive-in facility and a walk-in facility. On a randomly selected day, let X and Y, respectively, be the proportions of the time that the drive-in and the walk-in facilities are in use, and suppose that the joint density function of these random variables is

f(x, y) = (2/5)(2x + 3y) for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and f(x, y) = 0 elsewhere.

(a) Verify condition 2 of Definition 3.9.
(b) Find P[(X, Y) ∈ A], where A = {(x, y) | 0 < x < 1/2, 1/4 < y < 1/2}.
Solution: (a) The integration of f(x, y) over the whole region is

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = ∫_0^1 ∫_0^1 (2/5)(2x + 3y) dx dy
= ∫_0^1 [2x²/5 + 6xy/5]_{x=0}^{x=1} dy
= ∫_0^1 (2/5 + 6y/5) dy
= [2y/5 + 3y²/5]_0^1 = 2/5 + 3/5 = 1.

(b) To calculate the probability, we use

P[(X, Y) ∈ A] = P(0 < X < 1/2, 1/4 < Y < 1/2)
= ∫_{1/4}^{1/2} ∫_0^{1/2} (2/5)(2x + 3y) dx dy
= ∫_{1/4}^{1/2} [2x²/5 + 6xy/5]_{x=0}^{x=1/2} dy
= ∫_{1/4}^{1/2} (1/10 + 3y/5) dy
= [y/10 + 3y²/10]_{1/4}^{1/2}
= (1/10)[(1/2 + 3/4) − (1/4 + 3/16)] = 13/160.

Given the joint probability distribution f(x, y) of the discrete random variables X and Y, the probability distribution g(x) of X alone is obtained by summing f(x, y) over the values of Y. Similarly, the probability distribution h(y) of Y alone is obtained by summing f(x, y) over the values of X. We define g(x) and h(y) to be the marginal distributions of X and Y, respectively. When X and Y are continuous random variables, summations are replaced by integrals. We can now make the following general definition.

Definition 3.10: The marginal distributions of X alone and of Y alone are

g(x) = Σ_y f(x, y) and h(y) = Σ_x f(x, y)

for the discrete case, and

g(x) = ∫_{−∞}^{∞} f(x, y) dy and h(y) = ∫_{−∞}^{∞} f(x, y) dx

for the continuous case.

The term marginal is used here because, in the discrete case, the values of g(x) and h(y) are just the marginal totals of the respective columns and rows when the values of f(x, y) are displayed in a rectangular table.
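Definition 3.10's discrete case can be illustrated with the joint distribution of Example 3.14. In this sketch (the dictionary layout is our own, not from the text), the marginals g(x) and h(y) are obtained by summing the joint pmf over the other variable:

```python
from fractions import Fraction

# Joint pmf f(x, y) of Example 3.14 (values from Table 3.1)
F = Fraction
joint = {
    (0, 0): F(3, 28), (1, 0): F(9, 28), (2, 0): F(3, 28),
    (0, 1): F(3, 14), (1, 1): F(3, 14), (2, 1): F(0),
    (0, 2): F(1, 28), (1, 2): F(0),     (2, 2): F(0),
}

# Marginal of X: g(x) = sum over y of f(x, y); similarly h(y)
g = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in range(3)}
h = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in range(3)}

print(g)  # column totals: 5/14, 15/28, 3/28
print(h)  # row totals: 15/28, 3/7, 1/28
assert sum(g.values()) == 1 and sum(h.values()) == 1
```

The computed values reproduce the column and row totals of Table 3.1, which is exactly the claim of Example 3.16.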
Example 3.16: Show that the column and row totals of Table 3.1 give the marginal distribution of X alone and of Y alone.

Solution: For the random variable X, we see that

g(0) = f(0, 0) + f(0, 1) + f(0, 2) = 3/28 + 3/14 + 1/28 = 5/14,
g(1) = f(1, 0) + f(1, 1) + f(1, 2) = 9/28 + 3/14 + 0 = 15/28,

and

g(2) = f(2, 0) + f(2, 1) + f(2, 2) = 3/28 + 0 + 0 = 3/28,

which are just the column totals of Table 3.1. In a similar manner we could show that the values of h(y) are given by the row totals. In tabular form, these marginal distributions may be written as follows:

x      0      1      2
g(x)  5/14  15/28   3/28

y      0      1      2
h(y)  15/28   3/7   1/28

Example 3.17: Find g(x) and h(y) for the joint density function of Example 3.15.

Solution: By definition,

g(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_0^1 (2/5)(2x + 3y) dy = [4xy/5 + 6y²/10]_{y=0}^{y=1} = (4x + 3)/5,

for 0 ≤ x ≤ 1, and g(x) = 0 elsewhere. Similarly,

h(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_0^1 (2/5)(2x + 3y) dx = 2(1 + 3y)/5,

for 0 ≤ y ≤ 1, and h(y) = 0 elsewhere.

The fact that the marginal distributions g(x) and h(y) are indeed the probability distributions of the individual variables X and Y alone can be verified by showing that the conditions of Definition 3.4 or Definition 3.6 are satisfied. For example, in the continuous case

∫_{−∞}^{∞} g(x) dx = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dy dx = 1,

and

P(a < X < b) = P(a < X < b, −∞ < Y < ∞) = ∫_a^b ∫_{−∞}^{∞} f(x, y) dy dx = ∫_a^b g(x) dx.

In Section 3.1, we stated that the value x of the random variable X represents an event that is a subset of the sample space. If we use the definition of conditional probability as stated in Chapter 2,

P(B|A) = P(A ∩ B)/P(A), provided P(A) > 0,
  • 120. 3.4 Joint Probability Distributions 99 where A and B are now the events defined by X = x and Y = y, respectively, then P(Y = y | X = x) = P(X = x, Y = y) P(X = x) = f(x, y) g(x) , provided g(x) 0, where X and Y are discrete random variables. It is not difficult to show that the function f(x, y)/g(x), which is strictly a func- tion of y with x fixed, satisfies all the conditions of a probability distribution. This is also true when f(x, y) and g(x) are the joint density and marginal distribution, respectively, of continuous random variables. As a result, it is extremely important that we make use of the special type of distribution of the form f(x, y)/g(x) in order to be able to effectively compute conditional probabilities. This type of dis- tribution is called a conditional probability distribution; the formal definition follows. Definition 3.11: Let X and Y be two random variables, discrete or continuous. The conditional distribution of the random variable Y given that X = x is f(y|x) = f(x, y) g(x) , provided g(x) 0. Similarly, the conditional distribution of X given that Y = y is f(x|y) = f(x, y) h(y) , provided h(y) 0. If we wish to find the probability that the discrete random variable X falls between a and b when it is known that the discrete variable Y = y, we evaluate P(a X b | Y = y) = axb f(x|y), where the summation extends over all values of X between a and b. When X and Y are continuous, we evaluate P(a X b | Y = y) = b a f(x|y) dx. Example 3.18: Referring to Example 3.14, find the conditional distribution of X, given that Y = 1, and use it to determine P(X = 0 | Y = 1). Solution: We need to find f(x|y), where y = 1. First, we find that h(1) = 2 x=0 f(x, 1) = 3 14 + 3 14 + 0 = 3 7 . Now f(x|1) = f(x, 1) h(1) = 7 3 f(x, 1), x = 0, 1, 2.
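The marginal totals of Table 3.1 and the conditional distribution f(x|1) just introduced can be reproduced in a few lines of Python (a sketch, not part of the text; `fractions.Fraction` keeps the arithmetic exact):

```python
# Recompute the marginals of Table 3.1 and the conditional f(x|1) = f(x,1)/h(1).
from fractions import Fraction as F

# joint[x][y] = f(x, y) from Table 3.1
joint = {
    0: {0: F(3, 28), 1: F(3, 14), 2: F(1, 28)},
    1: {0: F(9, 28), 1: F(3, 14), 2: F(0)},
    2: {0: F(3, 28), 1: F(0),     2: F(0)},
}

g = {x: sum(joint[x].values()) for x in joint}               # marginal of X (column totals)
h = {y: sum(joint[x][y] for x in joint) for y in (0, 1, 2)}  # marginal of Y (row totals)

print(g[0], g[1], g[2])   # 5/14 15/28 3/28
print(h[0], h[1], h[2])   # 15/28 3/7 1/28

# conditional distribution of X given Y = 1
f_given_1 = {x: joint[x][1] / h[1] for x in joint}
print(f_given_1[0], f_given_1[1], f_given_1[2])  # 1/2 1/2 0
```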
  • 121. 100 Chapter 3 Random Variables and Probability Distributions Therefore, f(0|1) = 7 3 f(0, 1) = 7 3 3 14 = 1 2 , f(1|1) = 7 3 f(1, 1) = 7 3 3 14 = 1 2 , f(2|1) = 7 3 f(2, 1) = 7 3 (0) = 0, and the conditional distribution of X, given that Y = 1, is x 0 1 2 f(x|1) 1 2 1 2 0 Finally, P(X = 0 | Y = 1) = f(0|1) = 1 2 . Therefore, if it is known that 1 of the 2 pen refills selected is red, we have a probability equal to 1/2 that the other refill is not blue. Example 3.19: The joint density for the random variables (X, Y ), where X is the unit temperature change and Y is the proportion of spectrum shift that a certain atomic particle produces, is f(x, y) = 10xy2 , 0 x y 1, 0, elsewhere. (a) Find the marginal densities g(x), h(y), and the conditional density f(y|x). (b) Find the probability that the spectrum shifts more than half of the total observations, given that the temperature is increased by 0.25 unit. Solution: (a) By definition, g(x) = ∞ −∞ f(x, y) dy = 1 x 10xy2 dy = 10 3 xy3 y=1 y=x = 10 3 x(1 − x3 ), 0 x 1, h(y) = ∞ −∞ f(x, y) dx = y 0 10xy2 dx = 5x2 y2 x=y x=0 = 5y4 , 0 y 1. Now f(y|x) = f(x, y) g(x) = 10xy2 10 3 x(1 − x3) = 3y2 1 − x3 , 0 x y 1. (b) Therefore, P Y 1 2 X = 0.25 = 1 1/2 f(y | x = 0.25) dy = 1 1/2 3y2 1 − 0.253 dy = 8 9 . Example 3.20: Given the joint density function f(x, y) = x(1+3y2 ) 4 , 0 x 2, 0 y 1, 0, elsewhere,
  • 122. 3.4 Joint Probability Distributions 101 find g(x), h(y), f(x|y), and evaluate P(1/4 < X < 1/2 | Y = 1/3).

Solution: By definition of the marginal density, for 0 < x < 2,

g(x) = ∫_{-∞}^{∞} f(x, y) dy = ∫_0^1 x(1 + 3y^2)/4 dy = [xy/4 + xy^3/4]_{y=0}^{y=1} = x/2,

and for 0 < y < 1,

h(y) = ∫_{-∞}^{∞} f(x, y) dx = ∫_0^2 x(1 + 3y^2)/4 dx = [x^2/8 + 3x^2 y^2/8]_{x=0}^{x=2} = (1 + 3y^2)/2.

Therefore, using the conditional density definition, for 0 < x < 2,

f(x|y) = f(x, y)/h(y) = [x(1 + 3y^2)/4] / [(1 + 3y^2)/2] = x/2,

and

P(1/4 < X < 1/2 | Y = 1/3) = ∫_{1/4}^{1/2} (x/2) dx = 3/64.

Statistical Independence

If f(x|y) does not depend on y, as is the case for Example 3.20, then f(x|y) = g(x) and f(x, y) = g(x)h(y). The proof follows by substituting f(x, y) = f(x|y)h(y) into the marginal distribution of X. That is, g(x) = ∫_{-∞}^{∞} f(x, y) dy = ∫_{-∞}^{∞} f(x|y)h(y) dy. If f(x|y) does not depend on y, we may write g(x) = f(x|y) ∫_{-∞}^{∞} h(y) dy. Now ∫_{-∞}^{∞} h(y) dy = 1, since h(y) is the probability density function of Y. Therefore, g(x) = f(x|y), and then f(x, y) = g(x)h(y).
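The two conditional-probability calculations in Examples 3.19 and 3.20 lend themselves to a quick numeric sanity check (a sketch using only the densities quoted in those examples):

```python
# Midpoint-rule checks of the conditional probabilities in Examples 3.19 and 3.20.

def integrate(fn, a, b, n=20000):
    """Midpoint-rule approximation of the integral of fn over [a, b]."""
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

# Example 3.20: f(x|y) = f(x, y)/h(y) collapses to x/2 no matter what y is,
# which is exactly why X and Y turn out to be independent.
f = lambda x, y: x * (1 + 3 * y**2) / 4
h = lambda y: (1 + 3 * y**2) / 2
assert all(abs(f(0.8, y) / h(y) - 0.4) < 1e-12 for y in (0.1, 0.5, 0.9))

p20 = integrate(lambda x: f(x, 1/3) / h(1/3), 0.25, 0.5)
print(round(p20, 6))  # 3/64 = 0.046875

# Example 3.19: f(y|x) = 3y^2/(1 - x^3); P(Y > 1/2 | X = 0.25) should be 8/9.
p19 = integrate(lambda y: 3 * y**2 / (1 - 0.25**3), 0.5, 1.0)
print(round(p19, 6))  # ≈ 0.888889
```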
  • 123. 102 Chapter 3 Random Variables and Probability Distributions It should make sense to the reader that if f(x|y) does not depend on y, then of course the outcome of the random variable Y has no impact on the outcome of the random variable X. In other words, we say that X and Y are independent random variables. We now offer the following formal definition of statistical independence. Definition 3.12: Let X and Y be two random variables, discrete or continuous, with joint proba- bility distribution f(x, y) and marginal distributions g(x) and h(y), respectively. The random variables X and Y are said to be statistically independent if and only if f(x, y) = g(x)h(y) for all (x, y) within their range. The continuous random variables of Example 3.20 are statistically indepen- dent, since the product of the two marginal distributions gives the joint density function. This is obviously not the case, however, for the continuous variables of Example 3.19. Checking for statistical independence of discrete random variables requires a more thorough investigation, since it is possible to have the product of the marginal distributions equal to the joint probability distribution for some but not all combinations of (x, y). If you can find any point (x, y) for which f(x, y) is defined such that f(x, y) = g(x)h(y), the discrete variables X and Y are not statistically independent. Example 3.21: Show that the random variables of Example 3.14 are not statistically independent. Proof: Let us consider the point (0, 1). From Table 3.1 we find the three probabilities f(0, 1), g(0), and h(1) to be f(0, 1) = 3 14 , g(0) = 2 y=0 f(0, y) = 3 28 + 3 14 + 1 28 = 5 14 , h(1) = 2 x=0 f(x, 1) = 3 14 + 3 14 + 0 = 3 7 . Clearly, f(0, 1) = g(0)h(1), and therefore X and Y are not statistically independent. All the preceding definitions concerning two random variables can be general- ized to the case of n random variables. Let f(x1, x2, . . . 
, xn) be the joint probability function of the random variables X1, X2, . . . , Xn. The marginal distribution of X1, for example, is g(x1) = Σ_{x2} · · · Σ_{xn} f(x1, x2, . . . , xn)
  • 124. 3.4 Joint Probability Distributions 103 for the discrete case, and g(x1) = ∞ −∞ · · · ∞ −∞ f(x1, x2, . . . , xn) dx2 dx3 · · · dxn for the continuous case. We can now obtain joint marginal distributions such as g(x1, x2), where g(x1, x2) = ⎧ ⎨ ⎩ x3 · · · xn f(x1, x2, . . . , xn) (discrete case), ∞ −∞ · · · ∞ −∞ f(x1, x2, . . . , xn) dx3 dx4 · · · dxn (continuous case). We could consider numerous conditional distributions. For example, the joint con- ditional distribution of X1, X2, and X3, given that X4 = x4, X5 = x5, . . . , Xn = xn, is written f(x1, x2, x3 | x4, x5, . . . , xn) = f(x1, x2, . . . , xn) g(x4, x5, . . . , xn) , where g(x4, x5, . . . , xn) is the joint marginal distribution of the random variables X4, X5, . . . , Xn. A generalization of Definition 3.12 leads to the following definition for the mu- tual statistical independence of the variables X1, X2, . . . , Xn. Definition 3.13: Let X1, X2, . . . , Xn be n random variables, discrete or continuous, with joint probability distribution f(x1, x2, . . . , xn) and marginal distribution f1(x1), f2(x2), . . . , fn(xn), respectively. The random variables X1, X2, . . . , Xn are said to be mutually statistically independent if and only if f(x1, x2, . . . , xn) = f1(x1)f2(x2) · · · fn(xn) for all (x1, x2, . . . , xn) within their range. Example 3.22: Suppose that the shelf life, in years, of a certain perishable food product packaged in cardboard containers is a random variable whose probability density function is given by f(x) = e−x , x 0, 0, elsewhere. Let X1, X2, and X3 represent the shelf lives for three of these containers selected independently and find P(X1 2, 1 X2 3, X3 2). Solution: Since the containers were selected independently, we can assume that the random variables X1, X2, and X3 are statistically independent, having the joint probability density f(x1, x2, x3) = f(x1)f(x2)f(x3) = e−x1 e−x2 e−x3 = e−x1−x2−x3 , for x1 0, x2 0, x3 0, and f(x1, x2, x3) = 0 elsewhere. 
Hence P(X1 < 2, 1 < X2 < 3, X3 > 2) = ∫_2^∞ ∫_1^3 ∫_0^2 e^{-x1-x2-x3} dx1 dx2 dx3 = (1 − e^{-2})(e^{-1} − e^{-3}) e^{-2} = 0.0372.
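Both independence results above are easy to verify in code (a sketch, not part of the text): Example 3.21's table fails the product test at the single point (0, 1), while Example 3.22's product of exponential factors reproduces 0.0372.

```python
# Checks for Examples 3.21 and 3.22.
from fractions import Fraction as F
from math import exp

# Example 3.21: compare f(0, 1) with g(0)h(1) from Table 3.1.
f01, g0, h1 = F(3, 14), F(5, 14), F(3, 7)
print(f01, g0 * h1)   # 3/14 versus 15/98 -> not equal, so not independent

# Example 3.22: independent Exp(1) shelf lives, CDF F(x) = 1 - e^{-x};
# by independence the joint probability is a product of three factors.
cdf = lambda x: 1 - exp(-x)
p = cdf(2) * (cdf(3) - cdf(1)) * (1 - cdf(2))
print(round(p, 4))    # 0.0372
```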
  • 125. / / 104 Chapter 3 Random Variables and Probability Distributions What Are Important Characteristics of Probability Distributions and Where Do They Come From? This is an important point in the text to provide the reader with a transition into the next three chapters. We have given illustrations in both examples and exercises of practical scientific and engineering situations in which probability distributions and their properties are used to solve important problems. These probability dis- tributions, either discrete or continuous, were introduced through phrases like “it is known that” or “suppose that” or even in some cases “historical evidence sug- gests that.” These are situations in which the nature of the distribution and even a good estimate of the probability structure can be determined through historical data, data from long-term studies, or even large amounts of planned data. The reader should remember the discussion of the use of histograms in Chapter 1 and from that recall how frequency distributions are estimated from the histograms. However, not all probability functions and probability density functions are derived from large amounts of historical data. There are a substantial number of situa- tions in which the nature of the scientific scenario suggests a distribution type. Indeed, many of these are reflected in exercises in both Chapter 2 and this chap- ter. When independent repeated observations are binary in nature (e.g., defective or not, survive or not, allergic or not) with value 0 or 1, the distribution covering this situation is called the binomial distribution and the probability function is known and will be demonstrated in its generality in Chapter 5. Exercise 3.34 in Section 3.3 and Review Exercise 3.80 are examples, and there are others that the reader should recognize. 
The scenario of a continuous distribution in time to failure, as in Review Exercise 3.69 or Exercise 3.27 on page 93, often suggests a dis- tribution type called the exponential distribution. These types of illustrations are merely two of many so-called standard distributions that are used extensively in real-world problems because the scientific scenario that gives rise to each of them is recognizable and occurs often in practice. Chapters 5 and 6 cover many of these types along with some underlying theory concerning their use. A second part of this transition to material in future chapters deals with the notion of population parameters or distributional parameters. Recall in Chapter 1 we discussed the need to use data to provide information about these parameters. We went to some length in discussing the notions of a mean and variance and provided a vision for the concepts in the context of a population. Indeed, the population mean and variance are easily found from the probability function for the discrete case or probability density function for the continuous case. These parameters and their importance in the solution of many types of real-world problems will provide much of the material in Chapters 8 through 17. Exercises 3.37 Determine the values of c so that the follow- ing functions represent joint probability distributions of the random variables X and Y : (a) f(x, y) = cxy, for x = 1, 2, 3; y = 1, 2, 3; (b) f(x, y) = c|x − y|, for x = −2, 0, 2; y = −2, 3. 3.38 If the joint probability distribution of X and Y is given by f(x, y) = x + y 30 , for x = 0, 1, 2, 3; y = 0, 1, 2, find
  • 126. / / Exercises 105 (a) P(X ≤ 2, Y = 1); (b) P(X 2, Y ≤ 1); (c) P(X Y ); (d) P(X + Y = 4). 3.39 From a sack of fruit containing 3 oranges, 2 ap- ples, and 3 bananas, a random sample of 4 pieces of fruit is selected. If X is the number of oranges and Y is the number of apples in the sample, find (a) the joint probability distribution of X and Y ; (b) P[(X, Y ) ∈ A], where A is the region that is given by {(x, y) | x + y ≤ 2}. 3.40 A fast-food restaurant operates both a drive- through facility and a walk-in facility. On a randomly selected day, let X and Y , respectively, be the propor- tions of the time that the drive-through and walk-in facilities are in use, and suppose that the joint density function of these random variables is f(x, y) = 2 3 (x + 2y), 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0, elsewhere. (a) Find the marginal density of X. (b) Find the marginal density of Y . (c) Find the probability that the drive-through facility is busy less than one-half of the time. 3.41 A candy company distributes boxes of choco- lates with a mixture of creams, toffees, and cordials. Suppose that the weight of each box is 1 kilogram, but the individual weights of the creams, toffees, and cor- dials vary from box to box. For a randomly selected box, let X and Y represent the weights of the creams and the toffees, respectively, and suppose that the joint density function of these variables is f(x, y) = 24xy, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 1, 0, elsewhere. (a) Find the probability that in a given box the cordials account for more than 1/2 of the weight. (b) Find the marginal density for the weight of the creams. (c) Find the probability that the weight of the toffees in a box is less than 1/8 of a kilogram if it is known that creams constitute 3/4 of the weight. 3.42 Let X and Y denote the lengths of life, in years, of two components in an electronic system. If the joint density function of these variables is f(x, y) = e−(x+y) , x 0, y 0, 0, elsewhere, find P(0 X 1 | Y = 2). 
3.43 Let X denote the reaction time, in seconds, to a certain stimulus and Y denote the temperature (◦ F) at which a certain reaction starts to take place. Sup- pose that two random variables X and Y have the joint density f(x, y) = 4xy, 0 x 1, 0 y 1, 0, elsewhere. Find (a) P(0 ≤ X ≤ 1 2 and 1 4 ≤ Y ≤ 1 2 ); (b) P(X Y ). 3.44 Each rear tire on an experimental airplane is supposed to be filled to a pressure of 40 pounds per square inch (psi). Let X denote the actual air pressure for the right tire and Y denote the actual air pressure for the left tire. Suppose that X and Y are random variables with the joint density function f(x, y) = k(x2 + y2 ), 30 ≤ x 50, 30 ≤ y 50, 0, elsewhere. (a) Find k. (b) Find P(30 ≤ X ≤ 40 and 40 ≤ Y 50). (c) Find the probability that both tires are underfilled. 3.45 Let X denote the diameter of an armored elec- tric cable and Y denote the diameter of the ceramic mold that makes the cable. Both X and Y are scaled so that they range between 0 and 1. Suppose that X and Y have the joint density f(x, y) = 1 y , 0 x y 1, 0, elsewhere. Find P(X + Y 1/2). 3.46 Referring to Exercise 3.38, find (a) the marginal distribution of X; (b) the marginal distribution of Y . 3.47 The amount of kerosene, in thousands of liters, in a tank at the beginning of any day is a random amount Y from which a random amount X is sold dur- ing that day. Suppose that the tank is not resupplied during the day so that x ≤ y, and assume that the joint density function of these variables is f(x, y) = 2, 0 x ≤ y 1, 0, elsewhere. (a) Determine if X and Y are independent.
  • 127. / / 106 Chapter 3 Random Variables and Probability Distributions (b) Find P(1/4 X 1/2 | Y = 3/4). 3.48 Referring to Exercise 3.39, find (a) f(y|2) for all values of y; (b) P(Y = 0 | X = 2). 3.49 Let X denote the number of times a certain nu- merical control machine will malfunction: 1, 2, or 3 times on any given day. Let Y denote the number of times a technician is called on an emergency call. Their joint probability distribution is given as x f(x, y) 1 2 3 y 1 3 5 0.05 0.05 0.00 0.05 0.10 0.20 0.10 0.35 0.10 (a) Evaluate the marginal distribution of X. (b) Evaluate the marginal distribution of Y . (c) Find P(Y = 3 | X = 2). 3.50 Suppose that X and Y have the following joint probability distribution: x f(x, y) 2 4 1 0.10 0.15 y 3 0.20 0.30 5 0.10 0.15 (a) Find the marginal distribution of X. (b) Find the marginal distribution of Y . 3.51 Three cards are drawn without replacement from the 12 face cards (jacks, queens, and kings) of an ordinary deck of 52 playing cards. Let X be the number of kings selected and Y the number of jacks. Find (a) the joint probability distribution of X and Y ; (b) P[(X, Y ) ∈ A], where A is the region given by {(x, y) | x + y ≥ 2}. 3.52 A coin is tossed twice. Let Z denote the number of heads on the first toss and W the total number of heads on the 2 tosses. If the coin is unbalanced and a head has a 40% chance of occurring, find (a) the joint probability distribution of W and Z; (b) the marginal distribution of W; (c) the marginal distribution of Z; (d) the probability that at least 1 head occurs. 3.53 Given the joint density function f(x, y) = 6−x−y 8 , 0 x 2, 2 y 4, 0, elsewhere, find P(1 Y 3 | X = 1). 3.54 Determine whether the two random variables of Exercise 3.49 are dependent or independent. 3.55 Determine whether the two random variables of Exercise 3.50 are dependent or independent. 3.56 The joint density function of the random vari- ables X and Y is f(x, y) = 6x, 0 x 1, 0 y 1 − x, 0, elsewhere. 
(a) Show that X and Y are not independent. (b) Find P(X 0.3 | Y = 0.5). 3.57 Let X, Y , and Z have the joint probability den- sity function f(x, y, z) = kxy2 z, 0 x, y 1, 0 z 2, 0, elsewhere. (a) Find k. (b) Find P(X 1 4 , Y 1 2 , 1 Z 2). 3.58 Determine whether the two random variables of Exercise 3.43 are dependent or independent. 3.59 Determine whether the two random variables of Exercise 3.44 are dependent or independent. 3.60 The joint probability density function of the ran- dom variables X, Y , and Z is f(x, y, z) = 4xyz2 9 , 0 x, y 1, 0 z 3, 0, elsewhere. Find (a) the joint marginal density function of Y and Z; (b) the marginal density of Y ; (c) P(1 4 X 1 2 , Y 1 3 , 1 Z 2); (d) P(0 X 1 2 | Y = 1 4 , Z = 2).
  • 128. / / Review Exercises 107 Review Exercises 3.61 A tobacco company produces blends of tobacco, with each blend containing various proportions of Turkish, domestic, and other tobaccos. The propor- tions of Turkish and domestic in a blend are random variables with joint density function (X = Turkish and Y = domestic) f(x, y) = 24xy, 0 ≤ x, y ≤ 1, x + y ≤ 1, 0, elsewhere. (a) Find the probability that in a given box the Turkish tobacco accounts for over half the blend. (b) Find the marginal density function for the propor- tion of the domestic tobacco. (c) Find the probability that the proportion of Turk- ish tobacco is less than 1/8 if it is known that the blend contains 3/4 domestic tobacco. 3.62 An insurance company offers its policyholders a number of different premium payment options. For a randomly selected policyholder, let X be the number of months between successive payments. The cumulative distribution function of X is F(x) = ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ 0, if x 1, 0.4, if 1 ≤ x 3, 0.6, if 3 ≤ x 5, 0.8, if 5 ≤ x 7, 1.0, if x ≥ 7. (a) What is the probability mass function of X? (b) Compute P(4 X ≤ 7). 3.63 Two electronic components of a missile system work in harmony for the success of the total system. Let X and Y denote the life in hours of the two com- ponents. The joint density of X and Y is f(x, y) = ye−y(1+x) , x, y ≥ 0, 0, elsewhere. (a) Give the marginal density functions for both ran- dom variables. (b) What is the probability that the lives of both com- ponents will exceed 2 hours? 3.64 A service facility operates with two service lines. On a randomly selected day, let X be the proportion of time that the first line is in use whereas Y is the pro- portion of time that the second line is in use. Suppose that the joint probability density function for (X, Y ) is f(x, y) = 3 2 (x2 + y2 ), 0 ≤ x, y ≤ 1, 0, elsewhere. (a) Compute the probability that neither line is busy more than half the time. 
(b) Find the probability that the first line is busy more than 75% of the time. 3.65 Let the number of phone calls received by a switchboard during a 5-minute interval be a random variable X with probability function f(x) = e^{-2} 2^x / x!, for x = 0, 1, 2, . . . . (a) Determine the probability that X equals 0, 1, 2, 3, 4, 5, and 6. (b) Graph the probability mass function for these values of x. (c) Determine the cumulative distribution function for these values of X. 3.66 Consider the random variables X and Y with joint density function f(x, y) = x + y, 0 ≤ x, y ≤ 1, 0, elsewhere. (a) Find the marginal distributions of X and Y. (b) Find P(X > 0.5, Y > 0.5). 3.67 An industrial process manufactures items that can be classified as either defective or not defective. The probability that an item is defective is 0.1. An experiment is conducted in which 5 items are drawn randomly from the process. Let the random variable X be the number of defectives in this sample of 5. What is the probability mass function of X? 3.68 Consider the following joint probability density function of the random variables X and Y: f(x, y) = (3x − y)/9, 1 < x < 3, 1 < y < 2, 0, elsewhere. (a) Find the marginal density functions of X and Y. (b) Are X and Y independent? (c) Find P(X > 2). 3.69 The life span in hours of an electrical component is a random variable with cumulative distribution function F(x) = 1 − e^{-x/50}, x > 0, 0, elsewhere.
  • 129. / / 108 Chapter 3 Random Variables and Probability Distributions (a) Determine its probability density function. (b) Determine the probability that the life span of such a component will exceed 70 hours. 3.70 Pairs of pants are being produced by a particu- lar outlet facility. The pants are checked by a group of 10 workers. The workers inspect pairs of pants taken randomly from the production line. Each inspector is assigned a number from 1 through 10. A buyer selects a pair of pants for purchase. Let the random variable X be the inspector number. (a) Give a reasonable probability mass function for X. (b) Plot the cumulative distribution function for X. 3.71 The shelf life of a product is a random variable that is related to consumer acceptance. It turns out that the shelf life Y in days of a certain type of bakery product has a density function f(y) = 1 2 e−y/2 , 0 ≤ y ∞, 0, elsewhere. What fraction of the loaves of this product stocked to- day would you expect to be sellable 3 days from now? 3.72 Passenger congestion is a service problem in air- ports. Trains are installed within the airport to reduce the congestion. With the use of the train, the time X in minutes that it takes to travel from the main terminal to a particular concourse has density function f(x) = 1 10 , 0 ≤ x ≤ 10, 0, elsewhere. (a) Show that the above is a valid probability density function. (b) Find the probability that the time it takes a pas- senger to travel from the main terminal to the con- course will not exceed 7 minutes. 3.73 Impurities in a batch of final product of a chem- ical process often reflect a serious problem. From con- siderable plant data gathered, it is known that the pro- portion Y of impurities in a batch has a density func- tion given by f(y) = 10(1 − y)9 , 0 ≤ y ≤ 1, 0, elsewhere. (a) Verify that the above is a valid density function. (b) A batch is considered not sellable and then not acceptable if the percentage of impurities exceeds 60%. 
With the current quality of the process, what is the percentage of batches that are not acceptable? 3.74 The time Z in minutes between calls to an elec- trical supply system has the probability density func- tion f(z) = 1 10 e−z/10 , 0 z ∞, 0, elsewhere. (a) What is the probability that there are no calls within a 20-minute time interval? (b) What is the probability that the first call comes within 10 minutes of opening? 3.75 A chemical system that results from a chemical reaction has two important components among others in a blend. The joint distribution describing the pro- portions X1 and X2 of these two components is given by f(x1, x2) = 2, 0 x1 x2 1, 0, elsewhere. (a) Give the marginal distribution of X1. (b) Give the marginal distribution of X2. (c) What is the probability that component propor- tions produce the results X1 0.2 and X2 0.5? (d) Give the conditional distribution fX1|X2 (x1|x2). 3.76 Consider the situation of Review Exercise 3.75. But suppose the joint distribution of the two propor- tions is given by f(x1, x2) = 6x2, 0 x2 x1 1, 0, elsewhere. (a) Give the marginal distribution fX1 (x1) of the pro- portion X1 and verify that it is a valid density function. (b) What is the probability that proportion X2 is less than 0.5, given that X1 is 0.7? 3.77 Consider the random variables X and Y that represent the number of vehicles that arrive at two sep- arate street corners during a certain 2-minute period. These street corners are fairly close together so it is im- portant that traffic engineers deal with them jointly if necessary. The joint distribution of X and Y is known to be f(x, y) = 9 16 · 1 4(x+y) , for x = 0, 1, 2, . . . and y = 0, 1, 2, . . . . (a) Are the two random variables X and Y indepen- dent? Explain why or why not. (b) What is the probability that during the time pe- riod in question less than 4 vehicles arrive at the two street corners?
  • 130. 3.5 Potential Misconceptions and Hazards 109 3.78 The behavior of series of components plays a huge role in scientific and engineering reliability prob- lems. The reliability of the entire system is certainly no better than that of the weakest component in the series. In a series system, the components operate in- dependently of each other. In a particular system con- taining three components, the probabilities of meeting specifications for components 1, 2, and 3, respectively, are 0.95, 0.99, and 0.92. What is the probability that the entire system works? 3.79 Another type of system that is employed in en- gineering work is a group of parallel components or a parallel system. In this more conservative approach, the probability that the system operates is larger than the probability that any component operates. The sys- tem fails only when all components fail. Consider a sit- uation in which there are 4 independent components in a parallel system with probability of operation given by Component 1: 0.95; Component 2: 0.94; Component 3: 0.90; Component 4: 0.97. What is the probability that the system does not fail? 3.80 Consider a system of components in which there are 5 independent components, each of which possesses an operational probability of 0.92. The system does have a redundancy built in such that it does not fail if 3 out of the 5 components are operational. What is the probability that the total system is operational? 3.81 Project: Take 5 class periods to observe the shoe color of individuals in class. Assume the shoe color categories are red, white, black, brown, and other. Complete a frequency table for each color category. (a) Estimate and interpret the meaning of the proba- bility distribution. (b) What is the estimated probability that in the next class period a randomly selected student will be wearing a red or a white pair of shoes? 
3.5 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters In future chapters it will become apparent that probability distributions represent the structure through which probabilities that are computed aid in the evalua- tion and understanding of a process. For example, in Review Exercise 3.65, the probability distribution that quantifies the probability of a heavy load during cer- tain time periods can be very useful in planning for any changes in the system. Review Exercise 3.69 describes a scenario in which the life span of an electronic component is studied. Knowledge of the probability structure for the component will contribute significantly to an understanding of the reliability of a large system of which the component is a part. In addition, an understanding of the general nature of probability distributions will enhance understanding of the concept of a P-value, which was introduced briefly in Chapter 1 and will play a major role beginning in Chapter 10 and extending throughout the balance of the text. Chapters 4, 5, and 6 depend heavily on the material in this chapter. In Chapter 4, we discuss the meaning of important parameters in probability distributions. These important parameters quantify notions of central tendency and variabil- ity in a system. In fact, knowledge of these quantities themselves, quite apart from the complete distribution, can provide insight into the nature of the system. Chapters 5 and 6 will deal with engineering, biological, or general scientific scenar- ios that identify special types of distributions. For example, the structure of the probability function in Review Exercise 3.65 will easily be identified under certain assumptions discussed in Chapter 5. The same holds for the scenario of Review Exercise 3.69. This is a special type of time to failure problem for which the probability density function will be discussed in Chapter 6.
  • 131. 110 Chapter 3 Random Variables and Probability Distributions As far as potential hazards with the use of material in this chapter, the warning to the reader is not to read more into the material than is evident. The general nature of the probability distribution for a specific scientific phenomenon is not obvious from what is learned in this chapter. The purpose of this chapter is for readers to learn how to manipulate a probability distribution, not to learn how to identify a specific type. Chapters 5 and 6 go a long way toward identification according to the general nature of the scientific system.
  • 132. Chapter 4 Mathematical Expectation 4.1 Mean of a Random Variable In Chapter 1, we discussed the sample mean, which is the arithmetic mean of the data. Now consider the following. If two coins are tossed 16 times and X is the number of heads that occur per toss, then the values of X are 0, 1, and 2. Suppose that the experiment yields no heads, one head, and two heads a total of 4, 7, and 5 times, respectively. The average number of heads per toss of the two coins is then [(0)(4) + (1)(7) + (2)(5)]/16 = 1.06. This is an average value of the data and yet it is not a possible outcome of {0, 1, 2}. Hence, an average is not necessarily a possible outcome for the experiment. For instance, a salesman’s average monthly income is not likely to be equal to any of his monthly paychecks. Let us now restructure our computation for the average number of heads so as to have the following equivalent form: (0)(4/16) + (1)(7/16) + (2)(5/16) = 1.06. The numbers 4/16, 7/16, and 5/16 are the fractions of the total tosses resulting in 0, 1, and 2 heads, respectively. These fractions are also the relative frequencies for the different values of X in our experiment. In fact, then, we can calculate the mean, or average, of a set of data by knowing the distinct values that occur and their relative frequencies, without any knowledge of the total number of observations in our set of data. Therefore, if 4/16, or 1/4, of the tosses result in no heads, 7/16 of the tosses result in one head, and 5/16 of the tosses result in two heads, the mean number of heads per toss would be 1.06 no matter whether the total number of tosses were 16, 1000, or even 10,000. This method of relative frequencies is used to calculate the average number of heads per toss of two coins that we might expect in the long run. We shall refer to this average value as the mean of the random variable X or the mean of the probability distribution of X and write it as μX or simply as μ when it is
  • 133. 112 Chapter 4 Mathematical Expectation clear to which random variable we refer. It is also common among statisticians to refer to this mean as the mathematical expectation, or the expected value of the random variable X, and denote it as E(X). Assuming that 1 fair coin was tossed twice, we find that the sample space for our experiment is S = {HH, HT, TH, TT}. Since the 4 sample points are all equally likely, it follows that P(X = 0) = P(TT) = 1 4 , P(X = 1) = P(TH) + P(HT) = 1 2 , and P(X = 2) = P(HH) = 1 4 , where a typical element, say TH, indicates that the first toss resulted in a tail followed by a head on the second toss. Now, these probabilities are just the relative frequencies for the given events in the long run. Therefore, μ = E(X) = (0) 1 4 + (1) 1 2 + (2) 1 4 = 1. This result means that a person who tosses 2 coins over and over again will, on the average, get 1 head per toss. The method described above for calculating the expected number of heads per toss of 2 coins suggests that the mean, or expected value, of any discrete random variable may be obtained by multiplying each of the values x1, x2, . . . , xn of the random variable X by its corresponding probability f(x1), f(x2), . . . , f(xn) and summing the products. This is true, however, only if the random variable is discrete. In the case of continuous random variables, the definition of an expected value is essentially the same with summations replaced by integrations. Definition 4.1: Let X be a random variable with probability distribution f(x). The mean, or expected value, of X is μ = E(X) = x xf(x) if X is discrete, and μ = E(X) = ∞ −∞ xf(x) dx if X is continuous. The reader should note that the way to calculate the expected value, or mean, shown here is different from the way to calculate the sample mean described in Chapter 1, where the sample mean is obtained by using data. In mathematical expectation, the expected value is calculated by using the probability distribution.
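Definition 4.1 applied to the two-coin example can be sketched in a few lines (an illustration, not part of the text; exact arithmetic via `fractions.Fraction`):

```python
# E(X) for the number of heads in two tosses of a fair coin:
# f(0) = 1/4, f(1) = 1/2, f(2) = 1/4, so the mean is sum of x*f(x).
from fractions import Fraction as F

pmf = {0: F(1, 4), 1: F(1, 2), 2: F(1, 4)}
mu = sum(x * p for x, p in pmf.items())
print(mu)  # 1
```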
However, the mean is usually understood as a "center" value of the underlying distribution if we use the expected value, as in Definition 4.1.

Example 4.1: A lot containing 7 components is sampled by a quality inspector; the lot contains 4 good components and 3 defective components. A sample of 3 is taken by the inspector. Find the expected value of the number of good components in this sample.

Solution: Let X represent the number of good components in the sample. The probability distribution of X is

$$f(x) = \frac{\binom{4}{x}\binom{3}{3-x}}{\binom{7}{3}}, \quad x = 0, 1, 2, 3.$$

Simple calculations yield f(0) = 1/35, f(1) = 12/35, f(2) = 18/35, and f(3) = 4/35. Therefore,

$$\mu = E(X) = (0)\frac{1}{35} + (1)\frac{12}{35} + (2)\frac{18}{35} + (3)\frac{4}{35} = \frac{12}{7} = 1.7.$$

Thus, if a sample of size 3 is selected at random over and over again from a lot of 4 good components and 3 defective components, it will contain, on average, 1.7 good components.

Example 4.2: A salesperson for a medical device company has two appointments on a given day. At the first appointment, he believes that he has a 70% chance to make the deal, from which he can earn $1000 commission if successful. On the other hand, he thinks he only has a 40% chance to make the deal at the second appointment, from which, if successful, he can make $1500. What is his expected commission based on his own probability belief? Assume that the appointment results are independent of each other.

Solution: First, we know that the salesperson, for the two appointments, can have 4 possible commission totals: $0, $1000, $1500, and $2500. We then need to calculate their associated probabilities. By independence, we obtain

f($0) = (1 − 0.7)(1 − 0.4) = 0.18, f($2500) = (0.7)(0.4) = 0.28,
f($1000) = (0.7)(1 − 0.4) = 0.42, and f($1500) = (1 − 0.7)(0.4) = 0.12.

Therefore, the expected commission for the salesperson is

E(X) = ($0)(0.18) + ($1000)(0.42) + ($1500)(0.12) + ($2500)(0.28) = $1300.
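The distribution in Example 4.1 is hypergeometric, so both the probabilities and the mean can be checked programmatically. A minimal Python sketch (not from the text), using `math.comb` for the binomial coefficients:

```python
from fractions import Fraction
from math import comb

# f(x) = C(4, x) * C(3, 3 - x) / C(7, 3), x = 0, 1, 2, 3  (Example 4.1).
f = {x: Fraction(comb(4, x) * comb(3, 3 - x), comb(7, 3)) for x in range(4)}

mu = sum(x * p for x, p in f.items())
print(f, mu)  # f(0..3) = 1/35, 12/35, 18/35, 4/35; mu = 12/7
```

The exact value 12/7 rounds to 1.7, matching the text.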
Examples 4.1 and 4.2 are designed to allow the reader to gain some insight into what we mean by the expected value of a random variable. In both cases the random variables are discrete. We follow with an example involving a continuous random variable, where an engineer is interested in the mean life of a certain type of electronic device. This is an illustration of a time to failure problem that occurs often in practice. The expected value of the life of a device is an important parameter for its evaluation.
Example 4.3: Let X be the random variable that denotes the life in hours of a certain electronic device. The probability density function is

$$f(x) = \begin{cases} \dfrac{20{,}000}{x^3}, & x > 100,\\[4pt] 0, & \text{elsewhere.} \end{cases}$$

Find the expected life of this type of device.

Solution: Using Definition 4.1, we have

$$\mu = E(X) = \int_{100}^{\infty} x\,\frac{20{,}000}{x^3}\,dx = \int_{100}^{\infty} \frac{20{,}000}{x^2}\,dx = 200.$$

Therefore, we can expect this type of device to last, on average, 200 hours.

Now let us consider a new random variable g(X), which depends on X; that is, each value of g(X) is determined by the value of X. For instance, g(X) might be X² or 3X − 1, and whenever X assumes the value 2, g(X) assumes the value g(2). In particular, if X is a discrete random variable with probability distribution f(x), for x = −1, 0, 1, 2, and g(X) = X², then

P[g(X) = 0] = P(X = 0) = f(0),
P[g(X) = 1] = P(X = −1) + P(X = 1) = f(−1) + f(1),
P[g(X) = 4] = P(X = 2) = f(2),

and so the probability distribution of g(X) may be written

g(x): 0, 1, 4
P[g(X) = g(x)]: f(0), f(−1) + f(1), f(2)

By the definition of the expected value of a random variable, we obtain

$$\mu_{g(X)} = E[g(X)] = 0f(0) + 1[f(-1) + f(1)] + 4f(2) = (-1)^2 f(-1) + (0)^2 f(0) + (1)^2 f(1) + (2)^2 f(2) = \sum_x g(x)f(x).$$

This result is generalized in Theorem 4.1 for both discrete and continuous random variables.

Theorem 4.1: Let X be a random variable with probability distribution f(x). The expected value of the random variable g(X) is

$$\mu_{g(X)} = E[g(X)] = \sum_x g(x)f(x)$$

if X is discrete, and

$$\mu_{g(X)} = E[g(X)] = \int_{-\infty}^{\infty} g(x)f(x)\,dx$$

if X is continuous.
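The integral in Example 4.3 can also be confirmed numerically. The sketch below (ours, not the book's) applies a midpoint rule on a truncated interval; the tail beyond the cutoff contributes only about 20,000/10⁷ = 0.002, so the truncation error is negligible:

```python
# Numerically check E(X) = integral from 100 to infinity of x * 20000/x^3 dx = 200.
def integrand(x):
    return x * 20_000 / x**3  # simplifies to 20000 / x^2

# Midpoint rule on [100, 10**7]; the neglected tail is ~0.002.
a, b, n = 100.0, 1e7, 1_000_000
h = (b - a) / n
total = sum(integrand(a + (i + 0.5) * h) for i in range(n)) * h
print(round(total))  # 200
```

For heavy-tailed densities like this one, checking how much the answer moves as the upper limit grows is a cheap sanity test that the expectation actually converges.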
Example 4.4: Suppose that the number of cars X that pass through a car wash between 4:00 P.M. and 5:00 P.M. on any sunny Friday has the following probability distribution:

x: 4 5 6 7 8 9
P(X = x): 1/12 1/12 1/4 1/4 1/6 1/6

Let g(X) = 2X − 1 represent the amount of money, in dollars, paid to the attendant by the manager. Find the attendant's expected earnings for this particular time period.

Solution: By Theorem 4.1, the attendant can expect to receive

$$E[g(X)] = E(2X - 1) = \sum_{x=4}^{9} (2x - 1)f(x) = (7)\frac{1}{12} + (9)\frac{1}{12} + (11)\frac{1}{4} + (13)\frac{1}{4} + (15)\frac{1}{6} + (17)\frac{1}{6} = \$12.67.$$

Example 4.5: Let X be a random variable with density function

$$f(x) = \begin{cases} \dfrac{x^2}{3}, & -1 < x < 2,\\[4pt] 0, & \text{elsewhere.} \end{cases}$$

Find the expected value of g(X) = 4X + 3.

Solution: By Theorem 4.1, we have

$$E(4X + 3) = \int_{-1}^{2} (4x + 3)\frac{x^2}{3}\,dx = \frac{1}{3}\int_{-1}^{2} (4x^3 + 3x^2)\,dx = 8.$$

We shall now extend our concept of mathematical expectation to the case of two random variables X and Y with joint probability distribution f(x, y).

Definition 4.2: Let X and Y be random variables with joint probability distribution f(x, y). The mean, or expected value, of the random variable g(X, Y) is

$$\mu_{g(X,Y)} = E[g(X, Y)] = \sum_x \sum_y g(x, y)f(x, y)$$

if X and Y are discrete, and

$$\mu_{g(X,Y)} = E[g(X, Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x, y)f(x, y)\,dx\,dy$$

if X and Y are continuous.

Generalization of Definition 4.2 for the calculation of mathematical expectations of functions of several random variables is straightforward.
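The discrete case of Theorem 4.1 is again a one-line weighted sum. This Python sketch (not part of the text) reproduces the car-wash calculation of Example 4.4 exactly:

```python
from fractions import Fraction

# Car-wash distribution from Example 4.4; g(X) = 2X - 1 is the payment rule.
f = {4: Fraction(1, 12), 5: Fraction(1, 12), 6: Fraction(1, 4),
     7: Fraction(1, 4), 8: Fraction(1, 6), 9: Fraction(1, 6)}

expected_pay = sum((2 * x - 1) * p for x, p in f.items())
print(expected_pay, float(expected_pay))  # 38/3, about 12.67 dollars
```

Note that the exact answer is 38/3; the book's $12.67 is that value rounded to cents.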
  • 137. 116 Chapter 4 Mathematical Expectation Example 4.6: Let X and Y be the random variables with joint probability distribution indicated in Table 3.1 on page 96. Find the expected value of g(X, Y ) = XY . The table is reprinted here for convenience. x Row f(x, y) 0 1 2 Totals 0 3 28 9 28 3 28 15 28 y 1 3 14 3 14 0 3 7 2 1 28 0 0 1 28 Column Totals 5 14 15 28 3 28 1 Solution: By Definition 4.2, we write E(XY ) = 2 x=0 2 y=0 xyf(x, y) = (0)(0)f(0, 0) + (0)(1)f(0, 1) + (1)(0)f(1, 0) + (1)(1)f(1, 1) + (2)(0)f(2, 0) = f(1, 1) = 3 14 . Example 4.7: Find E(Y/X) for the density function f(x, y) = x(1+3y2 ) 4 , 0 x 2, 0 y 1, 0, elsewhere. Solution: We have E Y X = 1 0 2 0 y(1 + 3y2 ) 4 dxdy = 1 0 y + 3y3 2 dy = 5 8 . Note that if g(X, Y ) = X in Definition 4.2, we have E(X) = ⎧ ⎨ ⎩ x y xf(x, y) = x xg(x) (discrete case), ∞ −∞ ∞ −∞ xf(x, y) dy dx = ∞ −∞ xg(x) dx (continuous case), where g(x) is the marginal distribution of X. Therefore, in calculating E(X) over a two-dimensional space, one may use either the joint probability distribution of X and Y or the marginal distribution of X. Similarly, we define E(Y ) = ⎧ ⎨ ⎩ y x yf(x, y) = y yh(y) (discrete case), ∞ −∞ ∞ −∞ yf(x, y) dxdy = ∞ −∞ yh(y) dy (continuous case), where h(y) is the marginal distribution of the random variable Y .
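Definition 4.2 in the discrete case is just a double sum over the joint table. The following sketch (ours, not the text's) reproduces E(XY) = 3/14 from Example 4.6 by iterating over the nonzero cells of Table 3.1:

```python
from fractions import Fraction

# Joint distribution f(x, y) of Example 4.6; omitted cells have probability 0.
f = {(0, 0): Fraction(3, 28), (1, 0): Fraction(9, 28), (2, 0): Fraction(3, 28),
     (0, 1): Fraction(3, 14), (1, 1): Fraction(3, 14),
     (0, 2): Fraction(1, 28)}

e_xy = sum(x * y * p for (x, y), p in f.items())
print(e_xy)  # 3/14
```

Only the (1, 1) cell contributes, since every other cell has either x = 0, y = 0, or probability 0 — exactly the simplification made in the worked solution.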
  • 138. / / Exercises 117 Exercises 4.1 The probability distribution of X, the number of imperfections per 10 meters of a synthetic fabric in con- tinuous rolls of uniform width, is given in Exercise 3.13 on page 92 as x 0 1 2 3 4 f(x) 0.41 0.37 0.16 0.05 0.01 Find the average number of imperfections per 10 me- ters of this fabric. 4.2 The probability distribution of the discrete ran- dom variable X is f(x) = 3 x 1 4 x 3 4 3−x , x = 0, 1, 2, 3. Find the mean of X. 4.3 Find the mean of the random variable T repre- senting the total of the three coins in Exercise 3.25 on page 93. 4.4 A coin is biased such that a head is three times as likely to occur as a tail. Find the expected number of tails when this coin is tossed twice. 4.5 In a gambling game, a woman is paid $3 if she draws a jack or a queen and $5 if she draws a king or an ace from an ordinary deck of 52 playing cards. If she draws any other card, she loses. How much should she pay to play if the game is fair? 4.6 An attendant at a car wash is paid according to the number of cars that pass through. Suppose the probabilities are 1/12, 1/12, 1/4, 1/4, 1/6, and 1/6, respectively, that the attendant receives $7, $9, $11, $13, $15, or $17 between 4:00 P.M. and 5:00 P.M. on any sunny Friday. Find the attendant’s expected earn- ings for this particular period. 4.7 By investing in a particular stock, a person can make a profit in one year of $4000 with probability 0.3 or take a loss of $1000 with probability 0.7. What is this person’s expected gain? 4.8 Suppose that an antique jewelry dealer is inter- ested in purchasing a gold necklace for which the prob- abilities are 0.22, 0.36, 0.28, and 0.14, respectively, that she will be able to sell it for a profit of $250, sell it for a profit of $150, break even, or sell it for a loss of $150. What is her expected profit? 4.9 A private pilot wishes to insure his airplane for $200,000. 
The insurance company estimates that a to- tal loss will occur with probability 0.002, a 50% loss with probability 0.01, and a 25% loss with probability 0.1. Ignoring all other partial losses, what premium should the insurance company charge each year to re- alize an average profit of $500? 4.10 Two tire-quality experts examine stacks of tires and assign a quality rating to each tire on a 3-point scale. Let X denote the rating given by expert A and Y denote the rating given by B. The following table gives the joint distribution for X and Y . y f(x, y) 1 2 3 1 0.10 0.05 0.02 x 2 0.10 0.35 0.05 3 0.03 0.10 0.20 Find μX and μY . 4.11 The density function of coded measurements of the pitch diameter of threads of a fitting is f(x) = 4 π(1+x2) , 0 x 1, 0, elsewhere. Find the expected value of X. 4.12 If a dealer’s profit, in units of $5000, on a new automobile can be looked upon as a random variable X having the density function f(x) = 2(1 − x), 0 x 1, 0, elsewhere, find the average profit per automobile. 4.13 The density function of the continuous random variable X, the total number of hours, in units of 100 hours, that a family runs a vacuum cleaner over a pe- riod of one year, is given in Exercise 3.7 on page 92 as f(x) = ⎧ ⎨ ⎩ x, 0 x 1, 2 − x, 1 ≤ x 2, 0, elsewhere. Find the average number of hours per year that families run their vacuum cleaners. 4.14 Find the proportion X of individuals who can be expected to respond to a certain mail-order solicitation if X has the density function f(x) = 2(x+2) 5 , 0 x 1, 0, elsewhere.
  • 139. / / 118 Chapter 4 Mathematical Expectation 4.15 Assume that two random variables (X, Y ) are uniformly distributed on a circle with radius a. Then the joint probability density function is f(x, y) = 1 πa2 , x2 + y2 ≤ a2 , 0, otherwise. Find μX , the expected value of X. 4.16 Suppose that you are inspecting a lot of 1000 light bulbs, among which 20 are defectives. You choose two light bulbs randomly from the lot without replace- ment. Let X1 = 1, if the 1st light bulb is defective, 0, otherwise, X2 = 1, if the 2nd light bulb is defective, 0, otherwise. Find the probability that at least one light bulb chosen is defective. [Hint: Compute P(X1 + X2 = 1).] 4.17 Let X be a random variable with the following probability distribution: x −3 6 9 f(x) 1/6 1/2 1/3 Find μg(X), where g(X) = (2X + 1)2 . 4.18 Find the expected value of the random variable g(X) = X2 , where X has the probability distribution of Exercise 4.2. 4.19 A large industrial firm purchases several new word processors at the end of each year, the exact num- ber depending on the frequency of repairs in the previ- ous year. Suppose that the number of word processors, X, purchased each year has the following probability distribution: x 0 1 2 3 f(x) 1/10 3/10 2/5 1/5 If the cost of the desired model is $1200 per unit and at the end of the year a refund of 50X2 dollars will be issued, how much can this firm expect to spend on new word processors during this year? 4.20 A continuous random variable X has the density function f(x) = e−x , x 0, 0, elsewhere. Find the expected value of g(X) = e2X/3 . 4.21 What is the dealer’s average profit per auto- mobile if the profit on each automobile is given by g(X) = X2 , where X is a random variable having the density function of Exercise 4.12? 4.22 The hospitalization period, in days, for patients following treatment for a certain type of kidney disor- der is a random variable Y = X + 4, where X has the density function f(x) = 32 (x+4)3 , x 0, 0, elsewhere. 
Find the average number of days that a person is hos- pitalized following treatment for this disorder. 4.23 Suppose that X and Y have the following joint probability function: x f(x, y) 2 4 1 0.10 0.15 y 3 0.20 0.30 5 0.10 0.15 (a) Find the expected value of g(X, Y ) = XY 2 . (b) Find μX and μY . 4.24 Referring to the random variables whose joint probability distribution is given in Exercise 3.39 on page 105, (a) find E(X2 Y − 2XY ); (b) find μX − μY . 4.25 Referring to the random variables whose joint probability distribution is given in Exercise 3.51 on page 106, find the mean for the total number of jacks and kings when 3 cards are drawn without replacement from the 12 face cards of an ordinary deck of 52 playing cards. 4.26 Let X and Y be random variables with joint density function f(x, y) = 4xy, 0 x, y 1, 0, elsewhere. Find the expected value of Z = √ X2 + Y 2. 4.27 In Exercise 3.27 on page 93, a density function is given for the time to failure of an important compo- nent of a DVD player. Find the mean number of hours to failure of the component and thus the DVD player. 4.28 Consider the information in Exercise 3.28 on page 93. The problem deals with the weight in ounces of the product in a cereal box, with f(x) = 2 5 , 23.75 ≤ x ≤ 26.25, 0, elsewhere.
  • 140. 4.2 Variance and Covariance of Random Variables 119 (a) Plot the density function. (b) Compute the expected value, or mean weight, in ounces. (c) Are you surprised at your answer in (b)? Explain why or why not. 4.29 Exercise 3.29 on page 93 dealt with an impor- tant particle size distribution characterized by f(x) = 3x−4 , x 1, 0, elsewhere. (a) Plot the density function. (b) Give the mean particle size. 4.30 In Exercise 3.31 on page 94, the distribution of times before a major repair of a washing machine was given as f(y) = 1 4 e−y/4 , y ≥ 0, 0, elsewhere. What is the population mean of the times to repair? 4.31 Consider Exercise 3.32 on page 94. (a) What is the mean proportion of the budget allo- cated to environmental and pollution control? (b) What is the probability that a company selected at random will have allocated to environmental and pollution control a proportion that exceeds the population mean given in (a)? 4.32 In Exercise 3.13 on page 92, the distribution of the number of imperfections per 10 meters of synthetic fabric is given by x 0 1 2 3 4 f(x) 0.41 0.37 0.16 0.05 0.01 (a) Plot the probability function. (b) Find the expected number of imperfections, E(X) = μ. (c) Find E(X2 ). 4.2 Variance and Covariance of Random Variables The mean, or expected value, of a random variable X is of special importance in statistics because it describes where the probability distribution is centered. By itself, however, the mean does not give an adequate description of the shape of the distribution. We also need to characterize the variability in the distribution. In Figure 4.1, we have the histograms of two discrete probability distributions that have the same mean, μ = 2, but differ considerably in variability, or the dispersion of their observations about the mean. 1 2 3 0 1 2 3 4 x (a) (b) x Figure 4.1: Distributions with equal means and unequal dispersions. 
The most important measure of variability of a random variable X is obtained by applying Theorem 4.1 with g(X) = (X − μ)2 . The quantity is referred to as the variance of the random variable X or the variance of the probability
  • 141. 120 Chapter 4 Mathematical Expectation distribution of X and is denoted by Var(X) or the symbol σ2 X , or simply by σ2 when it is clear to which random variable we refer. Definition 4.3: Let X be a random variable with probability distribution f(x) and mean μ. The variance of X is σ2 = E[(X − μ)2 ] = x (x − μ)2 f(x), if X is discrete, and σ2 = E[(X − μ)2 ] = ∞ −∞ (x − μ)2 f(x) dx, if X is continuous. The positive square root of the variance, σ, is called the standard deviation of X. The quantity x−μ in Definition 4.3 is called the deviation of an observation from its mean. Since the deviations are squared and then averaged, σ2 will be much smaller for a set of x values that are close to μ than it will be for a set of values that vary considerably from μ. Example 4.8: Let the random variable X represent the number of automobiles that are used for official business purposes on any given workday. The probability distribution for company A [Figure 4.1(a)] is x 1 2 3 f(x) 0.3 0.4 0.3 and that for company B [Figure 4.1(b)] is x 0 1 2 3 4 f(x) 0.2 0.1 0.3 0.3 0.1 Show that the variance of the probability distribution for company B is greater than that for company A. Solution: For company A, we find that μA = E(X) = (1)(0.3) + (2)(0.4) + (3)(0.3) = 2.0, and then σ2 A = 3 x=1 (x − 2)2 = (1 − 2)2 (0.3) + (2 − 2)2 (0.4) + (3 − 2)2 (0.3) = 0.6. For company B, we have μB = E(X) = (0)(0.2) + (1)(0.1) + (2)(0.3) + (3)(0.3) + (4)(0.1) = 2.0, and then σ2 B = 4 x=0 (x − 2)2 f(x) = (0 − 2)2 (0.2) + (1 − 2)2 (0.1) + (2 − 2)2 (0.3) + (3 − 2)2 (0.3) + (4 − 2)2 (0.1) = 1.6.
  • 142. 4.2 Variance and Covariance of Random Variables 121 Clearly, the variance of the number of automobiles that are used for official business purposes is greater for company B than for company A. An alternative and preferred formula for finding σ2 , which often simplifies the calculations, is stated in the following theorem. Theorem 4.2: The variance of a random variable X is σ2 = E(X2 ) − μ2 . Proof: For the discrete case, we can write σ2 = x (x − μ)2 f(x) = x (x2 − 2μx + μ2 )f(x) = x x2 f(x) − 2μ x xf(x) + μ2 x f(x). Since μ = x xf(x) by definition, and x f(x) = 1 for any discrete probability distribution, it follows that σ2 = x x2 f(x) − μ2 = E(X2 ) − μ2 . For the continuous case the proof is step by step the same, with summations replaced by integrations. Example 4.9: Let the random variable X represent the number of defective parts for a machine when 3 parts are sampled from a production line and tested. The following is the probability distribution of X. x 0 1 2 3 f(x) 0.51 0.38 0.10 0.01 Using Theorem 4.2, calculate σ2 . Solution: First, we compute μ = (0)(0.51) + (1)(0.38) + (2)(0.10) + (3)(0.01) = 0.61. Now, E(X2 ) = (0)(0.51) + (1)(0.38) + (4)(0.10) + (9)(0.01) = 0.87. Therefore, σ2 = 0.87 − (0.61)2 = 0.4979. Example 4.10: The weekly demand for a drinking-water product, in thousands of liters, from a local chain of efficiency stores is a continuous random variable X having the probability density f(x) = 2(x − 1), 1 x 2, 0, elsewhere. Find the mean and variance of X.
Solution: Calculating E(X) and E(X²), we have

$$\mu = E(X) = 2\int_1^2 x(x - 1)\,dx = \frac{5}{3} \quad\text{and}\quad E(X^2) = 2\int_1^2 x^2(x - 1)\,dx = \frac{17}{6}.$$

Therefore,

$$\sigma^2 = \frac{17}{6} - \left(\frac{5}{3}\right)^2 = \frac{1}{18}.$$

At this point, the variance or standard deviation has meaning only when we compare two or more distributions that have the same units of measurement. Therefore, we could compare the variances of the distributions of contents, measured in liters, of bottles of orange juice from two companies, and the larger value would indicate the company whose product was more variable or less uniform. It would not be meaningful to compare the variance of a distribution of heights to the variance of a distribution of aptitude scores. In Section 4.4, we show how the standard deviation can be used to describe a single distribution of observations.

We shall now extend our concept of the variance of a random variable X to include random variables related to X. For the random variable g(X), the variance is denoted by $\sigma^2_{g(X)}$ and is calculated by means of the following theorem.

Theorem 4.3: Let X be a random variable with probability distribution f(x). The variance of the random variable g(X) is

$$\sigma^2_{g(X)} = E\{[g(X) - \mu_{g(X)}]^2\} = \sum_x [g(x) - \mu_{g(X)}]^2 f(x)$$

if X is discrete, and

$$\sigma^2_{g(X)} = E\{[g(X) - \mu_{g(X)}]^2\} = \int_{-\infty}^{\infty} [g(x) - \mu_{g(X)}]^2 f(x)\,dx$$

if X is continuous.

Proof: Since g(X) is itself a random variable with mean $\mu_{g(X)}$ as defined in Theorem 4.1, it follows from Definition 4.3 that

$$\sigma^2_{g(X)} = E\{[g(X) - \mu_{g(X)}]^2\}.$$

Now, applying Theorem 4.1 again to the random variable $[g(X) - \mu_{g(X)}]^2$ completes the proof.

Example 4.11: Calculate the variance of g(X) = 2X + 3, where X is a random variable with probability distribution

x: 0 1 2 3
f(x): 1/4 1/8 1/2 1/8
  • 144. 4.2 Variance and Covariance of Random Variables 123 Solution: First, we find the mean of the random variable 2X +3. According to Theorem 4.1, μ2X+3 = E(2X + 3) = 3 x=0 (2x + 3)f(x) = 6. Now, using Theorem 4.3, we have σ2 2X+3 = E{[(2X + 3) − μ2x+3]2 } = E[(2X + 3 − 6)2 ] = E(4X2 − 12X + 9) = 3 x=0 (4x2 − 12x + 9)f(x) = 4. Example 4.12: Let X be a random variable having the density function given in Example 4.5 on page 115. Find the variance of the random variable g(X) = 4X + 3. Solution: In Example 4.5, we found that μ4X+3 = 8. Now, using Theorem 4.3, σ2 4X+3 = E{[(4X + 3) − 8]2 } = E[(4X − 5)2 ] = 2 −1 (4x − 5)2 x2 3 dx = 1 3 2 −1 (16x4 − 40x3 + 25x2 ) dx = 51 5 . If g(X, Y ) = (X −μX)(Y −μY ), where μX = E(X) and μY = E(Y ), Definition 4.2 yields an expected value called the covariance of X and Y , which we denote by σXY or Cov(X, Y ). Definition 4.4: Let X and Y be random variables with joint probability distribution f(x, y). The covariance of X and Y is σXY = E[(X − μX )(Y − μY )] = x y (x − μX )(y − μy)f(x, y) if X and Y are discrete, and σXY = E[(X − μX)(Y − μY )] = ∞ −∞ ∞ −∞ (x − μX )(y − μy)f(x, y) dx dy if X and Y are continuous. The covariance between two random variables is a measure of the nature of the association between the two. If large values of X often result in large values of Y or small values of X result in small values of Y , positive X −μX will often result in positive Y −μY and negative X −μX will often result in negative Y −μY . Thus, the product (X − μX )(Y − μY ) will tend to be positive. On the other hand, if large X values often result in small Y values, the product (X −μX )(Y −μY ) will tend to be negative. The sign of the covariance indicates whether the relationship between two dependent random variables is positive or negative. When X and Y are statistically independent, it can be shown that the covariance is zero (see Corollary 4.5). The converse, however, is not generally true. 
Two variables may have zero covariance and still not be statistically independent. Note that the covariance only describes the linear relationship between two random variables. Therefore, if a covariance between X and Y is zero, X and Y may have a nonlinear relationship, which means that they are not necessarily independent.
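Before moving on to the covariance formulas, it is worth noting that the shortcut of Theorem 4.2 is easy to verify by machine. This Python sketch (not from the text) reproduces Example 4.9, where μ = 0.61 and E(X²) = 0.87:

```python
# Example 4.9 via Theorem 4.2: sigma^2 = E(X^2) - mu^2.
f = {0: 0.51, 1: 0.38, 2: 0.10, 3: 0.01}

mu = sum(x * p for x, p in f.items())
e_x2 = sum(x**2 * p for x, p in f.items())
sigma2 = e_x2 - mu**2
print(round(mu, 2), round(e_x2, 2), round(sigma2, 4))  # 0.61 0.87 0.4979
```

Computing E(X²) and subtracting μ² requires only one pass over the distribution, which is why Theorem 4.2 is the preferred formula in practice.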
  • 145. 124 Chapter 4 Mathematical Expectation The alternative and preferred formula for σXY is stated by Theorem 4.4. Theorem 4.4: The covariance of two random variables X and Y with means μX and μY , respec- tively, is given by σXY = E(XY ) − μX μY . Proof: For the discrete case, we can write σXY = x y (x − μX )(y − μY )f(x, y) = x y xyf(x, y) − μX x y yf(x, y) − μY x y xf(x, y) + μX μY x y f(x, y). Since μX = x xf(x, y), μY = y yf(x, y), and x y f(x, y) = 1 for any joint discrete distribution, it follows that σXY = E(XY ) − μX μY − μY μX + μX μY = E(XY ) − μX μY . For the continuous case, the proof is identical with summations replaced by inte- grals. Example 4.13: Example 3.14 on page 95 describes a situation involving the number of blue refills X and the number of red refills Y . Two refills for a ballpoint pen are selected at random from a certain box, and the following is the joint probability distribution: x f(x, y) 0 1 2 h(y) 0 3 28 9 28 3 28 15 28 y 1 3 14 3 14 0 3 7 2 1 28 0 0 1 28 g(x) 5 14 15 28 3 28 1 Find the covariance of X and Y . Solution: From Example 4.6, we see that E(XY ) = 3/14. Now μX = 2 x=0 xg(x) = (0) 5 14 + (1) 15 28 + (2) 3 28 = 3 4 , and μY = 2 y=0 yh(y) = (0) 15 28 + (1) 3 7 + (2) 1 28 = 1 2 .
  • 146. 4.2 Variance and Covariance of Random Variables 125 Therefore, σXY = E(XY ) − μX μY = 3 14 − 3 4 1 2 = − 9 56 . Example 4.14: The fraction X of male runners and the fraction Y of female runners who compete in marathon races are described by the joint density function f(x, y) = 8xy, 0 ≤ y ≤ x ≤ 1, 0, elsewhere. Find the covariance of X and Y . Solution: We first compute the marginal density functions. They are g(x) = 4x3 , 0 ≤ x ≤ 1, 0, elsewhere, and h(y) = 4y(1 − y2 ), 0 ≤ y ≤ 1, 0, elsewhere. From these marginal density functions, we compute μX = E(X) = 1 0 4x4 dx = 4 5 and μY = 1 0 4y2 (1 − y2 ) dy = 8 15 . From the joint density function given above, we have E(XY ) = 1 0 1 y 8x2 y2 dx dy = 4 9 . Then σXY = E(XY ) − μX μY = 4 9 − 4 5 8 15 = 4 225 . Although the covariance between two random variables does provide informa- tion regarding the nature of the relationship, the magnitude of σXY does not indi- cate anything regarding the strength of the relationship, since σXY is not scale-free. Its magnitude will depend on the units used to measure both X and Y . There is a scale-free version of the covariance called the correlation coefficient that is used widely in statistics. Definition 4.5: Let X and Y be random variables with covariance σXY and standard deviations σX and σY , respectively. The correlation coefficient of X and Y is ρXY = σXY σX σY . It should be clear to the reader that ρXY is free of the units of X and Y . The correlation coefficient satisfies the inequality −1 ≤ ρXY ≤ 1. It assumes a value of zero when σXY = 0. Where there is an exact linear dependency, say Y ≡ a + bX,
$\rho_{XY} = 1$ if $b > 0$ and $\rho_{XY} = -1$ if $b < 0$. (See Exercise 4.48.) The correlation coefficient is the subject of more discussion in Chapter 12, where we deal with linear regression.

Example 4.15: Find the correlation coefficient between X and Y in Example 4.13.

Solution: Since

$$E(X^2) = (0^2)\frac{5}{14} + (1^2)\frac{15}{28} + (2^2)\frac{3}{28} = \frac{27}{28} \quad\text{and}\quad E(Y^2) = (0^2)\frac{15}{28} + (1^2)\frac{3}{7} + (2^2)\frac{1}{28} = \frac{4}{7},$$

we obtain

$$\sigma_X^2 = \frac{27}{28} - \left(\frac{3}{4}\right)^2 = \frac{45}{112} \quad\text{and}\quad \sigma_Y^2 = \frac{4}{7} - \left(\frac{1}{2}\right)^2 = \frac{9}{28}.$$

Therefore, the correlation coefficient between X and Y is

$$\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X\sigma_Y} = \frac{-9/56}{\sqrt{(45/112)(9/28)}} = -\frac{1}{\sqrt{5}}.$$

Example 4.16: Find the correlation coefficient of X and Y in Example 4.14.

Solution: Because

$$E(X^2) = \int_0^1 4x^5\,dx = \frac{2}{3} \quad\text{and}\quad E(Y^2) = \int_0^1 4y^3(1 - y^2)\,dy = 1 - \frac{2}{3} = \frac{1}{3},$$

we conclude that

$$\sigma_X^2 = \frac{2}{3} - \left(\frac{4}{5}\right)^2 = \frac{2}{75} \quad\text{and}\quad \sigma_Y^2 = \frac{1}{3} - \left(\frac{8}{15}\right)^2 = \frac{11}{225}.$$

Hence,

$$\rho_{XY} = \frac{4/225}{\sqrt{(2/75)(11/225)}} = \frac{4}{\sqrt{66}}.$$

Note that although the covariance in Example 4.15 is larger in magnitude (disregarding the sign) than that in Example 4.16, the relationship of the magnitudes of the correlation coefficients in these two examples is just the reverse. This is evidence that we cannot look at the magnitude of the covariance to decide on how strong the relationship is.
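The whole chain — covariance via Theorem 4.4 and correlation via Definition 4.5 — can be checked in a few lines. This Python sketch (ours, not the book's) reproduces Examples 4.13 and 4.15 from the joint table:

```python
from fractions import Fraction
from math import sqrt

# Joint distribution of Example 4.13; omitted cells have probability 0.
f = {(0, 0): Fraction(3, 28), (1, 0): Fraction(9, 28), (2, 0): Fraction(3, 28),
     (0, 1): Fraction(3, 14), (1, 1): Fraction(3, 14),
     (0, 2): Fraction(1, 28)}

e_xy = sum(x * y * p for (x, y), p in f.items())
mu_x = sum(x * p for (x, y), p in f.items())
mu_y = sum(y * p for (x, y), p in f.items())
cov = e_xy - mu_x * mu_y                      # Theorem 4.4

var_x = sum(x * x * p for (x, y), p in f.items()) - mu_x**2
var_y = sum(y * y * p for (x, y), p in f.items()) - mu_y**2
rho = float(cov) / sqrt(var_x * var_y)        # Definition 4.5
print(cov, round(rho, 4))  # -9/56 -0.4472 (i.e., -1/sqrt(5))
```

Working from the joint table also yields the marginal moments directly, mirroring the remark after Definition 4.2 that E(X) may be computed from either the joint or the marginal distribution.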
  • 148. / / Exercises 127 Exercises 4.33 Use Definition 4.3 on page 120 to find the vari- ance of the random variable X of Exercise 4.7 on page 117. 4.34 Let X be a random variable with the following probability distribution: x −2 3 5 f(x) 0.3 0.2 0.5 Find the standard deviation of X. 4.35 The random variable X, representing the num- ber of errors per 100 lines of software code, has the following probability distribution: x 2 3 4 5 6 f(x) 0.01 0.25 0.4 0.3 0.04 Using Theorem 4.2 on page 121, find the variance of X. 4.36 Suppose that the probabilities are 0.4, 0.3, 0.2, and 0.1, respectively, that 0, 1, 2, or 3 power failures will strike a certain subdivision in any given year. Find the mean and variance of the random variable X repre- senting the number of power failures striking this sub- division. 4.37 A dealer’s profit, in units of $5000, on a new automobile is a random variable X having the density function given in Exercise 4.12 on page 117. Find the variance of X. 4.38 The proportion of people who respond to a cer- tain mail-order solicitation is a random variable X hav- ing the density function given in Exercise 4.14 on page 117. Find the variance of X. 4.39 The total number of hours, in units of 100 hours, that a family runs a vacuum cleaner over a period of one year is a random variable X having the density function given in Exercise 4.13 on page 117. Find the variance of X. 4.40 Referring to Exercise 4.14 on page 117, find σ2 g(X) for the function g(X) = 3X2 + 4. 4.41 Find the standard deviation of the random vari- able g(X) = (2X + 1)2 in Exercise 4.17 on page 118. 4.42 Using the results of Exercise 4.21 on page 118, find the variance of g(X) = X2 , where X is a random variable having the density function given in Exercise 4.12 on page 117. 4.43 The length of time, in minutes, for an airplane to obtain clearance for takeoff at a certain airport is a random variable Y = 3X − 2, where X has the density function f(x) = 1 4 e−x/4 , x 0 0, elsewhere. 
Find the mean and variance of the random variable Y.

4.44 Find the covariance of the random variables X and Y of Exercise 3.39 on page 105.

4.45 Find the covariance of the random variables X and Y of Exercise 3.49 on page 106.

4.46 Find the covariance of the random variables X and Y of Exercise 3.44 on page 105.

4.47 For the random variables X and Y whose joint density function is given in Exercise 3.40 on page 105, find the covariance.

4.48 Given a random variable X, with standard deviation σX, and a random variable Y = a + bX, show that if b < 0, the correlation coefficient ρXY = −1, and if b > 0, ρXY = 1.

4.49 Consider the situation in Exercise 4.32 on page 119. The distribution of the number of imperfections per 10 meters of synthetic fabric is given by

x: 0 1 2 3 4
f(x): 0.41 0.37 0.16 0.05 0.01

Find the variance and standard deviation of the number of imperfections.

4.50 For a laboratory assignment, if the equipment is working, the density function of the observed outcome X is f(x) = 2(1 − x) for 0 < x < 1, and f(x) = 0 otherwise. Find the variance and standard deviation of X.

4.51 For the random variables X and Y in Exercise 3.39 on page 105, determine the correlation coefficient between X and Y.

4.52 Random variables X and Y follow a joint distribution f(x, y) = 2 for 0 < x ≤ y < 1, and f(x, y) = 0 otherwise. Determine the correlation coefficient between X and Y.
  • 149. 128 Chapter 4 Mathematical Expectation 4.3 Means and Variances of Linear Combinations of Random Variables We now develop some useful properties that will simplify the calculations of means and variances of random variables that appear in later chapters. These properties will permit us to deal with expectations in terms of other parameters that are either known or easily computed. All the results that we present here are valid for both discrete and continuous random variables. Proofs are given only for the continuous case. We begin with a theorem and two corollaries that should be, intuitively, reasonable to the reader. Theorem 4.5: If a and b are constants, then E(aX + b) = aE(X) + b. Proof: By the definition of expected value, E(aX + b) = ∞ −∞ (ax + b)f(x) dx = a ∞ −∞ xf(x) dx + b ∞ −∞ f(x) dx. The first integral on the right is E(X) and the second integral equals 1. Therefore, we have E(aX + b) = aE(X) + b. Corollary 4.1: Setting a = 0, we see that E(b) = b. Corollary 4.2: Setting b = 0, we see that E(aX) = aE(X). Example 4.17: Applying Theorem 4.5 to the discrete random variable f(X) = 2X − 1, rework Example 4.4 on page 115. Solution: According to Theorem 4.5, we can write E(2X − 1) = 2E(X) − 1. Now μ = E(X) = 9 x=4 xf(x) = (4) 1 12 + (5) 1 12 + (6) 1 4 + (7) 1 4 + (8) 1 6 + (9) 1 6 = 41 6 . Therefore, μ2X−1 = (2) 41 6 − 1 = $12.67, as before.
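The identity E(aX + b) = aE(X) + b of Theorem 4.5 can be verified numerically for any distribution. This Python sketch (not part of the text) checks it on the car-wash distribution used in Example 4.17, with a = 2 and b = −1:

```python
from fractions import Fraction

# Car-wash distribution (Examples 4.4 and 4.17); check E(aX + b) = a*E(X) + b.
f = {4: Fraction(1, 12), 5: Fraction(1, 12), 6: Fraction(1, 4),
     7: Fraction(1, 4), 8: Fraction(1, 6), 9: Fraction(1, 6)}

a, b = 2, -1
lhs = sum((a * x + b) * p for x, p in f.items())   # E(aX + b) directly
rhs = a * sum(x * p for x, p in f.items()) + b     # a*E(X) + b
print(lhs == rhs, float(lhs))  # True 12.66...
```

Both routes give exactly 38/3, so only E(X) itself ever needs to be computed when g(X) is linear.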
  • 150. 4.3 Means and Variances of Linear Combinations of Random Variables 129 Example 4.18: Applying Theorem 4.5 to the continuous random variable g(X) = 4X + 3, rework Example 4.5 on page 115. Solution: For Example 4.5, we may use Theorem 4.5 to write E(4X + 3) = 4E(X) + 3. Now E(X) = 2 −1 x x2 3 dx = 2 −1 x3 3 dx = 5 4 . Therefore, E(4X + 3) = (4) 5 4 + 3 = 8, as before. Theorem 4.6: The expected value of the sum or difference of two or more functions of a random variable X is the sum or difference of the expected values of the functions. That is, E[g(X) ± h(X)] = E[g(X)] ± E[h(X)]. Proof: By definition, E[g(X) ± h(X)] = ∞ −∞ [g(x) ± h(x)]f(x) dx = ∞ −∞ g(x)f(x) dx ± ∞ −∞ h(x)f(x) dx = E[g(X)] ± E[h(X)]. Example 4.19: Let X be a random variable with probability distribution as follows: x 0 1 2 3 f(x) 1 3 1 2 0 1 6 Find the expected value of Y = (X − 1)2 . Solution: Applying Theorem 4.6 to the function Y = (X − 1)2 , we can write E[(X − 1)2 ] = E(X2 − 2X + 1) = E(X2 ) − 2E(X) + E(1). From Corollary 4.1, E(1) = 1, and by direct computation, E(X) = (0) 1 3 + (1) 1 2 + (2)(0) + (3) 1 6 = 1 and E(X2 ) = (0) 1 3 + (1) 1 2 + (4)(0) + (9) 1 6 = 2. Hence, E[(X − 1)2 ] = 2 − (2)(1) + 1 = 1.
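Theorem 4.6 is what lets Example 4.19 expand E[(X − 1)²] into E(X²) − 2E(X) + 1. A short Python sketch (ours, not the book's) confirms that the expanded form and the direct computation agree:

```python
from fractions import Fraction

# Distribution of Example 4.19; note f(2) = 0.
f = {0: Fraction(1, 3), 1: Fraction(1, 2), 2: Fraction(0), 3: Fraction(1, 6)}

e_x = sum(x * p for x, p in f.items())
e_x2 = sum(x**2 * p for x, p in f.items())
expanded = e_x2 - 2 * e_x + 1                       # Theorem 4.6 route
direct = sum((x - 1)**2 * p for x, p in f.items())  # straight from Theorem 4.1
print(expanded, direct)  # 1 1
```

Here E(X) = 1 and E(X²) = 2, so both routes give 1, as in the text.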
Example 4.20: The weekly demand for a certain drink, in thousands of liters, at a chain of convenience stores is a continuous random variable g(X) = X² + X − 2, where X has the density function

f(x) = 2(x − 1), 1 < x < 2,
f(x) = 0, elsewhere.

Find the expected value of the weekly demand for the drink.

Solution: By Theorem 4.6, we write E(X² + X − 2) = E(X²) + E(X) − E(2). From Corollary 4.1, E(2) = 2, and by direct integration,

E(X) = ∫_{1}^{2} 2x(x − 1) dx = 5/3 and E(X²) = ∫_{1}^{2} 2x²(x − 1) dx = 17/6.

Now E(X² + X − 2) = 17/6 + 5/3 − 2 = 5/2, so the average weekly demand for the drink from this chain of convenience stores is 2500 liters.

Suppose that we have two random variables X and Y with joint probability distribution f(x, y). Two additional properties that will be very useful in succeeding chapters involve the expected values of the sum, difference, and product of these two random variables. First, however, let us prove a theorem on the expected value of the sum or difference of functions of the given variables. This, of course, is merely an extension of Theorem 4.6.

Theorem 4.7: The expected value of the sum or difference of two or more functions of the random variables X and Y is the sum or difference of the expected values of the functions. That is,

E[g(X, Y) ± h(X, Y)] = E[g(X, Y)] ± E[h(X, Y)].

Proof: By Definition 4.2,

E[g(X, Y) ± h(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} [g(x, y) ± h(x, y)]f(x, y) dx dy
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y)f(x, y) dx dy ± ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(x, y)f(x, y) dx dy
= E[g(X, Y)] ± E[h(X, Y)].

Corollary 4.3: Setting g(X, Y) = g(X) and h(X, Y) = h(Y), we see that E[g(X) ± h(Y)] = E[g(X)] ± E[h(Y)].
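The integrals in Example 4.20 can be checked by elementary quadrature. A minimal sketch using composite Simpson's rule (which is exact, up to floating-point rounding, for the cubic integrands here) follows; the function names are ours, not the book's:

```python
def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

f = lambda x: 2 * (x - 1)    # density of Example 4.20 on 1 < x < 2
g = lambda x: x**2 + x - 2   # weekly demand, in thousands of liters

ex     = simpson(lambda x: x * f(x), 1, 2)         # E(X)      -> 5/3
ex2    = simpson(lambda x: x**2 * f(x), 1, 2)      # E(X^2)    -> 17/6
demand = simpson(lambda x: g(x) * f(x), 1, 2)      # E(g(X))   -> 5/2
```

The last value, 2.5 thousand liters, reproduces the 2500-liter answer.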
Corollary 4.4: Setting g(X, Y) = X and h(X, Y) = Y, we see that E[X ± Y] = E[X] ± E[Y].

If X represents the daily production of some item from machine A and Y the daily production of the same kind of item from machine B, then X + Y represents the total number of items produced daily by both machines. Corollary 4.4 states that the average daily production for both machines is equal to the sum of the average daily production of each machine.

Theorem 4.8: Let X and Y be two independent random variables. Then

E(XY) = E(X)E(Y).

Proof: By Definition 4.2,

E(XY) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f(x, y) dx dy.

Since X and Y are independent, we may write f(x, y) = g(x)h(y), where g(x) and h(y) are the marginal distributions of X and Y, respectively. Hence,

E(XY) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy g(x)h(y) dx dy = [∫_{−∞}^{∞} x g(x) dx][∫_{−∞}^{∞} y h(y) dy] = E(X)E(Y).

Theorem 4.8 can be illustrated for discrete variables by considering the experiment of tossing a green die and a red die. Let the random variable X represent the outcome on the green die and the random variable Y represent the outcome on the red die. Then XY represents the product of the numbers that occur on the pair of dice. In the long run, the average of the products of the numbers is equal to the product of the average number that occurs on the green die and the average number that occurs on the red die.

Corollary 4.5: Let X and Y be two independent random variables. Then σ_XY = 0.

Proof: The proof can be carried out by using Theorems 4.4 and 4.8.

Example 4.21: It is known that the ratio of gallium to arsenide does not affect the functioning of gallium-arsenide wafers, which are the main components of microchips. Let X denote the ratio of gallium to arsenide and Y denote the functional wafers retrieved during a 1-hour period. X and Y are independent random variables with the joint density function

f(x, y) = x(1 + 3y²)/4, 0 < x < 2, 0 < y < 1,
f(x, y) = 0, elsewhere.
Show that E(XY) = E(X)E(Y), as Theorem 4.8 suggests.

Solution: By definition,

E(XY) = ∫_{0}^{1} ∫_{0}^{2} [x²y(1 + 3y²)/4] dx dy = 5/6, E(X) = 4/3, and E(Y) = 5/8.

Hence, E(X)E(Y) = (4/3)(5/8) = 5/6 = E(XY).

We conclude this section by proving one theorem and presenting several corollaries that are useful for calculating variances or standard deviations.

Theorem 4.9: If X and Y are random variables with joint probability distribution f(x, y) and a, b, and c are constants, then

σ²_{aX+bY+c} = a²σ²_X + b²σ²_Y + 2abσ_XY.

Proof: By definition, σ²_{aX+bY+c} = E{[(aX + bY + c) − μ_{aX+bY+c}]²}. Now

μ_{aX+bY+c} = E(aX + bY + c) = aE(X) + bE(Y) + c = aμ_X + bμ_Y + c,

by using Corollary 4.4 followed by Corollary 4.2. Therefore,

σ²_{aX+bY+c} = E{[a(X − μ_X) + b(Y − μ_Y)]²}
= a²E[(X − μ_X)²] + b²E[(Y − μ_Y)²] + 2abE[(X − μ_X)(Y − μ_Y)]
= a²σ²_X + b²σ²_Y + 2abσ_XY.

Using Theorem 4.9, we have the following corollaries.

Corollary 4.6: Setting b = 0, we see that σ²_{aX+c} = a²σ²_X = a²σ².

Corollary 4.7: Setting a = 1 and b = 0, we see that σ²_{X+c} = σ²_X = σ².

Corollary 4.8: Setting b = 0 and c = 0, we see that σ²_{aX} = a²σ²_X = a²σ².

Corollaries 4.6 and 4.7 state that the variance is unchanged if a constant is added to or subtracted from a random variable. The addition or subtraction of a constant simply shifts the values of X to the right or to the left but does not change their variability. However, if a random variable is multiplied or divided by a constant, then Corollaries 4.6 and 4.8 state that the variance is multiplied or divided by the square of the constant.
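The three expectations in Example 4.21 can be confirmed by nested quadrature over the wafer density. The sketch below reuses a small Simpson's-rule helper (our own construction; the integrands are low-degree polynomials, so the rule is exact up to rounding):

```python
def simpson(f, a, b, n=200):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Joint density of Example 4.21: f(x, y) = x(1 + 3y^2)/4 on 0 < x < 2, 0 < y < 1
f = lambda x, y: x * (1 + 3 * y**2) / 4

def E(g):
    """E[g(X, Y)] = double integral of g(x, y) f(x, y) over the support."""
    return simpson(lambda y: simpson(lambda x: g(x, y) * f(x, y), 0, 2), 0, 1)

e_xy = E(lambda x, y: x * y)   # -> 5/6
e_x  = E(lambda x, y: x)       # -> 4/3
e_y  = E(lambda x, y: y)       # -> 5/8
```

As Theorem 4.8 requires for independent X and Y, `e_x * e_y` equals `e_xy`.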
Corollary 4.9: If X and Y are independent random variables, then

σ²_{aX+bY} = a²σ²_X + b²σ²_Y.

The result stated in Corollary 4.9 is obtained from Theorem 4.9 by invoking Corollary 4.5.

Corollary 4.10: If X and Y are independent random variables, then

σ²_{aX−bY} = a²σ²_X + b²σ²_Y.

Corollary 4.10 follows when b in Corollary 4.9 is replaced by −b. Generalizing to a linear combination of n independent random variables, we have Corollary 4.11.

Corollary 4.11: If X1, X2, . . . , Xn are independent random variables, then

σ²_{a1X1+a2X2+···+anXn} = a1²σ²_{X1} + a2²σ²_{X2} + · · · + an²σ²_{Xn}.

Example 4.22: If X and Y are random variables with variances σ²_X = 2 and σ²_Y = 4 and covariance σ_XY = −2, find the variance of the random variable Z = 3X − 4Y + 8.

Solution:
σ²_Z = σ²_{3X−4Y+8} = σ²_{3X−4Y} (by Corollary 4.6)
= 9σ²_X + 16σ²_Y − 24σ_XY (by Theorem 4.9)
= (9)(2) + (16)(4) − (24)(−2) = 130.

Example 4.23: Let X and Y denote the amounts of two different types of impurities in a batch of a certain chemical product. Suppose that X and Y are independent random variables with variances σ²_X = 2 and σ²_Y = 3. Find the variance of the random variable Z = 3X − 2Y + 5.

Solution:
σ²_Z = σ²_{3X−2Y+5} = σ²_{3X−2Y} (by Corollary 4.6)
= 9σ²_X + 4σ²_Y (by Corollary 4.10)
= (9)(2) + (4)(3) = 30.

What If the Function Is Nonlinear?

In what has preceded this section, we have dealt with properties of linear functions of random variables for very important reasons. Chapters 8 through 15 will discuss and illustrate practical real-world problems in which the analyst constructs a linear model to describe a data set and thus to describe or explain the behavior of a certain scientific phenomenon. Thus, it is natural that expected values and variances of linear combinations of random variables are encountered.
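The variance formula of Theorem 4.9, as applied in Examples 4.22 and 4.23, reduces to one line of code. A small sketch (the helper name is ours):

```python
def var_linear(a, b, var_x, var_y, cov_xy=0, c=0):
    """Theorem 4.9: Var(aX + bY + c) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y).
    The constant c never contributes (Corollary 4.6), so it is ignored."""
    return a**2 * var_x + b**2 * var_y + 2 * a * b * cov_xy

# Example 4.22: Z = 3X - 4Y + 8 with Var(X) = 2, Var(Y) = 4, Cov(X, Y) = -2
z1 = var_linear(3, -4, 2, 4, cov_xy=-2, c=8)   # -> 130

# Example 4.23: Z = 3X - 2Y + 5 with independent X, Y; Var(X) = 2, Var(Y) = 3
z2 = var_linear(3, -2, 2, 3, c=5)              # -> 30
```

Note that `2*a*b*cov_xy` contributes +48 in Example 4.22 because both a·b and the covariance are negative.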
However, there are situations in which properties of nonlinear functions of random variables become important. Certainly there are many scientific phenomena that are nonlinear, and certainly statistical modeling using nonlinear functions is very important. In fact, in Chapter 12, we deal with the modeling of what have become standard nonlinear models. Indeed, even a simple function of random variables, such as Z = X/Y , occurs quite frequently in practice, and yet unlike in the case of
the expected value of linear combinations of random variables, there is no simple general rule. For example, E(Z) = E(X/Y) ≠ E(X)/E(Y), except in very special circumstances.

The material provided by Theorems 4.5 through 4.9 and the various corollaries is extremely useful in that there are no restrictions on the form of the density or probability functions, apart from the property of independence when it is required, as in the corollaries following Theorem 4.9. To illustrate, consider Example 4.23; the variance of Z = 3X − 2Y + 5 does not require restrictions on the distributions of the amounts X and Y of the two types of impurities. Only independence between X and Y is required.

Now, we do have at our disposal the capacity to find μ_{g(X)} and σ²_{g(X)} for any function g(·) from first principles established in Theorems 4.1 and 4.3, where it is assumed that the corresponding distribution f(x) is known. Exercises 4.40, 4.41, and 4.42, among others, illustrate the use of these theorems. Thus, if the function g(x) is nonlinear and the density function (or probability function in the discrete case) is known, μ_{g(X)} and σ²_{g(X)} can be evaluated exactly. But, similar to the rules given for linear combinations, are there rules for nonlinear functions that can be used when the form of the distribution of the pertinent random variables is not known?

In general, suppose X is a random variable and Y = g(X). The general solution for E(Y) or Var(Y) can be difficult to find and depends on the complexity of the function g(·). However, there are approximations available that depend on a linear approximation of the function g(x). For example, suppose we denote E(X) as μ_X and Var(X) = σ²_X. Then a Taylor series expansion of g(x) around x = μ_X gives

g(x) = g(μ_X) + [∂g(x)/∂x]_{x=μ_X} (x − μ_X) + [∂²g(x)/∂x²]_{x=μ_X} (x − μ_X)²/2 + ··· .
As a result, if we truncate after the linear term and take the expected value of both sides, we obtain E[g(X)] ≈ g(μ_X), which is certainly intuitive and in some cases gives a reasonable approximation. However, if we include the second-order term of the Taylor series, then we have a second-order adjustment for this first-order approximation as follows:

Approximation of E[g(X)]:

E[g(X)] ≈ g(μ_X) + [∂²g(x)/∂x²]_{x=μ_X} (σ²_X/2).

Example 4.24: Given the random variable X with mean μ_X and variance σ²_X, give the second-order approximation to E(e^X).

Solution: Since ∂e^x/∂x = e^x and ∂²e^x/∂x² = e^x, we obtain E(e^X) ≈ e^{μ_X}(1 + σ²_X/2).

Similarly, we can develop an approximation for Var[g(X)] by taking the variance of both sides of the first-order Taylor series expansion of g(x).

Approximation of Var[g(X)]:

Var[g(X)] ≈ ([∂g(x)/∂x]_{x=μ_X})² σ²_X.

Example 4.25: Given the random variable X as in Example 4.24, give an approximate formula for Var[g(X)] = Var(e^X).
Solution: Again ∂e^x/∂x = e^x; thus, Var(e^X) ≈ e^{2μ_X} σ²_X.

These approximations can be extended to nonlinear functions of more than one random variable. Given a set of independent random variables X1, X2, . . . , Xk with means μ1, μ2, . . . , μk and variances σ²_1, σ²_2, . . . , σ²_k, respectively, let Y = h(X1, X2, . . . , Xk) be a nonlinear function; then the following are approximations for E(Y) and Var(Y):

E(Y) ≈ h(μ1, μ2, . . . , μk) + Σ_{i=1}^{k} (σ²_i/2) [∂²h(x1, x2, . . . , xk)/∂x²_i]_{x_i=μ_i, 1≤i≤k},

Var(Y) ≈ Σ_{i=1}^{k} ([∂h(x1, x2, . . . , xk)/∂x_i]_{x_i=μ_i, 1≤i≤k})² σ²_i.

Example 4.26: Consider two independent random variables X and Z with means μ_X and μ_Z and variances σ²_X and σ²_Z, respectively. Consider a random variable Y = X/Z. Give approximations for E(Y) and Var(Y).

Solution: For E(Y), we must use ∂y/∂x = 1/z and ∂y/∂z = −x/z². Thus, ∂²y/∂x² = 0 and ∂²y/∂z² = 2x/z³. As a result,

E(Y) ≈ μ_X/μ_Z + (μ_X/μ³_Z)σ²_Z = (μ_X/μ_Z)(1 + σ²_Z/μ²_Z),

and the approximation for the variance of Y is given by

Var(Y) ≈ (1/μ²_Z)σ²_X + (μ²_X/μ⁴_Z)σ²_Z = (1/μ²_Z)[σ²_X + (μ²_X/μ²_Z)σ²_Z].

4.4 Chebyshev's Theorem

In Section 4.2 we stated that the variance of a random variable tells us something about the variability of the observations about the mean. If a random variable has a small variance or standard deviation, we would expect most of the values to be grouped around the mean. Therefore, the probability that the random variable assumes a value within a certain interval about the mean is greater than for a similar random variable with a larger standard deviation. If we think of probability in terms of area, we would expect a continuous distribution with a large value of σ to indicate a greater variability, and therefore we should expect the area to be more spread out, as in Figure 4.2(a). A distribution with a small standard deviation should have most of its area close to μ, as in Figure 4.2(b).
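Before turning to Chebyshev's theorem, the ratio approximations of Example 4.26 can be checked by simulation. The sketch below picks illustrative moments (our choices, not the book's) and assumes, for illustration only, that X and Z are independent normals with Z well away from 0:

```python
import random

random.seed(1)

# Illustrative moments for Y = X/Z (not from the text)
mu_x, var_x = 10.0, 0.25
mu_z, var_z = 5.0, 0.16

# Approximations derived in Example 4.26
e_approx = (mu_x / mu_z) * (1 + var_z / mu_z**2)
var_approx = var_x / mu_z**2 + (mu_x**2 / mu_z**4) * var_z

# Monte Carlo check; Z's mean is many standard deviations from 0,
# so the simulated ratio is well behaved.
ys = [random.gauss(mu_x, var_x**0.5) / random.gauss(mu_z, var_z**0.5)
      for _ in range(200_000)]
m = sum(ys) / len(ys)
v = sum((y - m) ** 2 for y in ys) / len(ys)
```

With these moments the simulated mean and variance agree with `e_approx` ≈ 2.013 and `var_approx` ≈ 0.036 to well within Monte Carlo error, as the delta-method reasoning predicts.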
[Figure 4.2: Variability of continuous observations about the mean; panel (a) shows a large σ, panel (b) a small σ.]

[Figure 4.3: Variability of discrete observations about the mean.]

We can argue the same way for a discrete distribution. The area in the probability histogram in Figure 4.3(b) is spread out much more than that in Figure 4.3(a), indicating a more variable distribution of measurements or outcomes.

The Russian mathematician P. L. Chebyshev (1821–1894) discovered that the fraction of the area between any two values symmetric about the mean is related to the standard deviation. Since the area under a probability distribution curve or in a probability histogram adds to 1, the area between any two numbers is the probability of the random variable assuming a value between these numbers.

The following theorem, due to Chebyshev, gives a conservative estimate of the probability that a random variable assumes a value within k standard deviations of its mean for any real number k.
Theorem 4.10 (Chebyshev's Theorem): The probability that any random variable X will assume a value within k standard deviations of the mean is at least 1 − 1/k². That is,

P(μ − kσ < X < μ + kσ) ≥ 1 − 1/k².

For k = 2, the theorem states that the random variable X has a probability of at least 1 − 1/2² = 3/4 of falling within two standard deviations of the mean. That is, three-fourths or more of the observations of any distribution lie in the interval μ ± 2σ. Similarly, the theorem says that at least eight-ninths of the observations of any distribution fall in the interval μ ± 3σ.

Example 4.27: A random variable X has a mean μ = 8, a variance σ² = 9, and an unknown probability distribution. Find
(a) P(−4 < X < 20),
(b) P(|X − 8| ≥ 6).

Solution:
(a) P(−4 < X < 20) = P[8 − (4)(3) < X < 8 + (4)(3)] ≥ 15/16.
(b) P(|X − 8| ≥ 6) = 1 − P(|X − 8| < 6) = 1 − P(−6 < X − 8 < 6)
= 1 − P[8 − (2)(3) < X < 8 + (2)(3)] ≤ 1/4.

Chebyshev's theorem holds for any distribution of observations, and for this reason the results are usually weak. The value given by the theorem is a lower bound only. That is, we know that the probability of a random variable falling within two standard deviations of the mean can be no less than 3/4, but we never know how much more it might actually be. Only when the probability distribution is known can we determine exact probabilities. For this reason we call the theorem a distribution-free result. When specific distributions are assumed, as in future chapters, the results will be less conservative. The use of Chebyshev's theorem is relegated to situations where the form of the distribution is unknown.

Exercises

4.53 Referring to Exercise 4.35 on page 127, find the mean and variance of the discrete random variable Z = 3X − 2, when X represents the number of errors per 100 lines of code.
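To see how conservative the Chebyshev bound is when the distribution is actually known, the following sketch compares the exact probability inside μ ± 2σ with the bound 1 − 1/k² for the density f(x) = 6x(1 − x), 0 < x < 1, which appears later in Exercise 4.78 (its cdf, mean, and variance below are computed from that density):

```python
def cdf(x):
    """F(x) = 3x^2 - 2x^3 for the density f(x) = 6x(1 - x) on [0, 1]."""
    x = min(max(x, 0.0), 1.0)
    return 3 * x**2 - 2 * x**3

mu, var = 0.5, 0.05       # mean and variance of this density
sigma = var ** 0.5

k = 2
exact = cdf(mu + k * sigma) - cdf(mu - k * sigma)  # about 0.984
bound = 1 - 1 / k**2                               # Chebyshev: 0.75
```

The exact probability (about 0.98) comfortably exceeds the distribution-free lower bound of 0.75, illustrating why the theorem's estimates are described as weak.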
4.54 Using Theorem 4.5 and Corollary 4.6, find the mean and variance of the random variable Z = 5X + 3, where X has the probability distribution of Exercise 4.36 on page 127.

4.55 Suppose that a grocery store purchases 5 cartons of skim milk at the wholesale price of $1.20 per carton and retails the milk at $1.65 per carton. After the expiration date, the unsold milk is removed from the shelf and the grocer receives a credit from the distributor equal to three-fourths of the wholesale price. If the probability distribution of the random variable X, the number of cartons that are sold from this lot, is

x      0     1     2     3     4     5
f(x)  1/15  2/15  2/15  3/15  4/15  3/15

find the expected profit.

4.56 Repeat Exercise 4.43 on page 127 by applying Theorem 4.5 and Corollary 4.6.

4.57 Let X be a random variable with the following probability distribution:

x     −3    6    9
f(x)  1/6  1/2  1/3
  • 159. / / 138 Chapter 4 Mathematical Expectation Find E(X) and E(X2 ) and then, using these values, evaluate E[(2X + 1)2 ]. 4.58 The total time, measured in units of 100 hours, that a teenager runs her hair dryer over a period of one year is a continuous random variable X that has the density function f(x) = ⎧ ⎨ ⎩ x, 0 x 1, 2 − x, 1 ≤ x 2, 0, elsewhere. Use Theorem 4.6 to evaluate the mean of the random variable Y = 60X2 + 39X, where Y is equal to the number of kilowatt hours expended annually. 4.59 If a random variable X is defined such that E[(X − 1)2 ] = 10 and E[(X − 2)2 ] = 6, find μ and σ2 . 4.60 Suppose that X and Y are independent random variables having the joint probability distribution x f(x, y) 2 4 1 0.10 0.15 y 3 0.20 0.30 5 0.10 0.15 Find (a) E(2X − 3Y ); (b) E(XY ). 4.61 Use Theorem 4.7 to evaluate E(2XY 2 − X2 Y ) for the joint probability distribution shown in Table 3.1 on page 96. 4.62 If X and Y are independent random variables with variances σ2 X = 5 and σ2 Y = 3, find the variance of the random variable Z = −2X + 4Y − 3. 4.63 Repeat Exercise 4.62 if X and Y are not inde- pendent and σXY = 1. 4.64 Suppose that X and Y are independent random variables with probability densities and g(x) = 8 x3 , x 2, 0, elsewhere, and h(y) = 2y, 0 y 1, 0, elsewhere. Find the expected value of Z = XY . 4.65 Let X represent the number that occurs when a red die is tossed and Y the number that occurs when a green die is tossed. Find (a) E(X + Y ); (b) E(X − Y ); (c) E(XY ). 4.66 Let X represent the number that occurs when a green die is tossed and Y the number that occurs when a red die is tossed. Find the variance of the random variable (a) 2X − Y ; (b) X + 3Y − 5. 4.67 If the joint density function of X and Y is given by f(x, y) = 2 7 (x + 2y), 0 x 1, 1 y 2, 0, elsewhere, find the expected value of g(X, Y ) = X Y 3 + X2 Y . 
4.68 The power P in watts which is dissipated in an electric circuit with resistance R is known to be given by P = I2 R, where I is current in amperes and R is a constant fixed at 50 ohms. However, I is a random vari- able with μI = 15 amperes and σ2 I = 0.03 amperes2 . Give numerical approximations to the mean and vari- ance of the power P. 4.69 Consider Review Exercise 3.77 on page 108. The random variables X and Y represent the number of ve- hicles that arrive at two separate street corners during a certain 2-minute period in the day. The joint distri- bution is f(x, y) = 1 4(x+y) 9 16 , for x = 0, 1, 2, . . . and y = 0, 1, 2, . . . . (a) Give E(X), E(Y ), Var(X), and Var(Y ). (b) Consider Z = X + Y , the sum of the two. Find E(Z) and Var(Z). 4.70 Consider Review Exercise 3.64 on page 107. There are two service lines. The random variables X and Y are the proportions of time that line 1 and line 2 are in use, respectively. The joint probability density function for (X, Y ) is given by f(x, y) = 3 2 (x2 + y2 ), 0 ≤ x, y ≤ 1, 0, elsewhere. (a) Determine whether or not X and Y are indepen- dent.
  • 160. / / Review Exercises 139 (b) It is of interest to know something about the pro- portion of Z = X + Y , the sum of the two propor- tions. Find E(X + Y ). Also find E(XY ). (c) Find Var(X), Var(Y ), and Cov(X, Y ). (d) Find Var(X + Y ). 4.71 The length of time Y , in minutes, required to generate a human reflex to tear gas has the density function f(y) = 1 4 e−y/4 , 0 ≤ y ∞, 0, elsewhere. (a) What is the mean time to reflex? (b) Find E(Y 2 ) and Var(Y ). 4.72 A manufacturing company has developed a ma- chine for cleaning carpet that is fuel-efficient because it delivers carpet cleaner so rapidly. Of interest is a random variable Y , the amount in gallons per minute delivered. It is known that the density function is given by f(y) = 1, 7 ≤ y ≤ 8, 0, elsewhere. (a) Sketch the density function. (b) Give E(Y ), E(Y 2 ), and Var(Y ). 4.73 For the situation in Exercise 4.72, compute E(eY ) using Theorem 4.1, that is, by using E(eY ) = 8 7 ey f(y) dy. Then compute E(eY ) not by using f(y), but rather by using the second-order adjustment to the first-order approximation of E(eY ). Comment. 4.74 Consider again the situation of Exercise 4.72. It is required to find Var(eY ). Use Theorems 4.2 and 4.3 and define Z = eY . Thus, use the conditions of Exer- cise 4.73 to find Var(Z) = E(Z2 ) − [E(Z)]2 . Then do it not by using f(y), but rather by using the first-order Taylor series approximation to Var(eY ). Comment! 4.75 An electrical firm manufactures a 100-watt light bulb, which, according to specifications written on the package, has a mean life of 900 hours with a standard deviation of 50 hours. At most, what percentage of the bulbs fail to last even 700 hours? Assume that the distribution is symmetric about the mean. 4.76 Seventy new jobs are opening up at an automo- bile manufacturing plant, and 1000 applicants show up for the 70 positions. 
To select the best 70 from among the applicants, the company gives a test that covers mechanical skill, manual dexterity, and mathematical ability. The mean grade on this test turns out to be 60, and the scores have a standard deviation of 6. Can a person who scores 84 count on getting one of the jobs? [Hint: Use Chebyshev’s theorem.] Assume that the distribution is symmetric about the mean. 4.77 A random variable X has a mean μ = 10 and a variance σ2 = 4. Using Chebyshev’s theorem, find (a) P(|X − 10| ≥ 3); (b) P(|X − 10| 3); (c) P(5 X 15); (d) the value of the constant c such that P(|X − 10| ≥ c) ≤ 0.04. 4.78 Compute P(μ − 2σ X μ + 2σ), where X has the density function f(x) = 6x(1 − x), 0 x 1, 0, elsewhere, and compare with the result given in Chebyshev’s theorem. Review Exercises 4.79 Prove Chebyshev’s theorem. 4.80 Find the covariance of random variables X and Y having the joint probability density function f(x, y) = x + y, 0 x 1, 0 y 1, 0, elsewhere. 4.81 Referring to the random variables whose joint probability density function is given in Exercise 3.47 on page 105, find the average amount of kerosene left in the tank at the end of the day. 4.82 Assume the length X, in minutes, of a particu- lar type of telephone conversation is a random variable
  • 161. / / 140 Chapter 4 Mathematical Expectation with probability density function f(x) = 1 5 e−x/5 , x 0, 0, elsewhere. (a) Determine the mean length E(X) of this type of telephone conversation. (b) Find the variance and standard deviation of X. (c) Find E[(X + 5)2 ]. 4.83 Referring to the random variables whose joint density function is given in Exercise 3.41 on page 105, find the covariance between the weight of the creams and the weight of the toffees in these boxes of choco- lates. 4.84 Referring to the random variables whose joint probability density function is given in Exercise 3.41 on page 105, find the expected weight for the sum of the creams and toffees if one purchased a box of these chocolates. 4.85 Suppose it is known that the life X of a partic- ular compressor, in hours, has the density function f(x) = 1 900 e−x/900 , x 0, 0, elsewhere. (a) Find the mean life of the compressor. (b) Find E(X2 ). (c) Find the variance and standard deviation of the random variable X. 4.86 Referring to the random variables whose joint density function is given in Exercise 3.40 on page 105, (a) find μX and μY ; (b) find E[(X + Y )/2]. 4.87 Show that Cov(aX, bY ) = ab Cov(X, Y ). 4.88 Consider the density function of Review Ex- ercise 4.85. Demonstrate that Chebyshev’s theorem holds for k = 2 and k = 3. 4.89 Consider the joint density function f(x, y) = 16y x3 , x 2, 0 y 1, 0, elsewhere. Compute the correlation coefficient ρXY . 4.90 Consider random variables X and Y of Exercise 4.63 on page 138. Compute ρXY . 4.91 A dealer’s profit, in units of $5000, on a new au- tomobile is a random variable X having density func- tion f(x) = 2(1 − x), 0 ≤ x ≤ 1, 0, elsewhere. (a) Find the variance of the dealer’s profit. (b) Demonstrate that Chebyshev’s theorem holds for k = 2 with the density function above. (c) What is the probability that the profit exceeds $500? 4.92 Consider Exercise 4.10 on page 117. Can it be said that the ratings given by the two experts are in- dependent? 
Explain why or why not. 4.93 A company’s marketing and accounting depart- ments have determined that if the company markets its newly developed product, the contribution of the product to the firm’s profit during the next 6 months will be described by the following: Profit Contribution Probability −$5, 000 $10, 000 $30, 000 0.2 0.5 0.3 What is the company’s expected profit? 4.94 In a support system in the U.S. space program, a single crucial component works only 85% of the time. In order to enhance the reliability of the system, it is decided that 3 components will be installed in parallel such that the system fails only if they all fail. Assume the components act independently and that they are equivalent in the sense that all 3 of them have an 85% success rate. Consider the random variable X as the number of components out of 3 that fail. (a) Write out a probability function for the random variable X. (b) What is E(X) (i.e., the mean number of compo- nents out of 3 that fail)? (c) What is Var(X)? (d) What is the probability that the entire system is successful? (e) What is the probability that the system fails? (f) If the desire is to have the system be successful with probability 0.99, are three components suffi- cient? If not, how many are required? 4.95 In business, it is important to plan and carry out research in order to anticipate what will occur at the end of the year. Research suggests that the profit (loss) spectrum for a certain company, with corresponding probabilities, is as follows:
  • 162. / / Review Exercises 141 Profit Probability −$15, 000 0.05 $0 0.15 $15,000 0.15 $25,000 0.30 $40,000 0.15 $50,000 0.10 $100,000 0.05 $150,000 0.03 $200,000 0.02 (a) What is the expected profit? (b) Give the standard deviation of the profit. 4.96 It is known through data collection and consid- erable research that the amount of time in seconds that a certain employee of a company is late for work is a random variable X with density function f(x) = 3 (4)(503) (502 − x2 ), −50 ≤ x ≤ 50, 0, elsewhere. In other words, he not only is slightly late at times, but also can be early to work. (a) Find the expected value of the time in seconds that he is late. (b) Find E(X2 ). (c) What is the standard deviation of the amount of time he is late? 4.97 A delivery truck travels from point A to point B and back using the same route each day. There are four traffic lights on the route. Let X1 denote the number of red lights the truck encounters going from A to B and X2 denote the number encountered on the return trip. Data collected over a long period suggest that the joint probability distribution for (X1, X2) is given by x2 x1 0 1 2 3 4 0 0.01 0.01 0.03 0.07 0.01 1 0.03 0.05 0.08 0.03 0.02 2 0.03 0.11 0.15 0.01 0.01 3 0.02 0.07 0.10 0.03 0.01 4 0.01 0.06 0.03 0.01 0.01 (a) Give the marginal density of X1. (b) Give the marginal density of X2. (c) Give the conditional density distribution of X1 given X2 = 3. (d) Give E(X1). (e) Give E(X2). (f) Give E(X1 | X2 = 3). (g) Give the standard deviation of X1. 4.98 A convenience store has two separate locations where customers can be checked out as they leave. These locations each have two cash registers and two employees who check out customers. Let X be the number of cash registers being used at a particular time for location 1 and Y the number being used at the same time for location 2. 
The joint probability function is given by y x 0 1 2 0 0.12 0.04 0.04 1 0.08 0.19 0.05 2 0.06 0.12 0.30 (a) Give the marginal density of both X and Y as well as the probability distribution of X given Y = 2. (b) Give E(X) and Var(X). (c) Give E(X | Y = 2) and Var(X | Y = 2). 4.99 Consider a ferry that can carry both buses and cars across a waterway. Each trip costs the owner ap- proximately $10. The fee for cars is $3 and the fee for buses is $8. Let X and Y denote the number of buses and cars, respectively, carried on a given trip. The joint distribution of X and Y is given by x y 0 1 2 0 0.01 0.01 0.03 1 0.03 0.08 0.07 2 0.03 0.06 0.06 3 0.07 0.07 0.13 4 0.12 0.04 0.03 5 0.08 0.06 0.02 Compute the expected profit for the ferry trip. 4.100 As we shall illustrate in Chapter 12, statistical methods associated with linear and nonlinear models are very important. In fact, exponential functions are often used in a wide variety of scientific and engineering problems. Consider a model that is fit to a set of data involving measured values k1 and k2 and a certain re- sponse Y to the measurements. The model postulated is Ŷ = eb0+b1k1+b2k2 , where Ŷ denotes the estimated value of Y, k1 and k2 are fixed values, and b0, b1, and b2 are estimates of constants and hence are random variables. Assume that these random variables are independent and use the approximate formula for the variance of a nonlinear function of more than one variable. Give an expression for Var(Ŷ ). Assume that the means of b0, b1, and b2 are known and are β0, β1, and β2, and assume that the variances of b0, b1, and b2 are known and are σ2 0, σ2 1, and σ2 2.
  • 163. 142 Chapter 4 Mathematical Expectation 4.101 Consider Review Exercise 3.73 on page 108. It involved Y , the proportion of impurities in a batch, and the density function is given by f(y) = 10(1 − y)9 , 0 ≤ y ≤ 1, 0, elsewhere. (a) Find the expected percentage of impurities. (b) Find the expected value of the proportion of quality material (i.e., find E(1 − Y )). (c) Find the variance of the random variable Z = 1−Y . 4.102 Project: Let X = number of hours each stu- dent in the class slept the night before. Create a dis- crete variable by using the following arbitrary intervals: X 3, 3 ≤ X 6, 6 ≤ X 9, and X ≥ 9. (a) Estimate the probability distribution for X. (b) Calculate the estimated mean and variance for X. 4.5 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters The material in this chapter is extremely fundamental in nature, much like that in Chapter 3. Whereas in Chapter 3 we focused on general characteristics of a prob- ability distribution, in this chapter we defined important quantities or parameters that characterize the general nature of the system. The mean of a distribution reflects central tendency, and the variance or standard deviation reflects vari- ability in the system. In addition, covariance reflects the tendency for two random variables to “move together” in a system. These important parameters will remain fundamental to all that follows in this text. The reader should understand that the distribution type is often dictated by the scientific scenario. However, the parameter values need to be estimated from scientific data. For example, in the case of Review Exercise 4.85, the manufac- turer of the compressor may know (material that will be presented in Chapter 6) from experience and knowledge of the type of compressor that the nature of the distribution is as indicated in the exercise. But the mean μ = 900 would be esti- mated from experimentation on the machine. 
Though the parameter value of 900 is given as known here, it will not be known in real-life situations without the use of experimental data. Chapter 9 is dedicated to estimation.
  • 164. Chapter 5 Some Discrete Probability Distributions 5.1 Introduction and Motivation No matter whether a discrete probability distribution is represented graphically by a histogram, in tabular form, or by means of a formula, the behavior of a random variable is described. Often, the observations generated by different statistical ex- periments have the same general type of behavior. Consequently, discrete random variables associated with these experiments can be described by essentially the same probability distribution and therefore can be represented by a single formula. In fact, one needs only a handful of important probability distributions to describe many of the discrete random variables encountered in practice. Such a handful of distributions describe several real-life random phenomena. For instance, in a study involving testing the effectiveness of a new drug, the num- ber of cured patients among all the patients who use the drug approximately follows a binomial distribution (Section 5.2). In an industrial example, when a sample of items selected from a batch of production is tested, the number of defective items in the sample usually can be modeled as a hypergeometric random variable (Sec- tion 5.3). In a statistical quality control problem, the experimenter will signal a shift of the process mean when observational data exceed certain limits. The num- ber of samples required to produce a false alarm follows a geometric distribution which is a special case of the negative binomial distribution (Section 5.4). On the other hand, the number of white cells from a fixed amount of an individual’s blood sample is usually random and may be described by a Poisson distribution (Section 5.5). In this chapter, we present these commonly used distributions with various examples. 5.2 Binomial and Multinomial Distributions An experiment often consists of repeated trials, each with two possible outcomes that may be labeled success or failure. 
The most obvious application deals with the testing of items as they come off an assembly line, where each trial may indicate a defective or a nondefective item. We may choose to define either outcome as a success. The process is referred to as a Bernoulli process. Each trial is called a Bernoulli trial. Observe, for example, that if one were drawing cards from a deck, the probabilities for repeated trials change if the cards are not replaced. That is, the probability of selecting a heart on the first draw is 1/4, but on the second draw it is a conditional probability having a value of 13/51 or 12/51, depending on whether a heart appeared on the first draw; this, then, would no longer be considered a set of Bernoulli trials.

The Bernoulli Process

Strictly speaking, the Bernoulli process must possess the following properties:

1. The experiment consists of repeated trials.
2. Each trial results in an outcome that may be classified as a success or a failure.
3. The probability of success, denoted by p, remains constant from trial to trial.
4. The repeated trials are independent.

Consider the set of Bernoulli trials where three items are selected at random from a manufacturing process, inspected, and classified as defective or nondefective. A defective item is designated a success. The number of successes is a random variable X assuming integral values from 0 through 3. The eight possible outcomes and the corresponding values of X are

Outcome:  NNN  NDN  NND  DNN  NDD  DND  DDN  DDD
x:          0    1    1    1    2    2    2    3

Since the items are selected independently and we assume that the process produces 25% defectives, we have

P(NDN) = P(N)P(D)P(N) = (3/4)(1/4)(3/4) = 9/64.

Similar calculations yield the probabilities for the other possible outcomes. The probability distribution of X is therefore

x     |   0      1      2      3
f(x)  | 27/64  27/64   9/64   1/64

Binomial Distribution: The number X of successes in n Bernoulli trials is called a binomial random variable.
The probability distribution of this discrete random variable is called the binomial distribution, and its values will be denoted by b(x; n, p) since they depend on the number of trials and the probability of a success on a given trial. Thus, for the probability distribution of X, the number of defectives is

P(X = 2) = f(2) = b(2; 3, 1/4) = 9/64.
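The pmf table above can be reproduced by brute-force enumeration of the eight outcomes. This is an illustrative sketch, not part of the text; it uses exact fractions so the probabilities match the table:

```python
from itertools import product
from fractions import Fraction

# Enumerate the 8 outcomes of three Bernoulli trials (D = defective with
# probability 1/4, N = nondefective with probability 3/4) and tally the
# probability of each value of X = number of defectives.
p = Fraction(1, 4)
pmf = {x: Fraction(0) for x in range(4)}
for outcome in product("DN", repeat=3):
    prob = Fraction(1)
    for trial in outcome:
        prob *= p if trial == "D" else 1 - p
    pmf[outcome.count("D")] += prob

for x in range(4):
    print(x, pmf[x])  # matches the table: 27/64, 27/64, 9/64, 1/64
```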
Let us now generalize the above illustration to yield a formula for b(x; n, p). That is, we wish to find a formula that gives the probability of x successes in n trials for a binomial experiment. First, consider the probability of x successes and n − x failures in a specified order. Since the trials are independent, we can multiply all the probabilities corresponding to the different outcomes. Each success occurs with probability p and each failure with probability q = 1 − p. Therefore, the probability for the specified order is p^x q^{n−x}. We must now determine the total number of sample points in the experiment that have x successes and n − x failures. This number is equal to the number of partitions of n outcomes into two groups with x in one group and n − x in the other and is written \binom{n}{x}, as introduced in Section 2.3. Because these partitions are mutually exclusive, we add the probabilities of all the different partitions to obtain the general formula, or simply multiply p^x q^{n−x} by \binom{n}{x}.

Binomial Distribution: A Bernoulli trial can result in a success with probability p and a failure with probability q = 1 − p. Then the probability distribution of the binomial random variable X, the number of successes in n independent trials, is

b(x; n, p) = \binom{n}{x} p^x q^{n−x},  x = 0, 1, 2, . . . , n.

Note that when n = 3 and p = 1/4, the probability distribution of X, the number of defectives, may be written as

b(x; 3, 1/4) = \binom{3}{x} (1/4)^x (3/4)^{3−x},  x = 0, 1, 2, 3,

rather than in the tabular form on page 144.

Example 5.1: The probability that a certain kind of component will survive a shock test is 3/4. Find the probability that exactly 2 of the next 4 components tested survive.

Solution: Assuming that the tests are independent and p = 3/4 for each of the 4 tests, we obtain

b(2; 4, 3/4) = \binom{4}{2} (3/4)^2 (1/4)^2 = \frac{4!}{2! 2!} \cdot \frac{3^2}{4^4} = 27/128.

Where Does the Name Binomial Come From?
The binomial distribution derives its name from the fact that the n + 1 terms in the binomial expansion of (q + p)^n correspond to the various values of b(x; n, p) for x = 0, 1, 2, . . . , n. That is,

(q + p)^n = \binom{n}{0} q^n + \binom{n}{1} p q^{n−1} + \binom{n}{2} p^2 q^{n−2} + · · · + \binom{n}{n} p^n
          = b(0; n, p) + b(1; n, p) + b(2; n, p) + · · · + b(n; n, p).

Since p + q = 1, we see that

\sum_{x=0}^{n} b(x; n, p) = 1,
a condition that must hold for any probability distribution. Frequently, we are interested in problems where it is necessary to find P(X < r) or P(a ≤ X ≤ b). Binomial sums

B(r; n, p) = \sum_{x=0}^{r} b(x; n, p)

are given in Table A.1 of the Appendix for n = 1, 2, . . . , 20 and selected values of p from 0.1 to 0.9. We illustrate the use of Table A.1 with the following example.

Example 5.2: The probability that a patient recovers from a rare blood disease is 0.4. If 15 people are known to have contracted this disease, what is the probability that (a) at least 10 survive, (b) from 3 to 8 survive, and (c) exactly 5 survive?

Solution: Let X be the number of people who survive.

(a) P(X ≥ 10) = 1 − P(X < 10) = 1 − \sum_{x=0}^{9} b(x; 15, 0.4) = 1 − 0.9662 = 0.0338

(b) P(3 ≤ X ≤ 8) = \sum_{x=3}^{8} b(x; 15, 0.4) = \sum_{x=0}^{8} b(x; 15, 0.4) − \sum_{x=0}^{2} b(x; 15, 0.4) = 0.9050 − 0.0271 = 0.8779

(c) P(X = 5) = b(5; 15, 0.4) = \sum_{x=0}^{5} b(x; 15, 0.4) − \sum_{x=0}^{4} b(x; 15, 0.4) = 0.4032 − 0.2173 = 0.1859

Example 5.3: A large chain retailer purchases a certain kind of electronic device from a manufacturer. The manufacturer indicates that the defective rate of the device is 3%.

(a) The inspector randomly picks 20 items from a shipment. What is the probability that there will be at least one defective item among these 20?

(b) Suppose that the retailer receives 10 shipments in a month and the inspector randomly tests 20 devices per shipment. What is the probability that there will be exactly 3 shipments each containing at least one defective device among the 20 that are selected and tested from the shipment?

Solution: (a) Denote by X the number of defective devices among the 20. Then X follows a b(x; 20, 0.03) distribution. Hence,

P(X ≥ 1) = 1 − P(X = 0) = 1 − b(0; 20, 0.03) = 1 − (0.03)^0 (1 − 0.03)^{20−0} = 0.4562.

(b) In this case, each shipment can either contain at least one defective item or not.
Hence, testing of each shipment can be viewed as a Bernoulli trial with p = 0.4562 from part (a). Assuming independence from shipment to shipment
and denoting by Y the number of shipments containing at least one defective item, Y follows another binomial distribution b(y; 10, 0.4562). Therefore,

P(Y = 3) = \binom{10}{3} (0.4562)^3 (1 − 0.4562)^7 = 0.1602.

Areas of Application

From Examples 5.1 through 5.3, it should be clear that the binomial distribution finds applications in many scientific fields. An industrial engineer is keenly interested in the "proportion defective" in an industrial process. Often, quality control measures and sampling schemes for processes are based on the binomial distribution. This distribution applies to any industrial situation where an outcome of a process is dichotomous and the results of the process are independent, with the probability of success being constant from trial to trial. The binomial distribution is also used extensively for medical and military applications. In both fields, a success or failure result is important. For example, "cure" or "no cure" is important in pharmaceutical work, and "hit" or "miss" is often the interpretation of the result of firing a guided missile.

Since the probability distribution of any binomial random variable depends only on the values assumed by the parameters n, p, and q, it would seem reasonable to assume that the mean and variance of a binomial random variable also depend on the values assumed by these parameters. Indeed, this is true, and in the proof of Theorem 5.1 we derive general formulas that can be used to compute the mean and variance of any binomial random variable as functions of n, p, and q.

Theorem 5.1: The mean and variance of the binomial distribution b(x; n, p) are

μ = np and σ^2 = npq.

Proof: Let the outcome on the jth trial be represented by a Bernoulli random variable I_j, which assumes the values 0 and 1 with probabilities q and p, respectively.
Therefore, in a binomial experiment the number of successes can be written as the sum of the n independent indicator variables. Hence,

X = I_1 + I_2 + · · · + I_n.

The mean of any I_j is E(I_j) = (0)(q) + (1)(p) = p. Therefore, using Corollary 4.4 on page 131, the mean of the binomial distribution is

μ = E(X) = E(I_1) + E(I_2) + · · · + E(I_n) = p + p + · · · + p (n terms) = np.

The variance of any I_j is

σ^2_{I_j} = E(I_j^2) − p^2 = (0)^2(q) + (1)^2(p) − p^2 = p(1 − p) = pq.

Extending Corollary 4.11 to the case of n independent Bernoulli variables gives the variance of the binomial distribution as

σ^2_X = σ^2_{I_1} + σ^2_{I_2} + · · · + σ^2_{I_n} = pq + pq + · · · + pq (n terms) = npq.
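As a quick numerical check of Theorem 5.1 (this sketch is not part of the text), the mean and variance computed directly from the pmf b(x; n, p) agree with np and npq; the parameters here are those of Example 5.2:

```python
from math import comb, isclose

def binom_pmf(x, n, p):
    """b(x; n, p): probability of exactly x successes in n Bernoulli trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 15, 0.4  # the setting of Example 5.2
mean = sum(x * binom_pmf(x, n, p) for x in range(n + 1))
var = sum((x - mean) ** 2 * binom_pmf(x, n, p) for x in range(n + 1))

assert isclose(mean, n * p)           # mu      = np  = 6
assert isclose(var, n * p * (1 - p))  # sigma^2 = npq = 3.6
print(mean, var)
```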
Example 5.4: It is conjectured that an impurity exists in 30% of all drinking wells in a certain rural community. In order to gain some insight into the true extent of the problem, it is determined that some testing is necessary. It is too expensive to test all of the wells in the area, so 10 are randomly selected for testing.

(a) Using the binomial distribution, what is the probability that exactly 3 wells have the impurity, assuming that the conjecture is correct?

(b) What is the probability that more than 3 wells are impure?

Solution: (a) We require

b(3; 10, 0.3) = \sum_{x=0}^{3} b(x; 10, 0.3) − \sum_{x=0}^{2} b(x; 10, 0.3) = 0.6496 − 0.3828 = 0.2668.

(b) In this case, P(X > 3) = 1 − 0.6496 = 0.3504.

Example 5.5: Find the mean and variance of the binomial random variable of Example 5.2, and then use Chebyshev's theorem (on page 137) to interpret the interval μ ± 2σ.

Solution: Since Example 5.2 was a binomial experiment with n = 15 and p = 0.4, by Theorem 5.1, we have

μ = (15)(0.4) = 6 and σ^2 = (15)(0.4)(0.6) = 3.6.

Taking the square root of 3.6, we find that σ = 1.897. Hence, the required interval is 6 ± (2)(1.897), or from 2.206 to 9.794. Chebyshev's theorem states that the number of recoveries among 15 patients who contracted the disease has a probability of at least 3/4 of falling between 2.206 and 9.794 or, because the data are discrete, between 2 and 10 inclusive.

There are situations in which the computation of binomial probabilities may allow us to draw a scientific inference about a population after data are collected. An illustration is given in the next example.

Example 5.6: Consider the situation of Example 5.4. The notion that 30% of the wells are impure is merely a conjecture put forth by the area water board. Suppose 10 wells are randomly selected and 6 are found to contain the impurity. What does this imply about the conjecture? Use a probability statement.
Solution: We must first ask: "If the conjecture is correct, is it likely that we would find 6 or more impure wells?"

P(X ≥ 6) = \sum_{x=0}^{10} b(x; 10, 0.3) − \sum_{x=0}^{5} b(x; 10, 0.3) = 1 − 0.9527 = 0.0473.

As a result, it is very unlikely (4.7% chance) that 6 or more wells would be found impure if only 30% of all are impure. This casts considerable doubt on the conjecture and suggests that the impurity problem is much more severe.

As the reader should realize by now, in many applications there are more than two possible outcomes. To borrow an example from the field of genetics, the color of guinea pigs produced as offspring may be red, black, or white. Often the "defective" or "not defective" dichotomy is truly an oversimplification in engineering situations. Indeed, there are often more than two categories that characterize items or parts coming off an assembly line.
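The tail probability of Example 5.6 can also be verified directly rather than through Table A.1; a minimal sketch, not part of the text:

```python
from math import comb

def binom_pmf(x, n, p):
    # b(x; n, p): binomial probability of exactly x successes in n trials
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Example 5.6: P(X >= 6) for n = 10 wells, p = 0.3 impure under the conjecture
p_at_least_6 = sum(binom_pmf(x, 10, 0.3) for x in range(6, 11))
print(round(p_at_least_6, 4))  # 0.0473
```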
Multinomial Experiments and the Multinomial Distribution

The binomial experiment becomes a multinomial experiment if we let each trial have more than two possible outcomes. The classification of a manufactured product as being light, heavy, or acceptable and the recording of accidents at a certain intersection according to the day of the week constitute multinomial experiments. The drawing of a card from a deck with replacement is also a multinomial experiment if the 4 suits are the outcomes of interest.

In general, if a given trial can result in any one of k possible outcomes E1, E2, . . . , Ek with probabilities p1, p2, . . . , pk, then the multinomial distribution will give the probability that E1 occurs x1 times, E2 occurs x2 times, . . . , and Ek occurs xk times in n independent trials, where

x1 + x2 + · · · + xk = n.

We shall denote this joint probability distribution by f(x1, x2, . . . , xk; p1, p2, . . . , pk, n). Clearly, p1 + p2 + · · · + pk = 1, since the result of each trial must be one of the k possible outcomes.

To derive the general formula, we proceed as in the binomial case. Since the trials are independent, any specified order yielding x1 outcomes for E1, x2 for E2, . . . , xk for Ek will occur with probability p1^{x1} p2^{x2} · · · pk^{xk}. The total number of orders yielding similar outcomes for the n trials is equal to the number of partitions of n items into k groups with x1 in the first group, x2 in the second group, . . . , and xk in the kth group. This can be done in

\binom{n}{x1, x2, . . . , xk} = \frac{n!}{x1! x2! · · · xk!}

ways. Since all the partitions are mutually exclusive and occur with equal probability, we obtain the multinomial distribution by multiplying the probability for a specified order by the total number of partitions.

Multinomial Distribution: If a given trial can result in the k outcomes E1, E2, . . . , Ek with probabilities p1, p2, . . .
, pk, then the probability distribution of the random variables X1, X2, . . . , Xk, representing the number of occurrences for E1, E2, . . . , Ek in n independent trials, is

f(x1, x2, . . . , xk; p1, p2, . . . , pk, n) = \binom{n}{x1, x2, . . . , xk} p1^{x1} p2^{x2} · · · pk^{xk},

with \sum_{i=1}^{k} xi = n and \sum_{i=1}^{k} pi = 1.

The multinomial distribution derives its name from the fact that the terms of the multinomial expansion of (p1 + p2 + · · · + pk)^n correspond to all the possible values of f(x1, x2, . . . , xk; p1, p2, . . . , pk, n).
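The multinomial pmf above can be sketched directly from the formula; this illustration (not from the text) checks it against the three-runway probabilities used in Example 5.7:

```python
from math import factorial

def multinomial_pmf(counts, probs):
    """f(x1,...,xk; p1,...,pk, n): probability of the given outcome counts
    in n = sum(counts) independent trials."""
    n = sum(counts)
    coef = factorial(n)
    for x in counts:
        coef //= factorial(x)  # multinomial coefficient n!/(x1! x2! ... xk!)
    prob = float(coef)
    for x, p in zip(counts, probs):
        prob *= p**x
    return prob

# Example 5.7: 2, 1, 3 arrivals on runways with p = 2/9, 1/6, 11/18
print(round(multinomial_pmf([2, 1, 3], [2 / 9, 1 / 6, 11 / 18]), 4))  # 0.1127
```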
Example 5.7: The complexity of arrivals and departures of planes at an airport is such that computer simulation is often used to model the "ideal" conditions. For a certain airport with three runways, it is known that in the ideal setting the following are the probabilities that the individual runways are accessed by a randomly arriving commercial jet:

Runway 1: p1 = 2/9, Runway 2: p2 = 1/6, Runway 3: p3 = 11/18.

What is the probability that 6 randomly arriving airplanes are distributed in the following fashion?

Runway 1: 2 airplanes, Runway 2: 1 airplane, Runway 3: 3 airplanes

Solution: Using the multinomial distribution, we have

f(2, 1, 3; 2/9, 1/6, 11/18, 6) = \binom{6}{2, 1, 3} (2/9)^2 (1/6)^1 (11/18)^3 = \frac{6!}{2! 1! 3!} \cdot \frac{2^2}{9^2} \cdot \frac{1}{6} \cdot \frac{11^3}{18^3} = 0.1127.

Exercises

5.1 A random variable X that assumes the values x1, x2, . . . , xk is called a discrete uniform random variable if its probability mass function is f(x) = 1/k for all of x1, x2, . . . , xk and 0 otherwise. Find the mean and variance of X.

5.2 Twelve people are given two identical speakers, which they are asked to listen to for differences, if any. Suppose that these people answer simply by guessing. Find the probability that three people claim to have heard a difference between the two speakers.

5.3 An employee is selected from a staff of 10 to supervise a certain project by selecting a tag at random from a box containing 10 tags numbered from 1 to 10. Find the formula for the probability distribution of X representing the number on the tag that is drawn. What is the probability that the number drawn is less than 4?

5.4 In a certain city district, the need for money to buy drugs is stated as the reason for 75% of all thefts. Find the probability that among the next 5 theft cases reported in this district,
(a) exactly 2 resulted from the need for money to buy drugs;
(b) at most 3 resulted from the need for money to buy drugs.
5.5 According to Chemical Engineering Progress (November 1990), approximately 30% of all pipework failures in chemical plants are caused by operator error.
(a) What is the probability that out of the next 20 pipework failures at least 10 are due to operator error?
(b) What is the probability that no more than 4 out of 20 such failures are due to operator error?
(c) Suppose, for a particular plant, that out of the random sample of 20 such failures, exactly 5 are due to operator error. Do you feel that the 30% figure stated above applies to this plant? Comment.

5.6 According to a survey by the Administrative Management Society, one-half of U.S. companies give employees 4 weeks of vacation after they have been with the company for 15 years. Find the probability that among 6 companies surveyed at random, the number that give employees 4 weeks of vacation after 15 years of employment is
(a) anywhere from 2 to 5;
(b) fewer than 3.

5.7 One prominent physician claims that 70% of those with lung cancer are chain smokers. If his assertion is correct,
(a) find the probability that of 10 such patients
recently admitted to a hospital, fewer than half are chain smokers;
(b) find the probability that of 20 such patients recently admitted to a hospital, fewer than half are chain smokers.

5.8 According to a study published by a group of University of Massachusetts sociologists, approximately 60% of the Valium users in the state of Massachusetts first took Valium for psychological problems. Find the probability that among the next 8 users from this state who are interviewed,
(a) exactly 3 began taking Valium for psychological problems;
(b) at least 5 began taking Valium for problems that were not psychological.

5.9 In testing a certain kind of truck tire over rugged terrain, it is found that 25% of the trucks fail to complete the test run without a blowout. Of the next 15 trucks tested, find the probability that
(a) from 3 to 6 have blowouts;
(b) fewer than 4 have blowouts;
(c) more than 5 have blowouts.

5.10 A nationwide survey of college seniors by the University of Michigan revealed that almost 70% disapprove of daily pot smoking, according to a report in Parade. If 12 seniors are selected at random and asked their opinion, find the probability that the number who disapprove of smoking pot daily is
(a) anywhere from 7 to 9;
(b) at most 5;
(c) not less than 8.

5.11 The probability that a patient recovers from a delicate heart operation is 0.9. What is the probability that exactly 5 of the next 7 patients having this operation survive?

5.12 A traffic control engineer reports that 75% of the vehicles passing through a checkpoint are from within the state. What is the probability that fewer than 4 of the next 9 vehicles are from out of state?
5.13 A national study that examined attitudes about antidepressants revealed that approximately 70% of respondents believe "antidepressants do not really cure anything, they just cover up the real trouble." According to this study, what is the probability that at least 3 of the next 5 people selected at random will hold this opinion?

5.14 The percentage of wins for the Chicago Bulls basketball team going into the playoffs for the 1996–97 season was 87.7. Round the 87.7 to 90 in order to use Table A.1.
(a) What is the probability that the Bulls sweep (4-0) the initial best-of-7 playoff series?
(b) What is the probability that the Bulls win the initial best-of-7 playoff series?
(c) What very important assumption is made in answering parts (a) and (b)?

5.15 It is known that 60% of mice inoculated with a serum are protected from a certain disease. If 5 mice are inoculated, find the probability that
(a) none contracts the disease;
(b) fewer than 2 contract the disease;
(c) more than 3 contract the disease.

5.16 Suppose that airplane engines operate independently and fail with probability equal to 0.4. Assuming that a plane makes a safe flight if at least one-half of its engines run, determine whether a 4-engine plane or a 2-engine plane has the higher probability for a successful flight.

5.17 If X represents the number of people in Exercise 5.13 who believe that antidepressants do not cure but only cover up the real problem, find the mean and variance of X when 5 people are selected at random.

5.18 (a) In Exercise 5.9, how many of the 15 trucks would you expect to have blowouts?
(b) What is the variance of the number of blowouts experienced by the 15 trucks? What does that mean?

5.19 As a student drives to school, he encounters a traffic signal. This traffic signal stays green for 35 seconds, yellow for 5 seconds, and red for 60 seconds. Assume that the student goes to school each weekday between 8:00 and 8:30 a.m.
Let X1 be the number of times he encounters a green light, X2 be the number of times he encounters a yellow light, and X3 be the number of times he encounters a red light. Find the joint distribution of X1, X2, and X3.

5.20 According to USA Today (March 18, 1997), of 4 million workers in the general workforce, 5.8% tested positive for drugs. Of those testing positive, 22.5% were cocaine users and 54.4% marijuana users.
(a) What is the probability that of 10 workers testing positive, 2 are cocaine users, 5 are marijuana users, and 3 are users of other drugs?
(b) What is the probability that of 10 workers testing positive, all are marijuana users?
(c) What is the probability that of 10 workers testing positive, none is a cocaine user?

5.21 The surface of a circular dart board has a small center circle called the bull's-eye and 20 pie-shaped regions numbered from 1 to 20. Each of the pie-shaped regions is further divided into three parts such that a person throwing a dart that lands in a specific region scores the value of the number, double the number, or triple the number, depending on which of the three parts the dart hits. If a person hits the bull's-eye with probability 0.01, hits a double with probability 0.10, hits a triple with probability 0.05, and misses the dart board with probability 0.02, what is the probability that 7 throws will result in no bull's-eyes, no triples, a double twice, and a complete miss once?

5.22 According to a genetics theory, a certain cross of guinea pigs will result in red, black, and white offspring in the ratio 8:4:4. Find the probability that among 8 offspring, 5 will be red, 2 black, and 1 white.

5.23 The probabilities are 0.4, 0.2, 0.3, and 0.1, respectively, that a delegate to a certain convention arrived by air, bus, automobile, or train. What is the probability that among 9 delegates randomly selected at this convention, 3 arrived by air, 3 arrived by bus, 1 arrived by automobile, and 2 arrived by train?

5.24 A safety engineer claims that only 40% of all workers wear safety helmets when they eat lunch at the workplace. Assuming that this claim is right, find the probability that 4 of 6 workers randomly chosen will be wearing their helmets while having lunch at the workplace.

5.25 Suppose that for a very large shipment of integrated-circuit chips, the probability of failure for any one chip is 0.10. Assuming that the assumptions underlying the binomial distributions are met, find the probability that at most 3 chips fail in a random sample of 20.
5.26 Assuming that 6 in 10 automobile accidents are due mainly to a speed violation, find the probability that among 8 automobile accidents, 6 will be due mainly to a speed violation
(a) by using the formula for the binomial distribution;
(b) by using Table A.1.

5.27 If the probability that a fluorescent light has a useful life of at least 800 hours is 0.9, find the probabilities that among 20 such lights
(a) exactly 18 will have a useful life of at least 800 hours;
(b) at least 15 will have a useful life of at least 800 hours;
(c) at least 2 will not have a useful life of at least 800 hours.

5.28 A manufacturer knows that on average 20% of the electric toasters produced require repairs within 1 year after they are sold. When 20 toasters are randomly selected, find appropriate numbers x and y such that
(a) the probability that at least x of them will require repairs is less than 0.5;
(b) the probability that at least y of them will not require repairs is greater than 0.8.

5.3 Hypergeometric Distribution

The simplest way to view the distinction between the binomial distribution of Section 5.2 and the hypergeometric distribution is to note the way the sampling is done. The types of applications for the hypergeometric are very similar to those for the binomial distribution. We are interested in computing probabilities for the number of observations that fall into a particular category. But in the case of the binomial distribution, independence among trials is required. As a result, if that distribution is applied to, say, sampling from a lot of items (deck of cards, batch of production items), the sampling must be done with replacement of each item after it is observed. On the other hand, the hypergeometric distribution does not require independence and is based on sampling done without replacement. Applications for the hypergeometric distribution are found in many areas, with heavy use in acceptance sampling, electronic testing, and quality assurance.
Obviously, in many of these fields, testing is done at the expense of the item being tested. That is, the item is destroyed and hence cannot be replaced in the sample. Thus, sampling without replacement is necessary. A simple example with playing
cards will serve as our first illustration. If we wish to find the probability of observing 3 red cards in 5 draws from an ordinary deck of 52 playing cards, the binomial distribution of Section 5.2 does not apply unless each card is replaced and the deck reshuffled before the next draw is made. To solve the problem of sampling without replacement, let us restate the problem. If 5 cards are drawn at random, we are interested in the probability of selecting 3 red cards from the 26 available in the deck and 2 black cards from the 26 available in the deck. There are \binom{26}{3} ways of selecting 3 red cards, and for each of these ways we can choose 2 black cards in \binom{26}{2} ways. Therefore, the total number of ways to select 3 red and 2 black cards in 5 draws is the product \binom{26}{3}\binom{26}{2}. The total number of ways to select any 5 cards from the 52 that are available is \binom{52}{5}. Hence, the probability of selecting 5 cards without replacement of which 3 are red and 2 are black is given by

\frac{\binom{26}{3}\binom{26}{2}}{\binom{52}{5}} = \frac{(26!/(3! 23!))(26!/(2! 24!))}{52!/(5! 47!)} = 0.3251.

In general, we are interested in the probability of selecting x successes from the k items labeled successes and n − x failures from the N − k items labeled failures when a random sample of size n is selected from N items. This is known as a hypergeometric experiment, that is, one that possesses the following two properties:

1. A random sample of size n is selected without replacement from N items.
2. Of the N items, k may be classified as successes and N − k are classified as failures.

The number X of successes of a hypergeometric experiment is called a hypergeometric random variable. Accordingly, the probability distribution of the hypergeometric variable is called the hypergeometric distribution, and its values are denoted by h(x; N, n, k), since they depend on the number of successes k in the set N from which we select n items.
Hypergeometric Distribution in Acceptance Sampling

Like the binomial distribution, the hypergeometric distribution finds applications in acceptance sampling, where lots of materials or parts are sampled in order to determine whether or not the entire lot is accepted.

Example 5.8: A particular part that is used as an injection device is sold in lots of 10. The producer deems a lot acceptable if no more than one defective is in the lot. A sampling plan involves random sampling and testing 3 of the parts out of 10. If none of the 3 is defective, the lot is accepted. Comment on the utility of this plan.

Solution: Let us assume that the lot is truly unacceptable (i.e., that 2 out of 10 parts are defective). The probability that the sampling plan finds the lot acceptable is

P(X = 0) = \frac{\binom{2}{0}\binom{8}{3}}{\binom{10}{3}} = 0.467.
Thus, if the lot is truly unacceptable, with 2 defective parts, this sampling plan will allow acceptance roughly 47% of the time. As a result, this plan should be considered faulty.

Let us now generalize in order to find a formula for h(x; N, n, k). The total number of samples of size n chosen from N items is \binom{N}{n}. These samples are assumed to be equally likely. There are \binom{k}{x} ways of selecting x successes from the k that are available, and for each of these ways we can choose the n − x failures in \binom{N−k}{n−x} ways. Thus, the total number of favorable samples among the \binom{N}{n} possible samples is given by \binom{k}{x}\binom{N−k}{n−x}. Hence, we have the following definition.

Hypergeometric Distribution: The probability distribution of the hypergeometric random variable X, the number of successes in a random sample of size n selected from N items of which k are labeled success and N − k labeled failure, is

h(x; N, n, k) = \frac{\binom{k}{x}\binom{N−k}{n−x}}{\binom{N}{n}},  max{0, n − (N − k)} ≤ x ≤ min{n, k}.

The range of x can be determined by the three binomial coefficients in the definition, where x and n − x can be no more than k and N − k, respectively, and neither can be less than 0. Usually, when both k (the number of successes) and N − k (the number of failures) are larger than the sample size n, the range of a hypergeometric random variable will be x = 0, 1, . . . , n.

Example 5.9: Lots of 40 components each are deemed unacceptable if they contain 3 or more defectives. The procedure for sampling a lot is to select 5 components at random and to reject the lot if a defective is found. What is the probability that exactly 1 defective is found in the sample if there are 3 defectives in the entire lot?

Solution: Using the hypergeometric distribution with n = 5, N = 40, k = 3, and x = 1, we find the probability of obtaining 1 defective to be

h(1; 40, 5, 3) = \frac{\binom{3}{1}\binom{37}{4}}{\binom{40}{5}} = 0.3011.
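The acceptance-sampling probabilities of Examples 5.8 and 5.9 can be checked with a direct implementation of h(x; N, n, k); an illustrative sketch, not part of the text:

```python
from math import comb

def hypergeom_pmf(x, N, n, k):
    """h(x; N, n, k): probability of x successes in a sample of size n drawn
    without replacement from N items, k of which are labeled successes."""
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

print(round(hypergeom_pmf(0, 10, 3, 2), 3))   # Example 5.8: 0.467
print(round(hypergeom_pmf(1, 40, 5, 3), 4))   # Example 5.9: 0.3011
```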
Once again, this plan is not desirable since it detects a bad lot (3 defectives) only about 30% of the time.

Theorem 5.2: The mean and variance of the hypergeometric distribution h(x; N, n, k) are

μ = \frac{nk}{N} and σ^2 = \frac{N − n}{N − 1} · n · \frac{k}{N} \left(1 − \frac{k}{N}\right).

The proof for the mean is shown in Appendix A.24.

Example 5.10: Let us now reinvestigate Example 3.4 on page 83. The purpose of this example was to illustrate the notion of a random variable and the corresponding sample space. In the example, we have a lot of 100 items of which 12 are defective. What is the probability that in a sample of 10, 3 are defective?
Solution: Using the hypergeometric probability function, we have

h(3; 100, 10, 12) = \frac{\binom{12}{3}\binom{88}{7}}{\binom{100}{10}} = 0.08.

Example 5.11: Find the mean and variance of the random variable of Example 5.9 and then use Chebyshev's theorem to interpret the interval μ ± 2σ.

Solution: Since Example 5.9 was a hypergeometric experiment with N = 40, n = 5, and k = 3, by Theorem 5.2, we have

μ = \frac{(5)(3)}{40} = 3/8 = 0.375 and σ^2 = \left(\frac{40 − 5}{39}\right)(5)\left(\frac{3}{40}\right)\left(1 − \frac{3}{40}\right) = 0.3113.

Taking the square root of 0.3113, we find that σ = 0.558. Hence, the required interval is 0.375 ± (2)(0.558), or from −0.741 to 1.491. Chebyshev's theorem states that the number of defectives obtained when 5 components are selected at random from a lot of 40 components of which 3 are defective has a probability of at least 3/4 of falling between −0.741 and 1.491. That is, at least three-fourths of the time, the 5 components include fewer than 2 defectives.

Relationship to the Binomial Distribution

In this chapter, we discuss several important discrete distributions that have wide applicability. Many of these distributions relate nicely to each other. The beginning student should gain a clear understanding of these relationships. There is an interesting relationship between the hypergeometric and the binomial distribution. As one might expect, if n is small compared to N, the nature of the N items changes very little in each draw. So a binomial distribution can be used to approximate the hypergeometric distribution when n is small compared to N. In fact, as a rule of thumb, the approximation is good when n/N ≤ 0.05. Thus, the quantity k/N plays the role of the binomial parameter p. As a result, the binomial distribution may be viewed as a large-population version of the hypergeometric distribution. The mean and variance then come from the formulas

μ = np = \frac{nk}{N} and σ^2 = npq = n · \frac{k}{N} \left(1 − \frac{k}{N}\right).
Comparing these formulas with those of Theorem 5.2, we see that the mean is the same but the variance differs by a correction factor of (N − n)/(N − 1), which is negligible when n is small relative to N. Example 5.12: A manufacturer of automobile tires reports that among a shipment of 5000 sent to a local distributor, 1000 are slightly blemished. If one purchases 10 of these tires at random from the distributor, what is the probability that exactly 3 are blemished?
Solution: Since N = 5000 is large relative to the sample size n = 10, we shall approximate the desired probability by using the binomial distribution. The probability of obtaining a blemished tire is 0.2. Therefore, the probability of obtaining exactly 3 blemished tires is h(3; 5000, 10, 1000) ≈ b(3; 10, 0.2) = 0.8791 − 0.6778 = 0.2013. On the other hand, the exact probability is h(3; 5000, 10, 1000) = 0.2015. The hypergeometric distribution can be extended to treat the case where the N items can be partitioned into k cells A1, A2, . . . , Ak with a1 elements in the first cell, a2 elements in the second cell, . . . , ak elements in the kth cell. We are now interested in the probability that a random sample of size n yields x1 elements from A1, x2 elements from A2, . . . , and xk elements from Ak. Let us represent this probability by f(x1, x2, . . . , xk; a1, a2, . . . , ak, N, n). To obtain a general formula, we note that the total number of samples of size n that can be chosen from N items is still C(N, n). There are C(a1, x1) ways of selecting x1 items from the items in A1, and for each of these we can choose x2 items from the items in A2 in C(a2, x2) ways. Therefore, we can select x1 items from A1 and x2 items from A2 in C(a1, x1) C(a2, x2) ways. Continuing in this way, we can select all n items consisting of x1 from A1, x2 from A2, . . . , and xk from Ak in C(a1, x1) C(a2, x2) · · · C(ak, xk) ways. The required probability distribution is now defined as follows. Multivariate Hypergeometric Distribution If N items can be partitioned into the k cells A1, A2, . . . , Ak with a1, a2, . . . , ak elements, respectively, then the probability distribution of the random variables X1, X2, . . . , Xk, representing the number of elements selected from A1, A2, . . . , Ak in a random sample of size n, is f(x1, x2, . . . , xk; a1, a2, . . . , ak, N, n) = C(a1, x1) C(a2, x2) · · · C(ak, xk) / C(N, n), with x1 + x2 + · · · + xk = n and a1 + a2 + · · · + ak = N.
Example 5.13: A group of 10 individuals is used for a biological case study. The group contains 3 people with blood type O, 4 with blood type A, and 3 with blood type B. What is the probability that a random sample of 5 will contain 1 person with blood type O, 2 people with blood type A, and 2 people with blood type B? Solution: Using the extension of the hypergeometric distribution with x1 = 1, x2 = 2, x3 = 2, a1 = 3, a2 = 4, a3 = 3, N = 10, and n = 5, we find that the desired probability is f(1, 2, 2; 3, 4, 3, 10, 5) = C(3, 1) C(4, 2) C(3, 2) / C(10, 5) = 3/14.
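The hypergeometric results in Examples 5.10 through 5.13, including the binomial approximation of Example 5.12, are easy to check numerically. The following is a minimal sketch in plain Python using `math.comb`; the function names are ours, not notation from the text:

```python
from math import comb

def hypergeom(x, N, n, k):
    """h(x; N, n, k): P(x successes in a sample of n from N items, k of which are successes)."""
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

def binom_pmf(x, n, p):
    """b(x; n, p), used to approximate h(x; N, n, k) when n/N <= 0.05."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Example 5.10: 3 defectives in a sample of 10 from a lot of 100 with 12 defective
print(round(hypergeom(3, 100, 10, 12), 2))    # 0.08

# Example 5.11 / Theorem 5.2: mean and variance for N = 40, n = 5, k = 3
N, n, k = 40, 5, 3
mu = n * k / N
var = (N - n) / (N - 1) * n * (k / N) * (1 - k / N)
print(round(mu, 3), round(var, 4))            # 0.375 0.3113

# Example 5.12: binomial approximation is justified since n/N = 10/5000 <= 0.05
exact = hypergeom(3, 5000, 10, 1000)
approx = binom_pmf(3, 10, 0.2)
print(round(exact, 4), round(approx, 4))      # ≈ 0.2015 vs 0.2013

# Example 5.13: multivariate hypergeometric, f(1, 2, 2; 3, 4, 3, 10, 5)
f = comb(3, 1) * comb(4, 2) * comb(3, 2) / comb(10, 5)
print(f == 3 / 14)                            # True
```

Note how close the approximate and exact values are in Example 5.12; the correction factor (N − n)/(N − 1) discussed above is nearly 1 when n is small relative to N.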
  • 178. / / Exercises 157 Exercises 5.29 A homeowner plants 6 bulbs selected at ran- dom from a box containing 5 tulip bulbs and 4 daf- fodil bulbs. What is the probability that he planted 2 daffodil bulbs and 4 tulip bulbs? 5.30 To avoid detection at customs, a traveler places 6 narcotic tablets in a bottle containing 9 vitamin tablets that are similar in appearance. If the customs official selects 3 of the tablets at random for analysis, what is the probability that the traveler will be arrested for illegal possession of narcotics? 5.31 A random committee of size 3 is selected from 4 doctors and 2 nurses. Write a formula for the prob- ability distribution of the random variable X repre- senting the number of doctors on the committee. Find P(2 ≤ X ≤ 3). 5.32 From a lot of 10 missiles, 4 are selected at ran- dom and fired. If the lot contains 3 defective missiles that will not fire, what is the probability that (a) all 4 will fire? (b) at most 2 will not fire? 5.33 If 7 cards are dealt from an ordinary deck of 52 playing cards, what is the probability that (a) exactly 2 of them will be face cards? (b) at least 1 of them will be a queen? 5.34 What is the probability that a waitress will refuse to serve alcoholic beverages to only 2 minors if she randomly checks the IDs of 5 among 9 students, 4 of whom are minors? 5.35 A company is interested in evaluating its cur- rent inspection procedure for shipments of 50 identical items. The procedure is to take a sample of 5 and pass the shipment if no more than 2 are found to be defective. What proportion of shipments with 20% de- fectives will be accepted? 5.36 A manufacturing company uses an acceptance scheme on items from a production line before they are shipped. The plan is a two-stage one. Boxes of 25 items are readied for shipment, and a sample of 3 items is tested for defectives. If any defectives are found, the entire box is sent back for 100% screening. If no defec- tives are found, the box is shipped. 
(a) What is the probability that a box containing 3 defectives will be shipped? (b) What is the probability that a box containing only 1 defective will be sent back for screening? 5.37 Suppose that the manufacturing company of Ex- ercise 5.36 decides to change its acceptance scheme. Under the new scheme, an inspector takes 1 item at random, inspects it, and then replaces it in the box; a second inspector does likewise. Finally, a third in- spector goes through the same procedure. The box is not shipped if any of the three inspectors find a de- fective. Answer the questions in Exercise 5.36 for this new plan. 5.38 Among 150 IRS employees in a large city, only 30 are women. If 10 of the employees are chosen at random to provide free tax assistance for the residents of this city, use the binomial approximation to the hy- pergeometric distribution to find the probability that at least 3 women are selected. 5.39 An annexation suit against a county subdivision of 1200 residences is being considered by a neighboring city. If the occupants of half the residences object to being annexed, what is the probability that in a ran- dom sample of 10 at least 3 favor the annexation suit? 5.40 It is estimated that 4000 of the 10,000 voting residents of a town are against a new sales tax. If 15 eligible voters are selected at random and asked their opinion, what is the probability that at most 7 favor the new tax? 5.41 A nationwide survey of 17,000 college seniors by the University of Michigan revealed that almost 70% disapprove of daily pot smoking. If 18 of these seniors are selected at random and asked their opinion, what is the probability that more than 9 but fewer than 14 disapprove of smoking pot daily? 5.42 Find the probability of being dealt a bridge hand of 13 cards containing 5 spades, 2 hearts, 3 diamonds, and 3 clubs. 5.43 A foreign student club lists as its members 2 Canadians, 3 Japanese, 5 Italians, and 2 Germans. 
If a committee of 4 is selected at random, find the prob- ability that (a) all nationalities are represented; (b) all nationalities except Italian are represented. 5.44 An urn contains 3 green balls, 2 blue balls, and 4 red balls. In a random sample of 5 balls, find the probability that both blue balls and at least 1 red ball are selected. 5.45 Biologists doing studies in a particular environ- ment often tag and release subjects in order to estimate
  • 179. 158 Chapter 5 Some Discrete Probability Distributions the size of a population or the prevalence of certain features in the population. Ten animals of a certain population thought to be extinct (or near extinction) are caught, tagged, and released in a certain region. After a period of time, a random sample of 15 of this type of animal is selected in the region. What is the probability that 5 of those selected are tagged if there are 25 animals of this type in the region? 5.46 A large company has an inspection system for the batches of small compressors purchased from ven- dors. A batch typically contains 15 compressors. In the inspection system, a random sample of 5 is selected and all are tested. Suppose there are 2 faulty compressors in the batch of 15. (a) What is the probability that for a given sample there will be 1 faulty compressor? (b) What is the probability that inspection will dis- cover both faulty compressors? 5.47 A government task force suspects that some manufacturing companies are in violation of federal pollution regulations with regard to dumping a certain type of product. Twenty firms are under suspicion but not all can be inspected. Suppose that 3 of the firms are in violation. (a) What is the probability that inspection of 5 firms will find no violations? (b) What is the probability that the plan above will find two violations? 5.48 Every hour, 10,000 cans of soda are filled by a machine, among which 300 underfilled cans are pro- duced. Each hour, a sample of 30 cans is randomly selected and the number of ounces of soda per can is checked. Denote by X the number of cans selected that are underfilled. Find the probability that at least 1 underfilled can will be among those sampled. 5.4 Negative Binomial and Geometric Distributions Let us consider an experiment where the properties are the same as those listed for a binomial experiment, with the exception that the trials will be repeated until a fixed number of successes occur. 
Therefore, instead of the probability of x successes in n trials, where n is fixed, we are now interested in the probability that the kth success occurs on the xth trial. Experiments of this kind are called negative binomial experiments. As an illustration, consider the use of a drug that is known to be effective in 60% of the cases where it is used. The drug will be considered a success if it is effective in bringing some degree of relief to the patient. We are interested in finding the probability that the fifth patient to experience relief is the seventh patient to receive the drug during a given week. Designating a success by S and a failure by F, a possible order of achieving the desired result is SFSSSFS, which occurs with probability (0.6)(0.4)(0.6)(0.6)(0.6)(0.4)(0.6) = (0.6)^5 (0.4)^2. We could list all possible orders by rearranging the F’s and S’s except for the last outcome, which must be the fifth success. The total number of possible orders is equal to the number of partitions of the first six trials into two groups with 2 failures assigned to the one group and 4 successes assigned to the other group. This can be done in C(6, 4) = 15 mutually exclusive ways. Hence, if X represents the outcome on which the fifth success occurs, then P(X = 7) = C(6, 4) (0.6)^5 (0.4)^2 = 0.1866. What Is the Negative Binomial Random Variable? The number X of trials required to produce k successes in a negative binomial experiment is called a negative binomial random variable, and its probability
distribution is called the negative binomial distribution. Since its probabilities depend on the number of successes desired and the probability of a success on a given trial, we shall denote them by b*(x; k, p). To obtain the general formula for b*(x; k, p), consider the probability of a success on the xth trial preceded by k − 1 successes and x − k failures in some specified order. Since the trials are independent, we can multiply all the probabilities corresponding to each desired outcome. Each success occurs with probability p and each failure with probability q = 1 − p. Therefore, the probability for the specified order ending in success is p^(k−1) q^(x−k) p = p^k q^(x−k). The total number of sample points in the experiment ending in a success, after the occurrence of k − 1 successes and x − k failures in any order, is equal to the number of partitions of x − 1 trials into two groups with k − 1 successes corresponding to one group and x − k failures corresponding to the other group. This number is given by C(x − 1, k − 1), each such order being mutually exclusive and occurring with equal probability p^k q^(x−k). We obtain the general formula by multiplying p^k q^(x−k) by C(x − 1, k − 1). Negative Binomial Distribution If repeated independent trials can result in a success with probability p and a failure with probability q = 1 − p, then the probability distribution of the random variable X, the number of the trial on which the kth success occurs, is b*(x; k, p) = C(x − 1, k − 1) p^k q^(x−k), x = k, k + 1, k + 2, . . . . Example 5.14: In an NBA (National Basketball Association) championship series, the team that wins four games out of seven is the winner. Suppose that teams A and B face each other in the championship games and that team A has probability 0.55 of winning a game over team B. (a) What is the probability that team A will win the series in 6 games? (b) What is the probability that team A will win the series?
(c) If teams A and B were facing each other in a regional playoff series, which is decided by winning three out of five games, what is the probability that team A would win the series? Solution: (a) b*(6; 4, 0.55) = C(5, 3) (0.55)^4 (1 − 0.55)^(6−4) = 0.1853. (b) P(team A wins the championship series) is b*(4; 4, 0.55) + b*(5; 4, 0.55) + b*(6; 4, 0.55) + b*(7; 4, 0.55) = 0.0915 + 0.1647 + 0.1853 + 0.1668 = 0.6083. (c) P(team A wins the playoff) is b*(3; 3, 0.55) + b*(4; 3, 0.55) + b*(5; 3, 0.55) = 0.1664 + 0.2246 + 0.2021 = 0.5931.
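The negative binomial calculations above, for the drug illustration and for Example 5.14, can be reproduced directly from the formula. A minimal Python sketch (the helper name `neg_binom` is ours):

```python
from math import comb

def neg_binom(x, k, p):
    """b*(x; k, p): P(the k-th success occurs on trial x)."""
    return comb(x - 1, k - 1) * p**k * (1 - p)**(x - k)

# Drug illustration: fifth success (relief) occurs with the seventh patient, p = 0.6
print(round(neg_binom(7, 5, 0.6), 4))        # 0.1866

# Example 5.14(a): team A wins the series in exactly 6 games
print(round(neg_binom(6, 4, 0.55), 4))       # 0.1853

# Example 5.14(b): A's 4th win can come in game 4, 5, 6, or 7
p_series = sum(neg_binom(x, 4, 0.55) for x in range(4, 8))
print(round(p_series, 4))                    # 0.6083

# Example 5.14(c): best-of-five playoff, 3rd win in game 3, 4, or 5
p_playoff = sum(neg_binom(x, 3, 0.55) for x in range(3, 6))
print(round(p_playoff, 4))                   # 0.5931
```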
The negative binomial distribution derives its name from the fact that each term in the expansion of p^k (1 − q)^(−k) corresponds to the values of b*(x; k, p) for x = k, k + 1, k + 2, . . . . If we consider the special case of the negative binomial distribution where k = 1, we have a probability distribution for the number of trials required for a single success. An example would be the tossing of a coin until a head occurs. We might be interested in the probability that the first head occurs on the fourth toss. The negative binomial distribution reduces to the form b*(x; 1, p) = p q^(x−1), x = 1, 2, 3, . . . . Since the successive terms constitute a geometric progression, it is customary to refer to this special case as the geometric distribution and denote its values by g(x; p). Geometric Distribution If repeated independent trials can result in a success with probability p and a failure with probability q = 1 − p, then the probability distribution of the random variable X, the number of the trial on which the first success occurs, is g(x; p) = p q^(x−1), x = 1, 2, 3, . . . . Example 5.15: For a certain manufacturing process, it is known that, on the average, 1 in every 100 items is defective. What is the probability that the fifth item inspected is the first defective item found? Solution: Using the geometric distribution with x = 5 and p = 0.01, we have g(5; 0.01) = (0.01)(0.99)^4 = 0.0096. Example 5.16: At a “busy time,” a telephone exchange is very near capacity, so callers have difficulty placing their calls. It may be of interest to know the number of attempts necessary in order to make a connection. Suppose that we let p = 0.05 be the probability of a connection during a busy time. We are interested in knowing the probability that 5 attempts are necessary for a successful call. Solution: Using the geometric distribution with x = 5 and p = 0.05 yields P(X = 5) = g(5; 0.05) = (0.05)(0.95)^4 = 0.041.
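Examples 5.15 and 5.16 follow immediately from the geometric formula; a two-line Python check (helper name ours):

```python
def geometric(x, p):
    """g(x; p): P(the first success occurs on trial x)."""
    return p * (1 - p)**(x - 1)

# Example 5.15: fifth item inspected is the first defective, p = 0.01
print(round(geometric(5, 0.01), 4))   # 0.0096
# Example 5.16: fifth attempt is the first successful call, p = 0.05
print(round(geometric(5, 0.05), 3))   # 0.041
```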
Quite often, in applications dealing with the geometric distribution, the mean and variance are important. For example, in Example 5.16, the expected number of calls necessary to make a connection is quite important. The following theorem states without proof the mean and variance of the geometric distribution. Theorem 5.3: The mean and variance of a random variable following the geometric distribution are μ = 1/p and σ² = (1 − p)/p².
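Although Theorem 5.3 is stated without proof, it can be verified numerically by truncating the defining series for E(X) and E(X²) far into the tail, where the geometric terms are negligible. A sketch for the p = 0.05 case of Example 5.16:

```python
p = 0.05
q = 1 - p
# Truncated sums of E[X] = sum x*p*q^(x-1) and E[X^2] = sum x^2*p*q^(x-1);
# q^2000 is astronomically small, so the truncation error is negligible.
mean = sum(x * p * q**(x - 1) for x in range(1, 2001))
var = sum(x**2 * p * q**(x - 1) for x in range(1, 2001)) - mean**2
print(round(mean, 4), round(var, 2))   # 20.0 380.0, matching 1/p and (1-p)/p^2
```

So on average 20 attempts are needed for a connection during a busy time, a point taken up in the applications discussion below.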
Applications of Negative Binomial and Geometric Distributions Areas of application for the negative binomial and geometric distributions become obvious when one focuses on the examples in this section and the exercises devoted to these distributions at the end of Section 5.5. In the case of the geometric distribution, Example 5.16 depicts a situation where engineers or managers are attempting to determine how inefficient a telephone exchange system is during busy times. Clearly, in this case, trials occurring prior to a success represent a cost. If there is a high probability of several attempts being required prior to making a connection, then plans should be made to redesign the system. Applications of the negative binomial distribution are similar in nature. Suppose attempts are costly in some sense and are occurring in sequence. A high probability of needing a “large” number of attempts to experience a fixed number of successes is not beneficial to the scientist or engineer. Consider the scenarios of Review Exercises 5.90 and 5.91. In Review Exercise 5.91, the oil driller defines a certain level of success from sequentially drilling locations for oil. If only 6 attempts have been made at the point where the second success is experienced, the profits appear to dominate substantially the investment incurred by the drilling. 5.5 Poisson Distribution and the Poisson Process Experiments yielding numerical values of a random variable X, the number of outcomes occurring during a given time interval or in a specified region, are called Poisson experiments. The given time interval may be of any length, such as a minute, a day, a week, a month, or even a year.
For example, a Poisson experiment can generate observations for the random variable X representing the number of telephone calls received per hour by an office, the number of days school is closed due to snow during the winter, or the number of games postponed due to rain during a baseball season. The specified region could be a line segment, an area, a volume, or perhaps a piece of material. In such instances, X might represent the number of field mice per acre, the number of bacteria in a given culture, or the number of typing errors per page. A Poisson experiment is derived from the Poisson process and possesses the following properties. Properties of the Poisson Process 1. The number of outcomes occurring in one time interval or specified region of space is independent of the number that occur in any other disjoint time in- terval or region. In this sense we say that the Poisson process has no memory. 2. The probability that a single outcome will occur during a very short time interval or in a small region is proportional to the length of the time interval or the size of the region and does not depend on the number of outcomes occurring outside this time interval or region. 3. The probability that more than one outcome will occur in such a short time interval or fall in such a small region is negligible. The number X of outcomes occurring during a Poisson experiment is called a Poisson random variable, and its probability distribution is called the Poisson
distribution. The mean number of outcomes is computed from μ = λt, where t is the specific “time,” “distance,” “area,” or “volume” of interest. Since the probabilities depend on λ, the rate of occurrence of outcomes, we shall denote them by p(x; λt). The derivation of the formula for p(x; λt), based on the three properties of a Poisson process listed above, is beyond the scope of this book. The following formula is used for computing Poisson probabilities. Poisson Distribution The probability distribution of the Poisson random variable X, representing the number of outcomes occurring in a given time interval or specified region denoted by t, is p(x; λt) = e^(−λt) (λt)^x / x!, x = 0, 1, 2, . . . , where λ is the average number of outcomes per unit time, distance, area, or volume and e = 2.71828 . . . . Table A.2 contains Poisson probability sums, P(r; λt) = Σ_{x=0}^{r} p(x; λt), for selected values of λt ranging from 0.1 to 18.0. We illustrate the use of this table with the following two examples. Example 5.17: During a laboratory experiment, the average number of radioactive particles passing through a counter in 1 millisecond is 4. What is the probability that 6 particles enter the counter in a given millisecond? Solution: Using the Poisson distribution with x = 6 and λt = 4 and referring to Table A.2, we have p(6; 4) = e^(−4) 4^6 / 6! = Σ_{x=0}^{6} p(x; 4) − Σ_{x=0}^{5} p(x; 4) = 0.8893 − 0.7851 = 0.1042. Example 5.18: Ten is the average number of oil tankers arriving each day at a certain port. The facilities at the port can handle at most 15 tankers per day. What is the probability that on a given day tankers have to be turned away? Solution: Let X be the number of tankers arriving each day. Then, using Table A.2, we have P(X > 15) = 1 − P(X ≤ 15) = 1 − Σ_{x=0}^{15} p(x; 10) = 1 − 0.9513 = 0.0487. Like the binomial distribution, the Poisson distribution is used for quality control, quality assurance, and acceptance sampling.
In addition, certain important continuous distributions used in reliability theory and queuing theory depend on the Poisson process. Some of these distributions are discussed and developed in Chapter 6. The following theorem concerning the Poisson random variable is given in Appendix A.25. Theorem 5.4: Both the mean and the variance of the Poisson distribution p(x; λt) are λt.
Nature of the Poisson Probability Function Like so many discrete and continuous distributions, the form of the Poisson distribution becomes more and more symmetric, even bell-shaped, as the mean grows large. Figure 5.1 illustrates this, showing plots of the probability function for μ = 0.1, μ = 2, and μ = 5. Note the nearness to symmetry when μ becomes as large as 5. A similar condition exists for the binomial distribution, as will be illustrated later in the text. [Figure 5.1: Poisson density functions for different means.] Approximation of Binomial Distribution by a Poisson Distribution It should be evident from the three principles of the Poisson process that the Poisson distribution is related to the binomial distribution. Although the Poisson usually finds applications in space and time problems, as illustrated by Examples 5.17 and 5.18, it can be viewed as a limiting form of the binomial distribution. In the case of the binomial, if n is quite large and p is small, the conditions begin to simulate the continuous space or time implications of the Poisson process. The independence among Bernoulli trials in the binomial case is consistent with principle 2 of the Poisson process. Allowing the parameter p to be close to 0 relates to principle 3 of the Poisson process. Indeed, if n is large and p is close to 0, the Poisson distribution can be used, with μ = np, to approximate binomial probabilities. If p is close to 1, we can still use the Poisson distribution to approximate binomial probabilities by interchanging what we have defined to be a success and a failure, thereby changing p to a value close to 0. Theorem 5.5: Let X be a binomial random variable with probability distribution b(x; n, p).
When n → ∞, p → 0, and np → μ remains constant, then b(x; n, p) → p(x; μ).
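The limit in Theorem 5.5 can be observed numerically by holding μ = np fixed and letting n grow. A short Python sketch (helper names ours):

```python
from math import comb, exp, factorial

def binom_pmf(x, n, p):
    """b(x; n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, mu):
    """p(x; μ)."""
    return exp(-mu) * mu**x / factorial(x)

# Hold mu = n*p fixed at 2 and let n grow: b(x; n, p) approaches p(x; mu)
mu, x = 2.0, 3
for n in (10, 100, 1000, 10000):
    print(n, round(binom_pmf(x, n, mu / n), 5))
print("limit", round(poisson_pmf(x, mu), 5))   # 0.18045
```

Each successive binomial value moves closer to the Poisson limit, illustrating why the approximation in Examples 5.19 and 5.20 works so well for large n and small p.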
Example 5.19: In a certain industrial facility, accidents occur infrequently. It is known that the probability of an accident on any given day is 0.005 and accidents are independent of each other. (a) What is the probability that in any given period of 400 days there will be an accident on one day? (b) What is the probability that there are at most three days with an accident? Solution: Let X be a binomial random variable with n = 400 and p = 0.005. Thus, np = 2. Using the Poisson approximation, (a) P(X = 1) = e^(−2) 2^1 = 0.271 and (b) P(X ≤ 3) = Σ_{x=0}^{3} e^(−2) 2^x / x! = 0.857. Example 5.20: In a manufacturing process where glass products are made, defects or bubbles occur, occasionally rendering the piece undesirable for marketing. It is known that, on average, 1 in every 1000 of these items produced has one or more bubbles. What is the probability that a random sample of 8000 will yield fewer than 7 items possessing bubbles? Solution: This is essentially a binomial experiment with n = 8000 and p = 0.001. Since p is very close to 0 and n is quite large, we shall approximate with the Poisson distribution using μ = (8000)(0.001) = 8. Hence, if X represents the number of bubbles, we have P(X < 7) = Σ_{x=0}^{6} b(x; 8000, 0.001) ≈ Σ_{x=0}^{6} p(x; 8) = 0.3134. Exercises 5.49 The probability that a person living in a certain city owns a dog is estimated to be 0.3. Find the probability that the tenth person randomly interviewed in that city is the fifth one to own a dog. 5.50 Find the probability that a person flipping a coin gets (a) the third head on the seventh flip; (b) the first head on the fourth flip. 5.51 Three people toss a fair coin and the odd one pays for coffee. If the coins all turn up the same, they are tossed again. Find the probability that fewer than 4 tosses are needed. 5.52 A scientist inoculates mice, one at a time, with a disease germ until he finds 2 that have contracted the disease.
If the probability of contracting the disease is 1/6, what is the probability that 8 mice are required? 5.53 An inventory study determines that, on aver- age, demands for a particular item at a warehouse are made 5 times per day. What is the probability that on a given day this item is requested (a) more than 5 times? (b) not at all? 5.54 According to a study published by a group of University of Massachusetts sociologists, about two- thirds of the 20 million persons in this country who take Valium are women. Assuming this figure to be a valid estimate, find the probability that on a given day the fifth prescription written by a doctor for Valium is (a) the first prescribing Valium for a woman;
  • 186. / / Exercises 165 (b) the third prescribing Valium for a woman. 5.55 The probability that a student pilot passes the written test for a private pilot’s license is 0.7. Find the probability that a given student will pass the test (a) on the third try; (b) before the fourth try. 5.56 On average, 3 traffic accidents per month occur at a certain intersection. What is the probability that in any given month at this intersection (a) exactly 5 accidents will occur? (b) fewer than 3 accidents will occur? (c) at least 2 accidents will occur? 5.57 On average, a textbook author makes two word- processing errors per page on the first draft of her text- book. What is the probability that on the next page she will make (a) 4 or more errors? (b) no errors? 5.58 A certain area of the eastern United States is, on average, hit by 6 hurricanes a year. Find the prob- ability that in a given year that area will be hit by (a) fewer than 4 hurricanes; (b) anywhere from 6 to 8 hurricanes. 5.59 Suppose the probability that any given person will believe a tale about the transgressions of a famous actress is 0.8. What is the probability that (a) the sixth person to hear this tale is the fourth one to believe it? (b) the third person to hear this tale is the first one to believe it? 5.60 The average number of field mice per acre in a 5-acre wheat field is estimated to be 12. Find the probability that fewer than 7 field mice are found (a) on a given acre; (b) on 2 of the next 3 acres inspected. 5.61 Suppose that, on average, 1 person in 1000 makes a numerical error in preparing his or her income tax return. If 10,000 returns are selected at random and examined, find the probability that 6, 7, or 8 of them contain an error. 5.62 The probability that a student at a local high school fails the screening test for scoliosis (curvature of the spine) is known to be 0.004. 
Of the next 1875 students at the school who are screened for scoliosis, find the probability that (a) fewer than 5 fail the test; (b) 8, 9, or 10 fail the test. 5.63 Find the mean and variance of the random vari- able X in Exercise 5.58, representing the number of hurricanes per year to hit a certain area of the eastern United States. 5.64 Find the mean and variance of the random vari- able X in Exercise 5.61, representing the number of persons among 10,000 who make an error in preparing their income tax returns. 5.65 An automobile manufacturer is concerned about a fault in the braking mechanism of a particular model. The fault can, on rare occasions, cause a catastrophe at high speed. The distribution of the number of cars per year that will experience the catastrophe is a Poisson random variable with λ = 5. (a) What is the probability that at most 3 cars per year will experience a catastrophe? (b) What is the probability that more than 1 car per year will experience a catastrophe? 5.66 Changes in airport procedures require consid- erable planning. Arrival rates of aircraft are impor- tant factors that must be taken into account. Suppose small aircraft arrive at a certain airport, according to a Poisson process, at the rate of 6 per hour. Thus, the Poisson parameter for arrivals over a period of hours is μ = 6t. (a) What is the probability that exactly 4 small air- craft arrive during a 1-hour period? (b) What is the probability that at least 4 arrive during a 1-hour period? (c) If we define a working day as 12 hours, what is the probability that at least 75 small aircraft ar- rive during a working day? 5.67 The number of customers arriving per hour at a certain automobile service facility is assumed to follow a Poisson distribution with mean λ = 7. (a) Compute the probability that more than 10 cus- tomers will arrive in a 2-hour period. (b) What is the mean number of arrivals during a 2-hour period? 5.68 Consider Exercise 5.62. 
What is the mean num- ber of students who fail the test? 5.69 The probability that a person will die when he or she contracts a virus infection is 0.001. Of the next 4000 people infected, what is the mean number who will die?
  • 187. / / 166 Chapter 5 Some Discrete Probability Distributions 5.70 A company purchases large lots of a certain kind of electronic device. A method is used that rejects a lot if 2 or more defective units are found in a random sample of 100 units. (a) What is the mean number of defective units found in a sample of 100 units if the lot is 1% defective? (b) What is the variance? 5.71 For a certain type of copper wire, it is known that, on the average, 1.5 flaws occur per millimeter. Assuming that the number of flaws is a Poisson random variable, what is the probability that no flaws occur in a certain portion of wire of length 5 millimeters? What is the mean number of flaws in a portion of length 5 millimeters? 5.72 Potholes on a highway can be a serious problem, and are in constant need of repair. With a particular type of terrain and make of concrete, past experience suggests that there are, on the average, 2 potholes per mile after a certain amount of usage. It is assumed that the Poisson process applies to the random vari- able “number of potholes.” (a) What is the probability that no more than one pot- hole will appear in a section of 1 mile? (b) What is the probability that no more than 4 pot- holes will occur in a given section of 5 miles? 5.73 Hospital administrators in large cities anguish about traffic in emergency rooms. At a particular hos- pital in a large city, the staff on hand cannot accom- modate the patient traffic if there are more than 10 emergency cases in a given hour. It is assumed that patient arrival follows a Poisson process, and historical data suggest that, on the average, 5 emergencies arrive per hour. (a) What is the probability that in a given hour the staff cannot accommodate the patient traffic? (b) What is the probability that more than 20 emer- gencies arrive during a 3-hour shift? 5.74 It is known that 3% of people whose luggage is screened at an airport have questionable objects in their luggage. 
What is the probability that a string of 15 people pass through screening successfully before an individual is caught with a questionable object? What is the expected number of people to pass through be- fore an individual is stopped? 5.75 Computer technology has produced an environ- ment in which robots operate with the use of micro- processors. The probability that a robot fails during any 6-hour shift is 0.10. What is the probability that a robot will operate through at most 5 shifts before it fails? 5.76 The refusal rate for telephone polls is known to be approximately 20%. A newspaper report indicates that 50 people were interviewed before the first refusal. (a) Comment on the validity of the report. Use a prob- ability in your argument. (b) What is the expected number of people interviewed before a refusal? Review Exercises 5.77 During a manufacturing process, 15 units are randomly selected each day from the production line to check the percent defective. From historical infor- mation it is known that the probability of a defective unit is 0.05. Any time 2 or more defectives are found in the sample of 15, the process is stopped. This proce- dure is used to provide a signal in case the probability of a defective has increased. (a) What is the probability that on any given day the production process will be stopped? (Assume 5% defective.) (b) Suppose that the probability of a defective has in- creased to 0.07. What is the probability that on any given day the production process will not be stopped? 5.78 An automatic welding machine is being consid- ered for use in a production process. It will be con- sidered for purchase if it is successful on 99% of its welds. Otherwise, it will not be considered efficient. A test is to be conducted with a prototype that is to perform 100 welds. The machine will be accepted for manufacture if it misses no more than 3 welds. (a) What is the probability that a good machine will be rejected? 
(b) What is the probability that an inefficient machine with 95% welding success will be accepted? 5.79 A car rental agency at a local airport has avail- able 5 Fords, 7 Chevrolets, 4 Dodges, 3 Hondas, and 4 Toyotas. If the agency randomly selects 9 of these cars to chauffeur delegates from the airport to the down- town convention center, find the probability that 2 Fords, 3 Chevrolets, 1 Dodge, 1 Honda, and 2 Toyotas are used. 5.80 Service calls come to a maintenance center ac- cording to a Poisson process, and on average, 2.7 calls
  • 188. / / Review Exercises 167 are received per minute. Find the probability that (a) no more than 4 calls come in any minute; (b) fewer than 2 calls come in any minute; (c) more than 10 calls come in a 5-minute period. 5.81 An electronics firm claims that the proportion of defective units from a certain process is 5%. A buyer has a standard procedure of inspecting 15 units selected randomly from a large lot. On a particular occasion, the buyer found 5 items defective. (a) What is the probability of this occurrence, given that the claim of 5% defective is correct? (b) What would be your reaction if you were the buyer? 5.82 An electronic switching device occasionally mal- functions, but the device is considered satisfactory if it makes, on average, no more than 0.20 error per hour. A particular 5-hour period is chosen for testing the de- vice. If no more than 1 error occurs during the time period, the device will be considered satisfactory. (a) What is the probability that a satisfactory device will be considered unsatisfactory on the basis of the test? Assume a Poisson process. (b) What is the probability that a device will be ac- cepted as satisfactory when, in fact, the mean num- ber of errors is 0.25? Again, assume a Poisson pro- cess. 5.83 A company generally purchases large lots of a certain kind of electronic device. A method is used that rejects a lot if 2 or more defective units are found in a random sample of 100 units. (a) What is the probability of rejecting a lot that is 1% defective? (b) What is the probability of accepting a lot that is 5% defective? 5.84 A local drugstore owner knows that, on average, 100 people enter his store each hour. (a) Find the probability that in a given 3-minute pe- riod nobody enters the store. (b) Find the probability that in a given 3-minute pe- riod more than 5 people enter the store. 5.85 (a) Suppose that you throw 4 dice. Find the probability that you get at least one 1. (b) Suppose that you throw 2 dice 24 times. 
Find the probability that you get at least one (1, 1), that is, “snake-eyes.” 5.86 Suppose that out of 500 lottery tickets sold, 200 pay off at least the cost of the ticket. Now suppose that you buy 5 tickets. Find the probability that you will win back at least the cost of 3 tickets. 5.87 Imperfections in computer circuit boards and computer chips lend themselves to statistical treat- ment. For a particular type of board, the probability of a diode failure is 0.03 and the board contains 200 diodes. (a) What is the mean number of failures among the diodes? (b) What is the variance? (c) The board will work if there are no defective diodes. What is the probability that a board will work? 5.88 The potential buyer of a particular engine re- quires (among other things) that the engine start suc- cessfully 10 consecutive times. Suppose the probability of a successful start is 0.990. Let us assume that the outcomes of attempted starts are independent. (a) What is the probability that the engine is accepted after only 10 starts? (b) What is the probability that 12 attempted starts are made during the acceptance process? 5.89 The acceptance scheme for purchasing lots con- taining a large number of batteries is to test no more than 75 randomly selected batteries and to reject a lot if a single battery fails. Suppose the probability of a failure is 0.001. (a) What is the probability that a lot is accepted? (b) What is the probability that a lot is rejected on the 20th test? (c) What is the probability that it is rejected in 10 or fewer trials? 5.90 An oil drilling company ventures into various lo- cations, and its success or failure is independent from one location to another. Suppose the probability of a success at any specific location is 0.25. (a) What is the probability that the driller drills at 10 locations and has 1 success? (b) The driller will go bankrupt if it drills 10 times be- fore the first success occurs. What are the driller’s prospects for bankruptcy? 
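Review Exercise 5.90 mixes two models: part (a) is a binomial count of successes in a fixed number of trials, while part (b) concerns the waiting time to the first success (geometric). A hedged sketch of both calculations (helper-free arithmetic, my own variable names):

```python
from math import comb

p = 0.25  # probability of success at any one location (Review Exercise 5.90)

# (a) exactly 1 success among 10 independent drillings: binomial b(1; 10, 0.25)
p_one_success = comb(10, 1) * p * (1 - p) ** 9

# (b) bankruptcy means the first 10 drillings are all failures, i.e., the
# first success (a geometric random variable) comes after trial 10
p_bankrupt = (1 - p) ** 10

print(round(p_one_success, 4), round(p_bankrupt, 4))   # 0.1877 0.0563
```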
5.91 Consider the information in Review Exercise 5.90. The drilling company feels that it will “hit it big” if the second success occurs on or before the sixth attempt. What is the probability that the driller will hit it big? 5.92 A couple decides to continue to have children un- til they have two males. Assuming that P(male) = 0.5, what is the probability that their second male is their fourth child?
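Exercises 5.91 and 5.92 both use the negative binomial probability that the k-th success occurs on trial x. A short sketch (the function name is mine; values shown are computed from the pmf, not quoted from the book):

```python
from math import comb

def neg_binom_pmf(x, k, p):
    """P(the k-th success occurs on trial x) = C(x-1, k-1) * p**k * (1-p)**(x-k)."""
    return comb(x - 1, k - 1) * p ** k * (1 - p) ** (x - k)

# Review Exercise 5.92: second male (k = 2, p = 0.5) arriving as the fourth child
p_fourth_child = neg_binom_pmf(4, 2, 0.5)      # 3/16 = 0.1875

# Review Exercise 5.91: second success on or before the sixth attempt, p = 0.25
p_hit_big = sum(neg_binom_pmf(x, 2, 0.25) for x in range(2, 7))
print(p_fourth_child, round(p_hit_big, 4))     # 0.1875 0.4661
```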
  • 189. / / 168 Chapter 5 Some Discrete Probability Distributions 5.93 It is known by researchers that 1 in 100 people carries a gene that leads to the inheritance of a certain chronic disease. In a random sample of 1000 individ- uals, what is the probability that fewer than 7 indi- viduals carry the gene? Use a Poisson approximation. Again, using the approximation, what is the approxi- mate mean number of people out of 1000 carrying the gene? 5.94 A production process produces electronic com- ponent parts. It is presumed that the probability of a defective part is 0.01. During a test of this presump- tion, 500 parts are sampled randomly and 15 defectives are observed. (a) What is your response to the presumption that the process is 1% defective? Be sure that a computed probability accompanies your comment. (b) Under the presumption of a 1% defective process, what is the probability that only 3 parts will be found defective? (c) Do parts (a) and (b) again using the Poisson ap- proximation. 5.95 A production process outputs items in lots of 50. Sampling plans exist in which lots are pulled aside pe- riodically and exposed to a certain type of inspection. It is usually assumed that the proportion defective is very small. It is important to the company that lots containing defectives be a rare event. The current in- spection plan is to periodically sample randomly 10 out of the 50 items in a lot and, if none are defective, to perform no intervention. (a) Suppose in a lot chosen at random, 2 out of 50 are defective. What is the probability that at least 1 in the sample of 10 from the lot is defective? (b) From your answer to part (a), comment on the quality of this sampling plan. (c) What is the mean number of defects found out of 10 items sampled? 5.96 Consider the situation of Review Exercise 5.95. 
It has been determined that the sampling plan should be extensive enough that there is a high probability, say 0.9, that if as many as 2 defectives exist in the lot of 50 being sampled, at least 1 will be found in the sampling. With these restrictions, how many of the 50 items should be sampled? 5.97 National security requires that defense technol- ogy be able to detect incoming projectiles or missiles. To make the defense system successful, multiple radar screens are required. Suppose that three independent screens are to be operated and the probability that any one screen will detect an incoming missile is 0.8. Ob- viously, if no screens detect an incoming projectile, the system is unworthy and must be improved. (a) What is the probability that an incoming missile will not be detected by any of the three screens? (b) What is the probability that the missile will be de- tected by only one screen? (c) What is the probability that it will be detected by at least two out of three screens? 5.98 Suppose it is important that the overall missile defense system be as near perfect as possible. (a) Assuming the quality of the screens is as indicated in Review Exercise 5.97, how many are needed to ensure that the probability that a missile gets through undetected is 0.0001? (b) Suppose it is decided to stay with only 3 screens and attempt to improve the screen detection abil- ity. What must the individual screen effectiveness (i.e., probability of detection) be in order to achieve the effectiveness required in part (a)? 5.99 Go back to Review Exercise 5.95(a). Re- compute the probability using the binomial distribu- tion. Comment. 5.100 There are two vacancies in a certain university statistics department. Five individuals apply. Two have expertise in linear models, and one has exper- tise in applied probability. The search committee is instructed to choose the two applicants randomly. (a) What is the probability that the two chosen are those with expertise in linear models? 
(b) What is the probability that of the two chosen, one has expertise in linear models and one has expertise in applied probability? 5.101 The manufacturer of a tricycle for children has received complaints about defective brakes in the prod- uct. According to the design of the product and consid- erable preliminary testing, it had been determined that the probability of the kind of defect in the complaint was 1 in 10,000 (i.e., 0.0001). After a thorough investi- gation of the complaints, it was determined that during a certain period of time, 200 products were randomly chosen from production and 5 had defective brakes. (a) Comment on the “1 in 10,000” claim by the man- ufacturer. Use a probabilistic argument. Use the binomial distribution for your calculations. (b) Repeat part (a) using the Poisson approximation. 5.102 Group Project: Divide the class into two groups of approximately equal size. The students in group 1 will each toss a coin 10 times (n1) and count the number of heads obtained. The students in group 2 will each toss a coin 40 times (n2) and again count the
  • 190. 5.6 Potential Misconceptions and Hazards 169 number of heads. The students in each group should individually compute the proportion of heads observed, which is an estimate of p, the probability of observing a head. Thus, there will be a set of values of p1 (from group 1) and a set of values p2 (from group 2). All of the values of p1 and p2 are estimates of 0.5, which is the true value of the probability of observing a head for a fair coin. (a) Which set of values is consistently closer to 0.5, the values of p1 or p2? Consider the proof of Theorem 5.1 on page 147 with regard to the estimates of the parameter p = 0.5. The values of p1 were obtained with n = n1 = 10, and the values of p2 were ob- tained with n = n2 = 40. Using the notation of the proof, the estimates are given by p1 = x1 n1 = I1 + · · · + In1 n1 , where I1, . . . , In1 are 0s and 1s and n1 = 10, and p2 = x2 n2 = I1 + · · · + In2 n2 , where I1, . . . , In2 , again, are 0s and 1s and n2 = 40. (b) Referring again to Theorem 5.1, show that E(p1) = E(p2) = p = 0.5. (c) Show that σ2 p1 = σ2 X1 n1 is 4 times the value of σ2 p2 = σ2 X2 n2 . Then explain further why the values of p2 from group 2 are more consistently closer to the true value, p = 0.5, than the values of p1 from group 1. You will continue to learn more and more about parameter estimation beginning in Chapter 9. At that point emphasis will put on the importance of the mean and variance of an estimator of a param- eter. 5.6 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters The discrete distributions discussed in this chapter occur with great frequency in engineering and the biological and physical sciences. The exercises and examples certainly suggest this. Industrial sampling plans and many engineering judgments are based on the binomial and Poisson distributions as well as on the hypergeo- metric distribution. 
While the geometric and negative binomial distributions are used to a somewhat lesser extent, they also find applications. In particular, a neg- ative binomial random variable can be viewed as a mixture of Poisson and gamma random variables (the gamma distribution will be discussed in Chapter 6). Despite the rich heritage that these distributions find in real life, they can be misused unless the scientific practitioner is prudent and cautious. Of course, any probability calculation for the distributions discussed in this chapter is made under the assumption that the parameter value is known. Real-world applications often result in a parameter value that may “move around” due to factors that are difficult to control in the process or because of interventions in the process that have not been taken into account. For example, in Review Exercise 5.77, “historical information” is used. But is the process that exists now the same as that under which the historical data were collected? The use of the Poisson distribution can suffer even more from this kind of difficulty. For example, in Review Exercise 5.80, the questions in parts (a), (b), and (c) are based on the use of μ = 2.7 calls per minute. Based on historical records, this is the number of calls that occur “on average.” But in this and many other applications of the Poisson distribution, there are slow times and busy times and so there are times in which the conditions
  • 191. 170 Chapter 5 Some Discrete Probability Distributions for the Poisson process may appear to hold when in fact they do not. Thus, the probability calculations may be incorrect. In the case of the binomial, the assumption that may fail in certain applications (in addition to nonconstancy of p) is the independence assumption, stating that the Bernoulli trials are independent. One of the most famous misuses of the binomial distribution occurred in the 1961 baseball season, when Mickey Mantle and Roger Maris were engaged in a friendly battle to break Babe Ruth’s all-time record of 60 home runs. A famous magazine article made a prediction, based on probability theory, that Mantle would break the record. The prediction was based on probability calculation with the use of the binomial distribution. The classic error made was to estimate the param- eter p (one for each player) based on relative historical frequency of home runs throughout the players’ careers. Maris, unlike Mantle, had not been a prodigious home run hitter prior to 1961 so his estimate of p was quite low. As a result, the calculated probability of breaking the record was quite high for Mantle and low for Maris. The end result: Mantle failed to break the record and Maris succeeded.
Chapter 6 Some Continuous Probability Distributions

6.1 Continuous Uniform Distribution

One of the simplest continuous distributions in all of statistics is the continuous uniform distribution. This distribution is characterized by a density function that is “flat,” and thus the probability is uniform in a closed interval, say [A, B]. Although applications of the continuous uniform distribution are not as abundant as those for other distributions discussed in this chapter, it is appropriate for the novice to begin this introduction to continuous distributions with the uniform distribution.

Uniform Distribution: The density function of the continuous uniform random variable X on the interval [A, B] is

f(x; A, B) = 1/(B − A) for A ≤ x ≤ B, and 0 elsewhere.

The density function forms a rectangle with base B − A and constant height 1/(B − A). As a result, the uniform distribution is often called the rectangular distribution. Note, however, that the interval may not always be closed: [A, B]. It can be (A, B) as well. The density function for a uniform random variable on the interval [1, 3] is shown in Figure 6.1.

Probabilities are simple to calculate for the uniform distribution because of the simple nature of the density function. However, note that the application of this distribution is based on the assumption that the probability of falling in an interval of fixed length within [A, B] is constant.

Example 6.1: Suppose that a large conference room at a certain company can be reserved for no more than 4 hours. Both long and short conferences occur quite often. In fact, it can be assumed that the length X of a conference has a uniform distribution on the interval [0, 4].
[Figure 6.1: The density function for a random variable on the interval [1, 3].]

(a) What is the probability density function?
(b) What is the probability that any given conference lasts at least 3 hours?

Solution: (a) The appropriate density function for the uniformly distributed random variable X in this situation is

f(x) = 1/4 for 0 ≤ x ≤ 4, and 0 elsewhere.

(b) P[X ≥ 3] = ∫_3^4 (1/4) dx = 1/4.

Theorem 6.1: The mean and variance of the uniform distribution are

μ = (A + B)/2 and σ² = (B − A)²/12.

The proofs of the theorems are left to the reader. See Exercise 6.1 on page 185.

6.2 Normal Distribution

The most important continuous probability distribution in the entire field of statistics is the normal distribution. Its graph, called the normal curve, is the bell-shaped curve of Figure 6.2, which approximately describes many phenomena that occur in nature, industry, and research. For example, physical measurements in areas such as meteorological experiments, rainfall studies, and measurements of manufactured parts are often more than adequately explained with a normal distribution. In addition, errors in scientific measurements are extremely well approximated by a normal distribution. In 1733, Abraham DeMoivre developed the mathematical equation of the normal curve. It provided a basis from which much of the theory of inductive statistics is founded. The normal distribution is often referred to as the Gaussian distribution, in honor of Karl Friedrich Gauss
[Figure 6.2: The normal curve.]

(1777–1855), who also derived its equation from a study of errors in repeated measurements of the same quantity. A continuous random variable X having the bell-shaped distribution of Figure 6.2 is called a normal random variable. The mathematical equation for the probability distribution of the normal variable depends on the two parameters μ and σ, its mean and standard deviation, respectively. Hence, we denote the values of the density of X by n(x; μ, σ).

Normal Distribution: The density of the normal random variable X, with mean μ and variance σ², is

n(x; μ, σ) = (1/(√(2π) σ)) e^{−(x−μ)²/(2σ²)}, −∞ < x < ∞,

where π = 3.14159. . . and e = 2.71828. . . .

Once μ and σ are specified, the normal curve is completely determined. For example, if μ = 50 and σ = 5, then the ordinates n(x; 50, 5) can be computed for various values of x and the curve drawn. In Figure 6.3, we have sketched two normal curves having the same standard deviation but different means. The two curves are identical in form but are centered at different positions along the horizontal axis.

[Figure 6.3: Normal curves with μ1 < μ2 and σ1 = σ2.]
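Once μ and σ are fixed, every ordinate of the curve can be computed from the density formula. A minimal sketch (the helper name normal_pdf is mine) that also checks the symmetry and mode properties numerically for the text's illustration μ = 50, σ = 5:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """The density n(x; mu, sigma), written out exactly as in the definition."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

# The curve is symmetric about mu, and the mode occurs at x = mu:
assert normal_pdf(48, 50, 5) == normal_pdf(52, 50, 5)
assert normal_pdf(50, 50, 5) > normal_pdf(49, 50, 5)
```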
[Figure 6.4: Normal curves with μ1 = μ2 and σ1 < σ2.]

In Figure 6.4, we have sketched two normal curves with the same mean but different standard deviations. This time we see that the two curves are centered at exactly the same position on the horizontal axis, but the curve with the larger standard deviation is lower and spreads out farther. Remember that the area under a probability curve must be equal to 1, and therefore the more variable the set of observations, the lower and wider the corresponding curve will be.

Figure 6.5 shows two normal curves having different means and different standard deviations. Clearly, they are centered at different positions on the horizontal axis and their shapes reflect the two different values of σ.

[Figure 6.5: Normal curves with μ1 < μ2 and σ1 < σ2.]

Based on inspection of Figures 6.2 through 6.5 and examination of the first and second derivatives of n(x; μ, σ), we list the following properties of the normal curve:

1. The mode, which is the point on the horizontal axis where the curve is a maximum, occurs at x = μ.
2. The curve is symmetric about a vertical axis through the mean μ.
3. The curve has its points of inflection at x = μ ± σ; it is concave downward if μ − σ < X < μ + σ and is concave upward otherwise.
4. The normal curve approaches the horizontal axis asymptotically as we proceed in either direction away from the mean.
5. The total area under the curve and above the horizontal axis is equal to 1.

Theorem 6.2: The mean and variance of n(x; μ, σ) are μ and σ², respectively. Hence, the standard deviation is σ.

Proof: To evaluate the mean, we first calculate

E(X − μ) = ∫_{−∞}^{∞} ((x − μ)/(√(2π) σ)) e^{−(1/2)[(x−μ)/σ]²} dx.

Setting z = (x − μ)/σ and dx = σ dz, we obtain

E(X − μ) = (1/√(2π)) ∫_{−∞}^{∞} z e^{−z²/2} dz = 0,

since the integrand above is an odd function of z. Using Theorem 4.5 on page 128, we conclude that E(X) = μ.

The variance of the normal distribution is given by

E[(X − μ)²] = (1/(√(2π) σ)) ∫_{−∞}^{∞} (x − μ)² e^{−(1/2)[(x−μ)/σ]²} dx.

Again setting z = (x − μ)/σ and dx = σ dz, we obtain

E[(X − μ)²] = (σ²/√(2π)) ∫_{−∞}^{∞} z² e^{−z²/2} dz.

Integrating by parts with u = z and dv = z e^{−z²/2} dz, so that du = dz and v = −e^{−z²/2}, we find that

E[(X − μ)²] = (σ²/√(2π)) [−z e^{−z²/2}]_{−∞}^{∞} + σ² (1/√(2π)) ∫_{−∞}^{∞} e^{−z²/2} dz = σ²(0 + 1) = σ².

Many random variables have probability distributions that can be described adequately by the normal curve once μ and σ² are specified. In this chapter, we shall assume that these two parameters are known, perhaps from previous investigations. Later, we shall make statistical inferences when μ and σ² are unknown and have been estimated from the available experimental data.

We pointed out earlier the role that the normal distribution plays as a reasonable approximation of scientific variables in real-life experiments. There are other applications of the normal distribution that the reader will appreciate as he or she moves on in the book. The normal distribution finds enormous application as a limiting distribution. Under certain conditions, the normal distribution provides a good continuous approximation to the binomial and hypergeometric distributions. The case of the approximation to the binomial is covered in Section 6.5.
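Theorem 6.2 can also be checked numerically. The sketch below integrates x·n(x; μ, σ) and (x − μ)²·n(x; μ, σ) with a simple midpoint rule over μ ± 10σ; the grid size, interval, and helper names are my choices, not the book's, and the truncated tails are negligible at this width.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """n(x; mu, sigma) = exp(-(x - mu)**2 / (2 sigma**2)) / (sqrt(2 pi) sigma)."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

# Midpoint-rule check of Theorem 6.2 for the illustrative values mu = 50, sigma = 5.
mu, sigma, n = 50.0, 5.0, 200_000
a, b = mu - 10 * sigma, mu + 10 * sigma
h = (b - a) / n
mean = var = 0.0
for i in range(n):
    x = a + (i + 0.5) * h
    w = normal_pdf(x, mu, sigma) * h     # probability mass of this thin slice
    mean += x * w
    var += (x - mu) ** 2 * w
print(mean, var)   # approximately 50 and 25, as the theorem asserts
```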
In Chap- ter 8, the reader will learn about sampling distributions. It turns out that the limiting distribution of sample averages is normal. This provides a broad base for statistical inference that proves very valuable to the data analyst interested in
  • 197. 176 Chapter 6 Some Continuous Probability Distributions estimation and hypothesis testing. Theory in the important areas such as analysis of variance (Chapters 13, 14, and 15) and quality control (Chapter 17) is based on assumptions that make use of the normal distribution. In Section 6.3, examples demonstrate the use of tables of the normal distribu- tion. Section 6.4 follows with examples of applications of the normal distribution. 6.3 Areas under the Normal Curve The curve of any continuous probability distribution or density function is con- structed so that the area under the curve bounded by the two ordinates x = x1 and x = x2 equals the probability that the random variable X assumes a value between x = x1 and x = x2. Thus, for the normal curve in Figure 6.6, P(x1 X x2) = x2 x1 n(x; μ, σ) dx = 1 √ 2πσ x2 x1 e− 1 2σ2 (x−μ)2 dx is represented by the area of the shaded region. x x1 x2 μ Figure 6.6: P(x1 X x2) = area of the shaded region. In Figures 6.3, 6.4, and 6.5 we saw how the normal curve is dependent on the mean and the standard deviation of the distribution under investigation. The area under the curve between any two ordinates must then also depend on the values μ and σ. This is evident in Figure 6.7, where we have shaded regions cor- responding to P(x1 X x2) for two curves with different means and variances. P(x1 X x2), where X is the random variable describing distribution A, is indicated by the shaded area below the curve of A. If X is the random variable de- scribing distribution B, then P(x1 X x2) is given by the entire shaded region. Obviously, the two shaded regions are different in size; therefore, the probability associated with each distribution will be different for the two given values of X. There are many types of statistical software that can be used in calculating areas under the normal curve. 
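As one illustration of such software: in Python's standard library the closed-form error function gives the normal cumulative areas directly, so P(x1 < X < x2) is a difference of two CDF values. The helper names below are mine, not the book's.

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for a normal random variable, via the error function erf."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def normal_area(x1, x2, mu, sigma):
    """P(x1 < X < x2): the area under n(x; mu, sigma) between two ordinates."""
    return normal_cdf(x2, mu, sigma) - normal_cdf(x1, mu, sigma)

# About 95.45% of the area lies within two standard deviations of the mean:
print(round(normal_area(-2, 2, 0, 1), 4))   # 0.9545
```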
The difficulty encountered in solving integrals of normal density functions necessitates the tabulation of normal curve areas for quick reference. However, it would be a hopeless task to attempt to set up separate tables for every conceivable value of μ and σ. Fortunately, we are able to transform all the observations of any normal random variable X into a new set of observations
[Figure 6.7: P(x1 < X < x2) for different normal curves.]

of a normal random variable Z with mean 0 and variance 1. This can be done by means of the transformation

Z = (X − μ)/σ.

Whenever X assumes a value x, the corresponding value of Z is given by z = (x − μ)/σ. Therefore, if X falls between the values x = x1 and x = x2, the random variable Z will fall between the corresponding values z1 = (x1 − μ)/σ and z2 = (x2 − μ)/σ. Consequently, we may write

P(x1 < X < x2) = (1/(√(2π) σ)) ∫_{x1}^{x2} e^{−(x−μ)²/(2σ²)} dx = (1/√(2π)) ∫_{z1}^{z2} e^{−z²/2} dz = ∫_{z1}^{z2} n(z; 0, 1) dz = P(z1 < Z < z2),

where Z is seen to be a normal random variable with mean 0 and variance 1.

Definition 6.1: The distribution of a normal random variable with mean 0 and variance 1 is called a standard normal distribution.

The original and transformed distributions are illustrated in Figure 6.8. Since all the values of X falling between x1 and x2 have corresponding z values between z1 and z2, the area under the X-curve between the ordinates x = x1 and x = x2 in Figure 6.8 equals the area under the Z-curve between the transformed ordinates z = z1 and z = z2.

We have now reduced the required number of tables of normal-curve areas to one, that of the standard normal distribution. Table A.3 indicates the area under the standard normal curve corresponding to P(Z < z) for values of z ranging from −3.49 to 3.49. To illustrate the use of this table, let us find the probability that Z is less than 1.74. First, we locate a value of z equal to 1.7 in the left column; then we move across the row to the column under 0.04, where we read 0.9591. Therefore, P(Z < 1.74) = 0.9591. To find a z value corresponding to a given probability, the process is reversed. For example, the z value leaving an area of 0.2148 under the curve to the left of z is seen to be −0.79.
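The entries of Table A.3 can be reproduced in software through the identity Φ(z) = (1 + erf(z/√2))/2. A sketch using Python's math.erf (the function name phi is mine):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF Phi(z), the quantity tabulated in Table A.3."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p = round(phi(1.74), 4)    # the Table A.3 entry for z = 1.74
q = round(phi(-0.79), 4)   # area to the left of z = -0.79
print(p, q)                # 0.9591 0.2148
```

The reverse lookup in the table corresponds to inverting phi, which the table does by scanning for the nearest tabulated area.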
  • 199. 178 Chapter 6 Some Continuous Probability Distributions x μ x1 x2 σ σ z 0 z1 z2 1 Figure 6.8: The original and transformed normal distributions. Example 6.2: Given a standard normal distribution, find the area under the curve that lies (a) to the right of z = 1.84 and (b) between z = −1.97 and z = 0.86. z 0 1.84 (a) z 1.97 0 0.86 (b) Figure 6.9: Areas for Example 6.2. Solution: See Figure 6.9 for the specific areas. (a) The area in Figure 6.9(a) to the right of z = 1.84 is equal to 1 minus the area in Table A.3 to the left of z = 1.84, namely, 1 − 0.9671 = 0.0329. (b) The area in Figure 6.9(b) between z = −1.97 and z = 0.86 is equal to the area to the left of z = 0.86 minus the area to the left of z = −1.97. From Table A.3 we find the desired area to be 0.8051 − 0.0244 = 0.7807.
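The two table lookups of Example 6.2 can be verified the same way; this sketch (again with an erf-based CDF of my own naming) reproduces the book's rounded answers:

```python
from math import erf, sqrt

def phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Example 6.2(a): area to the right of z = 1.84
area_a = 1.0 - phi(1.84)
# Example 6.2(b): area between z = -1.97 and z = 0.86
area_b = phi(0.86) - phi(-1.97)
print(round(area_a, 4), round(area_b, 4))   # 0.0329 0.7807
```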
  • 200. 6.3 Areas under the Normal Curve 179 Example 6.3: Given a standard normal distribution, find the value of k such that (a) P(Z k) = 0.3015 and (b) P(k Z −0.18) = 0.4197. x 0 k (a) 0.3015 x k −0.18 (b) 0.4197 Figure 6.10: Areas for Example 6.3. Solution: Distributions and the desired areas are shown in Figure 6.10. (a) In Figure 6.10(a), we see that the k value leaving an area of 0.3015 to the right must then leave an area of 0.6985 to the left. From Table A.3 it follows that k = 0.52. (b) From Table A.3 we note that the total area to the left of −0.18 is equal to 0.4286. In Figure 6.10(b), we see that the area between k and −0.18 is 0.4197, so the area to the left of k must be 0.4286 − 0.4197 = 0.0089. Hence, from Table A.3, we have k = −2.37. Example 6.4: Given a random variable X having a normal distribution with μ = 50 and σ = 10, find the probability that X assumes a value between 45 and 62. x 0 0.5 1.2 Figure 6.11: Area for Example 6.4. Solution: The z values corresponding to x1 = 45 and x2 = 62 are z1 = 45 − 50 10 = −0.5 and z2 = 62 − 50 10 = 1.2.
  • 201. 180 Chapter 6 Some Continuous Probability Distributions Therefore, P(45 X 62) = P(−0.5 Z 1.2). P(−0.5 Z 1.2) is shown by the area of the shaded region in Figure 6.11. This area may be found by subtracting the area to the left of the ordinate z = −0.5 from the entire area to the left of z = 1.2. Using Table A.3, we have P(45 X 62) = P(−0.5 Z 1.2) = P(Z 1.2) − P(Z −0.5) = 0.8849 − 0.3085 = 0.5764. Example 6.5: Given that X has a normal distribution with μ = 300 and σ = 50, find the probability that X assumes a value greater than 362. Solution: The normal probability distribution with the desired area shaded is shown in Figure 6.12. To find P(X 362), we need to evaluate the area under the normal curve to the right of x = 362. This can be done by transforming x = 362 to the corresponding z value, obtaining the area to the left of z from Table A.3, and then subtracting this area from 1. We find that z = 362 − 300 50 = 1.24. Hence, P(X 362) = P(Z 1.24) = 1 − P(Z 1.24) = 1 − 0.8925 = 0.1075. x 300 362 50 σ Figure 6.12: Area for Example 6.5. According to Chebyshev’s theorem on page 137, the probability that a random variable assumes a value within 2 standard deviations of the mean is at least 3/4. If the random variable has a normal distribution, the z values corresponding to x1 = μ − 2σ and x2 = μ + 2σ are easily computed to be z1 = (μ − 2σ) − μ σ = −2 and z2 = (μ + 2σ) − μ σ = 2. Hence, P(μ − 2σ X μ + 2σ) = P(−2 Z 2) = P(Z 2) − P(Z −2) = 0.9772 − 0.0228 = 0.9544, which is a much stronger statement than that given by Chebyshev’s theorem.
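Example 6.5 and the Chebyshev comparison above follow the same standardize-then-look-up pattern; a sketch (helper names mine):

```python
from math import erf, sqrt

def phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Example 6.5: P(X > 362) for mu = 300, sigma = 50
z = (362 - 300) / 50                  # = 1.24
p_tail = 1.0 - phi(z)                 # rounds to 0.1075, matching the text

# Probability within two standard deviations of the mean: about 0.9545 for
# the normal (the text's 0.9544 reflects rounding of the tabulated entries),
# far above Chebyshev's lower bound of 3/4.
p_two_sigma = phi(2.0) - phi(-2.0)
print(round(p_tail, 4), round(p_two_sigma, 4))
```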
Using the Normal Curve in Reverse

Sometimes, we are required to find the value of z corresponding to a specified probability that falls between values listed in Table A.3 (see Example 6.6). For convenience, we shall always choose the z value corresponding to the tabular probability that comes closest to the specified probability.

The preceding two examples were solved by going first from a value of x to a z value and then computing the desired area. In Example 6.6, we reverse the process and begin with a known area or probability, find the z value, and then determine x by rearranging the formula
z = (x − μ)/σ to give x = σz + μ.

Example 6.6: Given a normal distribution with μ = 40 and σ = 6, find the value of x that has (a) 45% of the area to the left and (b) 14% of the area to the right.

[Figure 6.13: Areas for Example 6.6.]

Solution:
(a) An area of 0.45 to the left of the desired x value is shaded in Figure 6.13(a). We require a z value that leaves an area of 0.45 to the left. From Table A.3 we find P(Z < −0.13) = 0.45, so the desired z value is −0.13. Hence,
x = (6)(−0.13) + 40 = 39.22.
(b) In Figure 6.13(b), we shade an area equal to 0.14 to the right of the desired x value. This time we require a z value that leaves 0.14 of the area to the right and hence an area of 0.86 to the left. Again, from Table A.3, we find P(Z < 1.08) = 0.86, so the desired z value is 1.08 and
x = (6)(1.08) + 40 = 46.48.
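Working the curve in reverse amounts to inverting Φ. A printed table has no explicit inverse, but since Φ is strictly increasing, a simple bisection recovers z from a given left-tail area. This stdlib-only sketch (the bisection bounds and tolerance are my own choices) reproduces Example 6.6(a):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi_inv(p, lo=-8.0, hi=8.0, tol=1e-10):
    """Invert the standard normal CDF by bisection (valid because Phi is increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example 6.6(a): mu = 40, sigma = 6, 45% of the area to the left
z = phi_inv(0.45)     # about -0.126 (the table rounds this to -0.13)
x = 6 * z + 40        # about 39.25 (table-based answer: 39.22)
```

The small discrepancy with the text's 39.22 comes entirely from rounding z to two decimals in Table A.3.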
6.4 Applications of the Normal Distribution

Some of the many problems for which the normal distribution is applicable are treated in the following examples. The use of the normal curve to approximate binomial probabilities is considered in Section 6.5.

Example 6.7: A certain type of storage battery lasts, on average, 3.0 years with a standard deviation of 0.5 year. Assuming that battery life is normally distributed, find the probability that a given battery will last less than 2.3 years.

Solution: First construct a diagram such as Figure 6.14, showing the given distribution of battery lives and the desired area. To find P(X < 2.3), we need to evaluate the area under the normal curve to the left of 2.3. This is accomplished by finding the area to the left of the corresponding z value. Hence, we find that
z = (2.3 − 3)/0.5 = −1.4,
and then, using Table A.3, we have
P(X < 2.3) = P(Z < −1.4) = 0.0808.

[Figure 6.14: Area for Example 6.7.] [Figure 6.15: Area for Example 6.8.]

Example 6.8: An electrical firm manufactures light bulbs that have a life, before burn-out, that is normally distributed with mean equal to 800 hours and a standard deviation of 40 hours. Find the probability that a bulb burns between 778 and 834 hours.

Solution: The distribution of light bulb life is illustrated in Figure 6.15. The z values corresponding to x1 = 778 and x2 = 834 are
z1 = (778 − 800)/40 = −0.55 and z2 = (834 − 800)/40 = 0.85.
Hence,
P(778 < X < 834) = P(−0.55 < Z < 0.85) = P(Z < 0.85) − P(Z < −0.55) = 0.8023 − 0.2912 = 0.5111.

Example 6.9: In an industrial process, the diameter of a ball bearing is an important measurement. The buyer sets specifications for the diameter to be 3.0 ± 0.01 cm. The
implication is that no part falling outside these specifications will be accepted. It is known that in the process the diameter of a ball bearing has a normal distribution with mean μ = 3.0 and standard deviation σ = 0.005. On average, how many manufactured ball bearings will be scrapped?

Solution: The distribution of diameters is illustrated by Figure 6.16. The values corresponding to the specification limits are x1 = 2.99 and x2 = 3.01. The corresponding z values are
z1 = (2.99 − 3.0)/0.005 = −2.0 and z2 = (3.01 − 3.0)/0.005 = +2.0.
Hence,
P(2.99 < X < 3.01) = P(−2.0 < Z < 2.0).
From Table A.3, P(Z < −2.0) = 0.0228. Due to the symmetry of the normal distribution, we find that
P(Z < −2.0) + P(Z > 2.0) = 2(0.0228) = 0.0456.
As a result, it is anticipated that, on average, 4.56% of manufactured ball bearings will be scrapped.

[Figure 6.16: Area for Example 6.9.] [Figure 6.17: Specifications for Example 6.10.]

Example 6.10: Gauges are used to reject all components for which a certain dimension is not within the specification 1.50 ± d. It is known that this measurement is normally distributed with mean 1.50 and standard deviation 0.2. Determine the value d such that the specifications "cover" 95% of the measurements.

Solution: From Table A.3 we know that
P(−1.96 < Z < 1.96) = 0.95.
Therefore,
1.96 = [(1.50 + d) − 1.50]/0.2,
from which we obtain
d = (0.2)(1.96) = 0.392.
An illustration of the specifications is shown in Figure 6.17.
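Both directions of the specification-limit calculation (the scrap fraction of Example 6.9 and the tolerance half-width of Example 6.10) fit in a few lines. A stdlib-only sketch, with the 97.5% point found by bisection rather than from a table:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi_inv(p, lo=-8.0, hi=8.0, tol=1e-10):
    """Invert the standard normal CDF by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

# Example 6.9: fraction scrapped outside 3.0 +/- 0.01 when sigma = 0.005
# By symmetry this is 2 * Phi(-2).
scrap = 2.0 * phi(-0.01 / 0.005)     # about 0.0455

# Example 6.10: half-width d covering the middle 95% when sigma = 0.2
d = phi_inv(0.975) * 0.2             # 1.96 * 0.2, about 0.392
```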
Example 6.11: A certain machine makes electrical resistors having a mean resistance of 40 ohms and a standard deviation of 2 ohms. Assuming that the resistance follows a normal distribution and can be measured to any degree of accuracy, what percentage of resistors will have a resistance exceeding 43 ohms?

Solution: A percentage is found by multiplying the relative frequency by 100%. Since the relative frequency for an interval is equal to the probability of a value falling in the interval, we must find the area to the right of x = 43 in Figure 6.18. This can be done by transforming x = 43 to the corresponding z value, obtaining the area to the left of z from Table A.3, and then subtracting this area from 1. We find
z = (43 − 40)/2 = 1.5.
Therefore,
P(X > 43) = P(Z > 1.5) = 1 − P(Z < 1.5) = 1 − 0.9332 = 0.0668.
Hence, 6.68% of the resistors will have a resistance exceeding 43 ohms.

[Figure 6.18: Area for Example 6.11.] [Figure 6.19: Area for Example 6.12.]

Example 6.12: Find the percentage of resistances exceeding 43 ohms for Example 6.11 if resistance is measured to the nearest ohm.

Solution: This problem differs from that in Example 6.11 in that we now assign a measurement of 43 ohms to all resistors whose resistances are greater than 42.5 and less than 43.5. We are actually approximating a discrete distribution by means of a continuous normal distribution. The required area is the region shaded to the right of 43.5 in Figure 6.19. We now find that
z = (43.5 − 40)/2 = 1.75.
Hence,
P(X > 43.5) = P(Z > 1.75) = 1 − P(Z < 1.75) = 1 − 0.9599 = 0.0401.
Therefore, 4.01% of the resistances exceed 43 ohms when measured to the nearest ohm. The difference 6.68% − 4.01% = 2.67% between this answer and that of Example 6.11 represents all those resistance values greater than 43 and less than 43.5 that are now being recorded as 43 ohms.
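The effect of rounding to the nearest ohm in Examples 6.11 and 6.12 is just a shift of the cut point from 43 to 43.5. A stdlib sketch of both tail areas:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 40.0, 2.0

# Example 6.11: measurements arbitrarily precise, so P(X > 43)
p_exact = 1.0 - phi((43.0 - mu) / sigma)    # about 0.0668

# Example 6.12: measured to the nearest ohm, so "exceeds 43" means X > 43.5
p_rounded = 1.0 - phi((43.5 - mu) / sigma)  # about 0.0401
```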
Example 6.13: The average grade for an exam is 74, and the standard deviation is 7. If 12% of the class is given As, and the grades are curved to follow a normal distribution, what is the lowest possible A and the highest possible B?

Solution: In this example, we begin with a known area of probability, find the z value, and then determine x from the formula x = σz + μ. An area of 0.12, corresponding to the fraction of students receiving As, is shaded in Figure 6.20. We require a z value that leaves 0.12 of the area to the right and, hence, an area of 0.88 to the left. From Table A.3, P(Z < 1.18) has the closest value to 0.88, so the desired z value is 1.18. Hence,
x = (7)(1.18) + 74 = 82.26.
Therefore, the lowest A is 83 and the highest B is 82.

[Figure 6.20: Area for Example 6.13.] [Figure 6.21: Area for Example 6.14.]

Example 6.14: Refer to Example 6.13 and find the sixth decile.

Solution: The sixth decile, written D6, is the x value that leaves 60% of the area to the left, as shown in Figure 6.21. From Table A.3 we find P(Z < 0.25) ≈ 0.6, so the desired z value is 0.25. Now
x = (7)(0.25) + 74 = 75.75.
Hence, D6 = 75.75. That is, 60% of the grades are 75 or less.

Exercises

6.1 Given a continuous uniform distribution, show that
(a) μ = (A + B)/2 and
(b) σ² = (B − A)²/12.

6.2 Suppose X follows a continuous uniform distribution from 1 to 5. Determine the conditional probability P(X > 2.5 | X ≤ 4).

6.3 The daily amount of coffee, in liters, dispensed by a machine located in an airport lobby is a random variable X having a continuous uniform distribution with A = 7 and B = 10. Find the probability that on a given day the amount of coffee dispensed by this machine will be
(a) at most 8.8 liters;
(b) more than 7.4 liters but less than 9.5 liters;
(c) at least 8.5 liters.

6.4 A bus arrives every 10 minutes at a bus stop.
It is assumed that the waiting time for a particular individual is a random variable with a continuous uniform distribution.
(a) What is the probability that the individual waits more than 7 minutes?
(b) What is the probability that the individual waits between 2 and 7 minutes?

6.5 Given a standard normal distribution, find the area under the curve that lies
(a) to the left of z = −1.39;
(b) to the right of z = 1.96;
(c) between z = −2.16 and z = −0.65;
(d) to the left of z = 1.43;
(e) to the right of z = −0.89;
(f) between z = −0.48 and z = 1.74.

6.6 Find the value of z if the area under a standard normal curve
(a) to the right of z is 0.3622;
(b) to the left of z is 0.1131;
(c) between 0 and z, with z > 0, is 0.4838;
(d) between −z and z, with z > 0, is 0.9500.

6.7 Given a standard normal distribution, find the value of k such that
(a) P(Z > k) = 0.2946;
(b) P(Z < k) = 0.0427;
(c) P(−0.93 < Z < k) = 0.7235.

6.8 Given a normal distribution with μ = 30 and σ = 6, find
(a) the normal curve area to the right of x = 17;
(b) the normal curve area to the left of x = 22;
(c) the normal curve area between x = 32 and x = 41;
(d) the value of x that has 80% of the normal curve area to the left;
(e) the two values of x that contain the middle 75% of the normal curve area.

6.9 Given the normally distributed variable X with mean 18 and standard deviation 2.5, find
(a) P(X < 15);
(b) the value of k such that P(X < k) = 0.2236;
(c) the value of k such that P(X > k) = 0.1814;
(d) P(17 < X < 21).

6.10 According to Chebyshev's theorem, the probability that any random variable assumes a value within 3 standard deviations of the mean is at least 8/9. If it is known that the probability distribution of a random variable X is normal with mean μ and variance σ², what is the exact value of P(μ − 3σ < X < μ + 3σ)?

6.11 A soft-drink machine is regulated so that it discharges an average of 200 milliliters per cup.
If the amount of drink is normally distributed with a standard deviation equal to 15 milliliters,
(a) what fraction of the cups will contain more than 224 milliliters?
(b) what is the probability that a cup contains between 191 and 209 milliliters?
(c) how many cups will probably overflow if 230-milliliter cups are used for the next 1000 drinks?
(d) below what value do we get the smallest 25% of the drinks?

6.12 The loaves of rye bread distributed to local stores by a certain bakery have an average length of 30 centimeters and a standard deviation of 2 centimeters. Assuming that the lengths are normally distributed, what percentage of the loaves are
(a) longer than 31.7 centimeters?
(b) between 29.3 and 33.5 centimeters in length?
(c) shorter than 25.5 centimeters?

6.13 A research scientist reports that mice will live an average of 40 months when their diets are sharply restricted and then enriched with vitamins and proteins. Assuming that the lifetimes of such mice are normally distributed with a standard deviation of 6.3 months, find the probability that a given mouse will live
(a) more than 32 months;
(b) less than 28 months;
(c) between 37 and 49 months.

6.14 The finished inside diameter of a piston ring is normally distributed with a mean of 10 centimeters and a standard deviation of 0.03 centimeter.
(a) What proportion of rings will have inside diameters exceeding 10.075 centimeters?
(b) What is the probability that a piston ring will have an inside diameter between 9.97 and 10.03 centimeters?
(c) Below what value of inside diameter will 15% of the piston rings fall?

6.15 A lawyer commutes daily from his suburban home to his midtown office. The average time for a one-way trip is 24 minutes, with a standard deviation of 3.8 minutes. Assume the distribution of trip times to be normally distributed.
(a) What is the probability that a trip will take at least 1/2 hour?
(b) If the office opens at 9:00 A.M. and the lawyer leaves his house at 8:45 A.M.
daily, what percentage of the time is he late for work?
(c) If he leaves the house at 8:35 A.M. and coffee is served at the office from 8:50 A.M. until 9:00 A.M., what is the probability that he misses coffee?
(d) Find the length of time above which we find the slowest 15% of the trips.
(e) Find the probability that 2 of the next 3 trips will take at least 1/2 hour.

6.16 In the November 1990 issue of Chemical Engineering Progress, a study discussed the percent purity of oxygen from a certain supplier. Assume that the mean was 99.61 with a standard deviation of 0.08. Assume that the distribution of percent purity was approximately normal.
(a) What percentage of the purity values would you expect to be between 99.5 and 99.7?
(b) What purity value would you expect to exceed exactly 5% of the population?

6.17 The average life of a certain type of small motor is 10 years with a standard deviation of 2 years. The manufacturer replaces free all motors that fail while under guarantee. If she is willing to replace only 3% of the motors that fail, how long a guarantee should be offered? Assume that the lifetime of a motor follows a normal distribution.

6.18 The heights of 1000 students are normally distributed with a mean of 174.5 centimeters and a standard deviation of 6.9 centimeters. Assuming that the heights are recorded to the nearest half-centimeter, how many of these students would you expect to have heights
(a) less than 160.0 centimeters?
(b) between 171.5 and 182.0 centimeters inclusive?
(c) equal to 175.0 centimeters?
(d) greater than or equal to 188.0 centimeters?

6.19 A company pays its employees an average wage of $15.90 an hour with a standard deviation of $1.50. If the wages are approximately normally distributed and paid to the nearest cent,
(a) what percentage of the workers receive wages between $13.75 and $16.22 an hour inclusive?
(b) the highest 5% of the employee hourly wages is greater than what amount?
6.20 The weights of a large number of miniature poodles are approximately normally distributed with a mean of 8 kilograms and a standard deviation of 0.9 kilogram. If measurements are recorded to the nearest tenth of a kilogram, find the fraction of these poodles with weights
(a) over 9.5 kilograms;
(b) of at most 8.6 kilograms;
(c) between 7.3 and 9.1 kilograms inclusive.

6.21 The tensile strength of a certain metal component is normally distributed with a mean of 10,000 kilograms per square centimeter and a standard deviation of 100 kilograms per square centimeter. Measurements are recorded to the nearest 50 kilograms per square centimeter.
(a) What proportion of these components exceed 10,150 kilograms per square centimeter in tensile strength?
(b) If specifications require that all components have tensile strength between 9800 and 10,200 kilograms per square centimeter inclusive, what proportion of pieces would we expect to scrap?

6.22 If a set of observations is normally distributed, what percent of these differ from the mean by
(a) more than 1.3σ?
(b) less than 0.52σ?

6.23 The IQs of 600 applicants to a certain college are approximately normally distributed with a mean of 115 and a standard deviation of 12. If the college requires an IQ of at least 95, how many of these students will be rejected on this basis of IQ, regardless of their other qualifications? Note that IQs are recorded to the nearest integers.

6.5 Normal Approximation to the Binomial

Probabilities associated with binomial experiments are readily obtainable from the formula b(x; n, p) of the binomial distribution or from Table A.1 when n is small. In addition, binomial probabilities are readily available in many computer software packages. However, it is instructive to learn the relationship between the binomial and the normal distribution.
In Section 5.5, we illustrated how the Poisson distribution can be used to approximate binomial probabilities when n is quite large and p is very close to 0 or 1. Both the binomial and the Poisson distributions
are discrete. The first application of a continuous probability distribution to approximate probabilities over a discrete sample space was demonstrated in Example 6.12, where the normal curve was used. The normal distribution is often a good approximation to a discrete distribution when the latter takes on a symmetric bell shape. From a theoretical point of view, some distributions converge to the normal as their parameters approach certain limits. The normal distribution is a convenient approximating distribution because its cumulative distribution function is so easily tabled. The binomial distribution is nicely approximated by the normal in practical problems when one works with the cumulative distribution function. We now state a theorem that allows us to use areas under the normal curve to approximate binomial probabilities when n is sufficiently large.

Theorem 6.3: If X is a binomial random variable with mean μ = np and variance σ² = npq, then the limiting form of the distribution of
Z = (X − np)/√(npq),
as n → ∞, is the standard normal distribution n(z; 0, 1).

It turns out that the normal distribution with μ = np and σ² = np(1 − p) not only provides a very accurate approximation to the binomial distribution when n is large and p is not extremely close to 0 or 1 but also provides a fairly good approximation even when n is small and p is reasonably close to 1/2.

To illustrate the normal approximation to the binomial distribution, we first draw the histogram for b(x; 15, 0.4) and then superimpose the particular normal curve having the same mean and variance as the binomial variable X. Hence, we draw a normal curve with
μ = np = (15)(0.4) = 6 and σ² = npq = (15)(0.4)(0.6) = 3.6.
The histogram of b(x; 15, 0.4) and the corresponding superimposed normal curve, which is completely determined by its mean and variance, are illustrated in Figure 6.22.
[Figure 6.22: Normal approximation of b(x; 15, 0.4).]
The exact probability that the binomial random variable X assumes a given value x is equal to the area of the bar whose base is centered at x. For example, the exact probability that X assumes the value 4 is equal to the area of the rectangle with base centered at x = 4. Using Table A.1, we find this area to be
P(X = 4) = b(4; 15, 0.4) = 0.1268,
which is approximately equal to the area of the shaded region under the normal curve between the two ordinates x1 = 3.5 and x2 = 4.5 in Figure 6.23. Converting to z values, we have
z1 = (3.5 − 6)/1.897 = −1.32 and z2 = (4.5 − 6)/1.897 = −0.79.

[Figure 6.23: Normal approximation of b(x; 15, 0.4) and of Σ_{x=7}^{9} b(x; 15, 0.4).]

If X is a binomial random variable and Z a standard normal variable, then
P(X = 4) = b(4; 15, 0.4) ≈ P(−1.32 < Z < −0.79) = P(Z < −0.79) − P(Z < −1.32) = 0.2148 − 0.0934 = 0.1214.
This agrees very closely with the exact value of 0.1268.

The normal approximation is most useful in calculating binomial sums for large values of n. Referring to Figure 6.23, we might be interested in the probability that X assumes a value from 7 to 9 inclusive. The exact probability is given by
P(7 ≤ X ≤ 9) = Σ_{x=0}^{9} b(x; 15, 0.4) − Σ_{x=0}^{6} b(x; 15, 0.4) = 0.9662 − 0.6098 = 0.3564,
which is equal to the sum of the areas of the rectangles with bases centered at x = 7, 8, and 9. For the normal approximation, we find the area of the shaded region under the curve between the ordinates x1 = 6.5 and x2 = 9.5 in Figure 6.23. The corresponding z values are
z1 = (6.5 − 6)/1.897 = 0.26 and z2 = (9.5 − 6)/1.897 = 1.85.
Now,
P(7 ≤ X ≤ 9) ≈ P(0.26 < Z < 1.85) = P(Z < 1.85) − P(Z < 0.26) = 0.9678 − 0.6026 = 0.3652.
Once again, the normal curve approximation provides a value that agrees very closely with the exact value of 0.3564. The degree of accuracy, which depends on how well the curve fits the histogram, will increase as n increases. This is particularly true when p is not very close to 1/2 and the histogram is no longer symmetric. Figures 6.24 and 6.25 show the histograms for b(x; 6, 0.2) and b(x; 15, 0.2), respectively. It is evident that a normal curve would fit the histogram considerably better when n = 15 than when n = 6.

[Figure 6.24: Histogram for b(x; 6, 0.2).] [Figure 6.25: Histogram for b(x; 15, 0.2).]

In our illustrations of the normal approximation to the binomial, it became apparent that if we seek the area under the normal curve to the left of, say, x, it is more accurate to use x + 0.5. This is a correction to accommodate the fact that a discrete distribution is being approximated by a continuous distribution. The correction +0.5 is called a continuity correction. The foregoing discussion leads to the following formal normal approximation to the binomial.

Normal Approximation to the Binomial Distribution: Let X be a binomial random variable with parameters n and p. For large n, X has approximately a normal distribution with μ = np and σ² = npq = np(1 − p), and
P(X ≤ x) = Σ_{k=0}^{x} b(k; n, p) ≈ area under the normal curve to the left of x + 0.5 = P(Z ≤ (x + 0.5 − np)/√(npq)),
and the approximation will be good if np and n(1 − p) are greater than or equal to 5.

As we indicated earlier, the quality of the approximation is quite good for large n. If p is close to 1/2, a moderate or small sample size will be sufficient for a reasonable approximation. We offer Table 6.1 as an indication of the quality of the
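The boxed approximation can be checked directly against the exact binomial CDF. A stdlib sketch for the b(x; 15, 0.4) illustration above, with the continuity correction applied:

```python
from math import comb, erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def binom_cdf(x, n, p):
    """Exact P(X <= x) for a binomial(n, p) random variable."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def binom_cdf_normal(x, n, p):
    """Normal approximation with continuity correction:
    P(X <= x) ~ P(Z <= (x + 0.5 - np) / sqrt(npq))."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return phi((x + 0.5 - mu) / sigma)

n, p = 15, 0.4
exact = binom_cdf(4, n, p)           # about 0.217
approx = binom_cdf_normal(4, n, p)   # about 0.215
```

Even at this small n, the two cumulative probabilities agree to roughly two decimal places, consistent with the np ≥ 5 and n(1 − p) ≥ 5 rule of thumb.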
approximation. Both the normal approximation and the true binomial cumulative probabilities are given. Notice that at p = 0.05 and p = 0.10, the approximation is fairly crude for n = 10. However, even for n = 10, note the improvement for p = 0.50. On the other hand, when p is fixed at p = 0.05, note the improvement of the approximation as we go from n = 20 to n = 100.

Table 6.1: Normal Approximation and True Cumulative Binomial Probabilities

      p = 0.05, n = 10    p = 0.10, n = 10    p = 0.50, n = 10
 r    Binomial  Normal    Binomial  Normal    Binomial  Normal
 0    0.5987    0.5000    0.3487    0.2981    0.0010    0.0022
 1    0.9139    0.9265    0.7361    0.7019    0.0107    0.0136
 2    0.9885    0.9981    0.9298    0.9429    0.0547    0.0571
 3    0.9990    1.0000    0.9872    0.9959    0.1719    0.1711
 4    1.0000    1.0000    0.9984    0.9999    0.3770    0.3745
 5                        1.0000    1.0000    0.6230    0.6255
 6                                            0.8281    0.8289
 7                                            0.9453    0.9429
 8                                            0.9893    0.9864
 9                                            0.9990    0.9978
10                                            1.0000    0.9997

      p = 0.05
      n = 20              n = 50              n = 100
 r    Binomial  Normal    Binomial  Normal    Binomial  Normal
 0    0.3585    0.3015    0.0769    0.0968    0.0059    0.0197
 1    0.7358    0.6985    0.2794    0.2578    0.0371    0.0537
 2    0.9245    0.9382    0.5405    0.5000    0.1183    0.1251
 3    0.9841    0.9948    0.7604    0.7422    0.2578    0.2451
 4    0.9974    0.9998    0.8964    0.9032    0.4360    0.4090
 5    0.9997    1.0000    0.9622    0.9744    0.6160    0.5910
 6    1.0000    1.0000    0.9882    0.9953    0.7660    0.7549
 7                        0.9968    0.9994    0.8720    0.8749
 8                        0.9992    0.9999    0.9369    0.9463
 9                        0.9998    1.0000    0.9718    0.9803
10                        1.0000    1.0000    0.9885    0.9941

Example 6.15: The probability that a patient recovers from a rare blood disease is 0.4. If 100 people are known to have contracted this disease, what is the probability that fewer than 30 survive?

Solution: Let the binomial variable X represent the number of patients who survive. Since n = 100, we should obtain fairly accurate results using the normal-curve approximation with
μ = np = (100)(0.4) = 40 and σ = √(npq) = √((100)(0.4)(0.6)) = 4.899.
To obtain the desired probability, we have to find the area to the left of x = 29.5.
The z value corresponding to 29.5 is
z = (29.5 − 40)/4.899 = −2.14,
and the probability of fewer than 30 of the 100 patients surviving is given by the shaded region in Figure 6.26. Hence,
P(X < 30) ≈ P(Z < −2.14) = 0.0162.

[Figure 6.26: Area for Example 6.15.] [Figure 6.27: Area for Example 6.16.]

Example 6.16: A multiple-choice quiz has 200 questions, each with 4 possible answers of which only 1 is correct. What is the probability that sheer guesswork yields from 25 to 30 correct answers for the 80 of the 200 problems about which the student has no knowledge?

Solution: The probability of guessing a correct answer for each of the 80 questions is p = 1/4. If X represents the number of correct answers resulting from guesswork, then
P(25 ≤ X ≤ 30) = Σ_{x=25}^{30} b(x; 80, 1/4).
Using the normal curve approximation with
μ = np = (80)(1/4) = 20 and σ = √(npq) = √((80)(1/4)(3/4)) = 3.873,
we need the area between x1 = 24.5 and x2 = 30.5. The corresponding z values are
z1 = (24.5 − 20)/3.873 = 1.16 and z2 = (30.5 − 20)/3.873 = 2.71.
The probability of correctly guessing from 25 to 30 questions is given by the shaded region in Figure 6.27. From Table A.3 we find that
P(25 ≤ X ≤ 30) = Σ_{x=25}^{30} b(x; 80, 0.25) ≈ P(1.16 < Z < 2.71) = P(Z < 2.71) − P(Z < 1.16) = 0.9966 − 0.8770 = 0.1196.
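Examples 6.15 and 6.16 both reduce to standardizing with the continuity correction. A stdlib sketch reproducing the two table-based answers:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Example 6.15: n = 100, p = 0.4; P(X < 30) ~ area to the left of 29.5
mu, sigma = 100 * 0.4, sqrt(100 * 0.4 * 0.6)
p_survive = phi((29.5 - mu) / sigma)                               # about 0.016

# Example 6.16: n = 80, p = 1/4; P(25 <= X <= 30) ~ area between 24.5 and 30.5
mu2, sigma2 = 80 * 0.25, sqrt(80 * 0.25 * 0.75)
p_guess = phi((30.5 - mu2) / sigma2) - phi((24.5 - mu2) / sigma2)  # about 0.12
```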
Exercises

6.24 A coin is tossed 400 times. Use the normal curve approximation to find the probability of obtaining
(a) between 185 and 210 heads inclusive;
(b) exactly 205 heads;
(c) fewer than 176 or more than 227 heads.

6.25 A process for manufacturing an electronic component yields items of which 1% are defective. A quality control plan is to select 100 items from the process, and if none are defective, the process continues. Use the normal approximation to the binomial to find
(a) the probability that the process continues given the sampling plan described;
(b) the probability that the process continues even if the process has gone bad (i.e., if the frequency of defective components has shifted to 5.0% defective).

6.26 A process yields 10% defective items. If 100 items are randomly selected from the process, what is the probability that the number of defectives
(a) exceeds 13?
(b) is less than 8?

6.27 The probability that a patient recovers from a delicate heart operation is 0.9. Of the next 100 patients having this operation, what is the probability that
(a) between 84 and 95 inclusive survive?
(b) fewer than 86 survive?

6.28 Researchers at George Washington University and the National Institutes of Health claim that approximately 75% of people believe "tranquilizers work very well to make a person more calm and relaxed." Of the next 80 people interviewed, what is the probability that
(a) at least 50 are of this opinion?
(b) at most 56 are of this opinion?

6.29 If 20% of the residents in a U.S. city prefer a white telephone over any other color available, what is the probability that among the next 1000 telephones installed in that city
(a) between 170 and 185 inclusive will be white?
(b) at least 210 but not more than 225 will be white?

6.30 A drug manufacturer claims that a certain drug cures a blood disease, on the average, 80% of the time.
To check the claim, government testers use the drug on a sample of 100 individuals and decide to accept the claim if 75 or more are cured.
(a) What is the probability that the claim will be rejected when the cure probability is, in fact, 0.8?
(b) What is the probability that the claim will be accepted by the government when the cure probability is as low as 0.7?

6.31 One-sixth of the male freshmen entering a large state school are out-of-state students. If the students are assigned at random to dormitories, 180 to a building, what is the probability that in a given dormitory at least one-fifth of the students are from out of state?

6.32 A pharmaceutical company knows that approximately 5% of its birth-control pills have an ingredient that is below the minimum strength, thus rendering the pill ineffective. What is the probability that fewer than 10 in a sample of 200 pills will be ineffective?

6.33 Statistics released by the National Highway Traffic Safety Administration and the National Safety Council show that on an average weekend night, 1 out of every 10 drivers on the road is drunk. If 400 drivers are randomly checked next Saturday night, what is the probability that the number of drunk drivers will be
(a) less than 32?
(b) more than 49?
(c) at least 35 but less than 47?

6.34 A pair of dice is rolled 180 times. What is the probability that a total of 7 occurs
(a) at least 25 times?
(b) between 33 and 41 times inclusive?
(c) exactly 30 times?

6.35 A company produces component parts for an engine. Parts specifications suggest that 95% of items meet specifications. The parts are shipped to customers in lots of 100.
(a) What is the probability that more than 2 items in a given lot will be defective?
(b) What is the probability that more than 10 items in a lot will be defective?
6.36 A common practice of airline companies is to sell more tickets for a particular flight than there are seats on the plane, because customers who buy tickets do not always show up for the flight. Suppose that the percentage of no-shows at flight time is 2%. For a particular flight with 197 seats, a total of 200 tickets were sold. What is the probability that the airline overbooked this flight?

6.37 The serum cholesterol level X in 14-year-old boys has approximately a normal distribution with mean 170 and standard deviation 30.
(a) Find the probability that the serum cholesterol level of a randomly chosen 14-year-old boy exceeds 230.
(b) In a middle school there are 300 14-year-old boys. Find the probability that at least 8 boys have a serum cholesterol level that exceeds 230.

6.38 A telemarketing company has a special letter-opening machine that opens and removes the contents of an envelope. If the envelope is fed improperly into the machine, the contents of the envelope may not be removed or may be damaged. In this case, the machine is said to have "failed."
(a) If the machine has a probability of failure of 0.01, what is the probability of more than 1 failure occurring in a batch of 20 envelopes?
(b) If the probability of failure of the machine is 0.01 and a batch of 500 envelopes is to be opened, what is the probability that more than 8 failures will occur?

6.6 Gamma and Exponential Distributions

Although the normal distribution can be used to solve many problems in engineering and science, there are still numerous situations that require different types of density functions. Two such density functions, the gamma and exponential distributions, are discussed in this section.

It turns out that the exponential distribution is a special case of the gamma distribution. Both find a large number of applications. The exponential and gamma distributions play an important role in both queuing theory and reliability problems. Time between arrivals at service facilities and time to failure of component parts and electrical systems often are nicely modeled by the exponential distribution. The relationship between the gamma and the exponential allows the gamma to be used in similar types of problems.
More details and illustrations will be supplied later in the section.

The gamma distribution derives its name from the well-known gamma function, studied in many areas of mathematics. Before we proceed to the gamma distribution, let us review this function and some of its important properties.

Definition 6.2: The gamma function is defined by
Γ(α) = ∫₀^∞ x^(α−1) e^(−x) dx, for α > 0.

The following are a few simple properties of the gamma function.
(a) Γ(n) = (n − 1)(n − 2) · · · (1)Γ(1), for a positive integer n.
To see the proof, integrating by parts with u = x^(α−1) and dv = e^(−x) dx, we obtain
Γ(α) = [−e^(−x) x^(α−1)]₀^∞ + ∫₀^∞ e^(−x) (α − 1) x^(α−2) dx = (α − 1) ∫₀^∞ x^(α−2) e^(−x) dx, for α > 1,
which yields the recursion formula
Γ(α) = (α − 1)Γ(α − 1).
The result follows after repeated application of the recursion formula. Using this result, we can easily show the following two properties.
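These properties are easy to confirm numerically, since the Python standard library exposes the gamma function directly. A quick stdlib check of the recursion formula, the factorial identity (property (b) below), and property (d):

```python
from math import gamma, pi, sqrt, isclose

# Recursion formula: Gamma(alpha) = (alpha - 1) * Gamma(alpha - 1),
# checked at an arbitrary non-integer point
alpha = 3.7
lhs = gamma(alpha)
rhs = (alpha - 1) * gamma(alpha - 1)

# Property (b): Gamma(n) = (n - 1)! for a positive integer n, e.g. Gamma(5) = 4! = 24
g5 = gamma(5)

# Property (d): Gamma(1/2) = sqrt(pi)
g_half = gamma(0.5)
```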
(b) $\Gamma(n) = (n-1)!$ for a positive integer $n$.
(c) $\Gamma(1) = 1$.

Furthermore, we have the following property of $\Gamma(\alpha)$, which is left for the reader to verify (see Exercise 6.39 on page 206).

(d) $\Gamma(1/2) = \sqrt{\pi}$.

The following is the definition of the gamma distribution.

Gamma Distribution: The continuous random variable X has a gamma distribution, with parameters α and β, if its density function is given by
$$f(x;\alpha,\beta) = \begin{cases} \dfrac{1}{\beta^\alpha \Gamma(\alpha)}\, x^{\alpha-1} e^{-x/\beta}, & x > 0,\\ 0, & \text{elsewhere},\end{cases}$$
where α > 0 and β > 0.

Graphs of several gamma distributions are shown in Figure 6.28 for certain specified values of the parameters α and β. The special gamma distribution for which α = 1 is called the exponential distribution.

Figure 6.28: Gamma distributions, with β = 1 and α = 1, 2, and 4.

Exponential Distribution: The continuous random variable X has an exponential distribution, with parameter β, if its density function is given by
$$f(x;\beta) = \begin{cases} \dfrac{1}{\beta}\, e^{-x/\beta}, & x > 0,\\ 0, & \text{elsewhere},\end{cases}$$
where β > 0.
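As a quick numerical sanity check on these definitions, the following sketch uses only the Python standard library (the function names are ours, not the book's) to verify that math.gamma reproduces properties (b) and (d), that the gamma density integrates to 1, and that it reduces to the exponential density when α = 1:

```python
import math

def gamma_pdf(x, alpha, beta):
    """Gamma density f(x; alpha, beta) = x^(alpha-1) e^(-x/beta) / (beta^alpha Gamma(alpha))."""
    if x <= 0:
        return 0.0
    return x**(alpha - 1) * math.exp(-x / beta) / (beta**alpha * math.gamma(alpha))

def exponential_pdf(x, beta):
    """Exponential density f(x; beta) = (1/beta) e^(-x/beta)."""
    return math.exp(-x / beta) / beta if x > 0 else 0.0

# Properties (b) and (d) of the gamma function.
print(math.gamma(5) == math.factorial(4))                 # True: Gamma(n) = (n-1)!
print(abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12)  # True: Gamma(1/2) = sqrt(pi)

# The density should integrate to 1 (midpoint rule over a generous range).
def integrate(f, a, b, n=200_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(lambda x: gamma_pdf(x, alpha=2, beta=1), 0, 50)
print(round(total, 4))  # ≈ 1.0

# With alpha = 1 the gamma density coincides with the exponential density.
print(gamma_pdf(2.5, 1, 3) == exponential_pdf(2.5, 3))    # True
```

The midpoint-rule integration is a crude stand-in for the incomplete gamma tables used later in this section, but it is enough to confirm the density is properly normalized.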
The following theorem and corollary give the mean and variance of the gamma and exponential distributions.

Theorem 6.4: The mean and variance of the gamma distribution are
$$\mu = \alpha\beta \quad \text{and} \quad \sigma^2 = \alpha\beta^2.$$
The proof of this theorem is found in Appendix A.26.

Corollary 6.1: The mean and variance of the exponential distribution are
$$\mu = \beta \quad \text{and} \quad \sigma^2 = \beta^2.$$

Relationship to the Poisson Process

We shall pursue applications of the exponential distribution and then return to the gamma distribution. The most important applications of the exponential distribution are situations where the Poisson process applies (see Section 5.5). The reader should recall that the Poisson process allows for the use of the discrete distribution called the Poisson distribution. Recall that the Poisson distribution is used to compute the probability of specific numbers of "events" during a particular period of time or span of space. In many applications, the time period or span of space is the random variable. For example, an industrial engineer may be interested in modeling the time T between arrivals at a congested intersection during rush hour in a large city. An arrival represents the Poisson event.

The relationship between the exponential distribution (often called the negative exponential) and the Poisson process is quite simple. In Chapter 5, the Poisson distribution was developed as a single-parameter distribution with parameter λ, where λ may be interpreted as the mean number of events per unit "time." Consider now the random variable described by the time required for the first event to occur. Using the Poisson distribution, we find that the probability of no events occurring in the span up to time t is given by
$$p(0; \lambda t) = \frac{e^{-\lambda t}(\lambda t)^0}{0!} = e^{-\lambda t}.$$
We can now make use of the above and let X be the time to the first Poisson event.
The probability that the length of time until the first event will exceed x is the same as the probability that no Poisson events will occur in x. The latter, of course, is given by $e^{-\lambda x}$. As a result,
$$P(X > x) = e^{-\lambda x}.$$
Thus, the cumulative distribution function for X is given by
$$P(0 \le X \le x) = 1 - e^{-\lambda x}.$$
Now, in order that we may recognize the presence of the exponential distribution, we differentiate the cumulative distribution function above to obtain the density
function
$$f(x) = \lambda e^{-\lambda x},$$
which is the density function of the exponential distribution with λ = 1/β.

Applications of the Exponential and Gamma Distributions

In the foregoing, we provided the foundation for the application of the exponential distribution in "time to arrival" or time to Poisson event problems. We will illustrate some applications here and then proceed to discuss the role of the gamma distribution in these modeling applications. Notice that the mean of the exponential distribution is the parameter β, the reciprocal of the parameter in the Poisson distribution. The reader should recall that it is often said that the Poisson distribution has no memory, implying that occurrences in successive time periods are independent. The important parameter β is the mean time between events. In reliability theory, where equipment failure often conforms to this Poisson process, β is called mean time between failures. Many equipment breakdowns do follow the Poisson process, and thus the exponential distribution does apply. Other applications include survival times in biomedical experiments and computer response time.

In the following example, we show a simple application of the exponential distribution to a problem in reliability. The binomial distribution also plays a role in the solution.

Example 6.17: Suppose that a system contains a certain type of component whose time, in years, to failure is given by T. The random variable T is modeled nicely by the exponential distribution with mean time to failure β = 5. If 5 of these components are installed in different systems, what is the probability that at least 2 are still functioning at the end of 8 years?

Solution: The probability that a given component is still functioning after 8 years is given by
$$P(T > 8) = \frac{1}{5}\int_8^\infty e^{-t/5}\,dt = e^{-8/5} \approx 0.2.$$
Let X represent the number of components functioning after 8 years.
Then using the binomial distribution, we have
$$P(X \ge 2) = \sum_{x=2}^{5} b(x; 5, 0.2) = 1 - \sum_{x=0}^{1} b(x; 5, 0.2) = 1 - 0.7373 = 0.2627.$$

There are exercises and examples in Chapter 3 where the reader has already encountered the exponential distribution. Others involving waiting time and reliability include Example 6.24 and some of the exercises and review exercises at the end of this chapter.

The Memoryless Property and Its Effect on the Exponential Distribution

The types of applications of the exponential distribution in reliability and component or machine lifetime problems are influenced by the memoryless (or lack-of-memory) property of the exponential distribution. For example, in the case of,
say, an electronic component where lifetime has an exponential distribution, the probability that the component lasts, say, $t$ hours, that is, $P(X \ge t)$, is the same as the conditional probability
$$P(X \ge t_0 + t \mid X \ge t_0).$$
So if the component "makes it" to $t_0$ hours, the probability of lasting an additional $t$ hours is the same as the probability of lasting $t$ hours. There is no "punishment" through wear that may have ensued for lasting the first $t_0$ hours. Thus, the exponential distribution is more appropriate when the memoryless property is justified. But if the failure of the component is a result of gradual or slow wear (as in mechanical wear), then the exponential does not apply and either the gamma or the Weibull distribution (Section 6.10) may be more appropriate.

The importance of the gamma distribution lies in the fact that it defines a family of which other distributions are special cases. But the gamma itself has important applications in waiting time and reliability theory. Whereas the exponential distribution describes the time until the occurrence of a Poisson event (or the time between Poisson events), the time (or space) occurring until a specified number of Poisson events occur is a random variable whose density function is described by the gamma distribution. This specific number of events is the parameter α in the gamma density function. Thus, it becomes easy to understand that when α = 1, the special case of the exponential distribution occurs. The gamma density can be developed from its relationship to the Poisson process in much the same manner as we developed the exponential density. The details are left to the reader.

The following is a numerical example of the use of the gamma distribution in a waiting-time application.

Example 6.18: Suppose that telephone calls arriving at a particular switchboard follow a Poisson process with an average of 5 calls coming per minute.
What is the probability that up to a minute will elapse by the time 2 calls have come in to the switchboard?

Solution: The Poisson process applies, with time until 2 Poisson events following a gamma distribution with β = 1/5 and α = 2. Denote by X the time in minutes that transpires before 2 calls come. The required probability is given by
$$P(X \le 1) = \int_0^1 \frac{1}{\beta^2}\, x e^{-x/\beta}\,dx = 25\int_0^1 x e^{-5x}\,dx = 1 - e^{-5}(1 + 5) = 0.96.$$

While the origin of the gamma distribution deals in time (or space) until the occurrence of α Poisson events, there are many instances where a gamma distribution works very well even though there is no clear Poisson structure. This is particularly true for survival time problems in both engineering and biomedical applications.

Example 6.19: In a biomedical study with rats, a dose-response investigation is used to determine the effect of the dose of a toxicant on their survival time. The toxicant is one that is frequently discharged into the atmosphere from jet fuel. For a certain dose of the toxicant, the study determines that the survival time, in weeks, has a gamma distribution with α = 5 and β = 10. What is the probability that a rat survives no longer than 60 weeks?
Solution: Let the random variable X be the survival time (time to death). The required probability is
$$P(X \le 60) = \frac{1}{\beta^5}\int_0^{60} \frac{x^{\alpha-1} e^{-x/\beta}}{\Gamma(5)}\,dx.$$
The integral above can be solved through the use of the incomplete gamma function, which becomes the cumulative distribution function for the gamma distribution. This function is written as
$$F(x; \alpha) = \int_0^x \frac{y^{\alpha-1} e^{-y}}{\Gamma(\alpha)}\,dy.$$
If we let y = x/β, so x = βy, we have
$$P(X \le 60) = \int_0^6 \frac{y^4 e^{-y}}{\Gamma(5)}\,dy,$$
which is denoted as F(6; 5) in the table of the incomplete gamma function in Appendix A.23. Note that this allows a quick computation of probabilities for the gamma distribution. Indeed, for this problem, the probability that the rat survives no longer than 60 weeks is given by
$$P(X \le 60) = F(6; 5) = 0.715.$$

Example 6.20: It is known, from previous data, that the length of time in months between customer complaints about a certain product is a gamma distribution with α = 2 and β = 4. Changes were made to tighten quality control requirements. Following these changes, 20 months passed before the first complaint. Does it appear as if the quality control tightening was effective?

Solution: Let X be the time to the first complaint, which, under conditions prior to the changes, followed a gamma distribution with α = 2 and β = 4. The question centers around how rare X ≥ 20 is, given that α and β remain at values 2 and 4, respectively. In other words, under the prior conditions is a "time to complaint" as large as 20 months reasonable? Thus, following the solution to Example 6.19,
$$P(X \ge 20) = 1 - \frac{1}{\beta^\alpha}\int_0^{20} \frac{x^{\alpha-1} e^{-x/\beta}}{\Gamma(\alpha)}\,dx.$$
Again, using y = x/β, we have
$$P(X \ge 20) = 1 - \int_0^5 \frac{y e^{-y}}{\Gamma(2)}\,dy = 1 - F(5; 2) = 1 - 0.96 = 0.04,$$
where F(5; 2) = 0.96 is found from Table A.23. As a result, an observed time to complaint as large as 20 months is very unlikely under a gamma distribution with α = 2 and β = 4, so the data do not support those prior conditions.
Thus, it is reasonable to conclude that the quality control work was effective.

Example 6.21: Consider Exercise 3.31 on page 94. Based on extensive testing, it is determined that the time Y in years before a major repair is required for a certain washing machine is characterized by the density function
$$f(y) = \begin{cases} \dfrac{1}{4}\, e^{-y/4}, & y \ge 0,\\ 0, & \text{elsewhere}.\end{cases}$$
Note that Y is an exponential random variable with μ = 4 years. The machine is considered a bargain if it is unlikely to require a major repair before the sixth year. What is the probability P(Y > 6)? What is the probability that a major repair is required in the first year?

Solution: Consider the cumulative distribution function F(y) for the exponential distribution,
$$F(y) = \frac{1}{\beta}\int_0^y e^{-t/\beta}\,dt = 1 - e^{-y/\beta}.$$
Then
$$P(Y > 6) = 1 - F(6) = e^{-3/2} = 0.2231.$$
Thus, the probability that the washing machine will require major repair after year six is 0.223. Of course, it will require repair before year six with probability 0.777. Thus, one might conclude the machine is not really a bargain. The probability that a major repair is necessary in the first year is
$$P(Y < 1) = 1 - e^{-1/4} = 1 - 0.779 = 0.221.$$

6.7 Chi-Squared Distribution

Another very important special case of the gamma distribution is obtained by letting α = v/2 and β = 2, where v is a positive integer. The result is called the chi-squared distribution. The distribution has a single parameter, v, called the degrees of freedom.

Chi-Squared Distribution: The continuous random variable X has a chi-squared distribution, with v degrees of freedom, if its density function is given by
$$f(x; v) = \begin{cases} \dfrac{1}{2^{v/2}\Gamma(v/2)}\, x^{v/2-1} e^{-x/2}, & x > 0,\\ 0, & \text{elsewhere},\end{cases}$$
where v is a positive integer.

The chi-squared distribution plays a vital role in statistical inference. It has considerable applications in both methodology and theory. While we do not discuss applications in detail in this chapter, it is important to understand that Chapters 8, 9, and 16 contain important applications. The chi-squared distribution is an important component of statistical hypothesis testing and estimation. Topics dealing with sampling distributions, analysis of variance, and nonparametric statistics involve extensive use of the chi-squared distribution.
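Since the chi-squared distribution is itself a gamma distribution, its probabilities, like the gamma probabilities of Examples 6.19 and 6.20, come from the incomplete gamma function. For a positive integer α the cdf reduces to a finite (Erlang) sum, $F(x;\alpha) = 1 - e^{-x}\sum_{k=0}^{\alpha-1} x^k/k!$, which gives a quick check on the Appendix A.23 values used above. A sketch (standard library only; the function name is ours):

```python
import math

def incomplete_gamma_cdf(x, alpha):
    """F(x; alpha) for a positive-integer alpha (standard gamma, beta = 1):
    F(x; alpha) = 1 - e^(-x) * sum_{k=0}^{alpha-1} x^k / k!."""
    return 1.0 - math.exp(-x) * sum(x**k / math.factorial(k) for k in range(alpha))

# Example 6.19: P(X <= 60) with alpha = 5, beta = 10 is F(60/10; 5) = F(6; 5).
print(round(incomplete_gamma_cdf(6, 5), 3))      # 0.715

# Example 6.20: P(X >= 20) with alpha = 2, beta = 4 is 1 - F(20/4; 2) = 1 - F(5; 2).
print(round(1 - incomplete_gamma_cdf(5, 2), 2))  # 0.04
```

For non-integer α the same cdf requires numerical integration or a library routine, which is why the book relies on the tabulated incomplete gamma function.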
Theorem 6.5: The mean and variance of the chi-squared distribution are
$$\mu = v \quad \text{and} \quad \sigma^2 = 2v.$$
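Because the chi-squared distribution is the gamma distribution with α = v/2 and β = 2, Theorem 6.5 can be checked by simulation with the standard library's random.gammavariate. A sketch (v, the sample size, and the seed are arbitrary choices for the illustration):

```python
import random
import statistics

# A chi-squared variate with v degrees of freedom is a gamma variate
# with alpha = v/2 and beta = 2.
random.seed(42)
v = 4
sample = [random.gammavariate(v / 2, 2) for _ in range(100_000)]

print(round(statistics.fmean(sample), 2))     # near v = 4
print(round(statistics.variance(sample), 1))  # near 2v = 8
```

With 100,000 draws, the sample mean and variance land within a few hundredths of the theoretical values μ = v and σ² = 2v.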
6.8 Beta Distribution

An extension to the uniform distribution is a beta distribution. Let us start by defining a beta function.

Definition 6.3: A beta function is defined by
$$B(\alpha, \beta) = \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\,dx = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}, \qquad \text{for } \alpha, \beta > 0,$$
where Γ(α) is the gamma function.

Beta Distribution: The continuous random variable X has a beta distribution with parameters α > 0 and β > 0 if its density function is given by
$$f(x) = \begin{cases} \dfrac{1}{B(\alpha,\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}, & 0 < x < 1,\\ 0, & \text{elsewhere}.\end{cases}$$
Note that the uniform distribution on (0, 1) is a beta distribution with parameters α = 1 and β = 1.

Theorem 6.6: The mean and variance of a beta distribution with parameters α and β are
$$\mu = \frac{\alpha}{\alpha+\beta} \quad \text{and} \quad \sigma^2 = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)},$$
respectively.

For the uniform distribution on (0, 1), the mean and variance are
$$\mu = \frac{1}{1+1} = \frac{1}{2} \quad \text{and} \quad \sigma^2 = \frac{(1)(1)}{(1+1)^2(1+1+1)} = \frac{1}{12},$$
respectively.

6.9 Lognormal Distribution

The lognormal distribution is used for a wide variety of applications. The distribution applies in cases where a natural log transformation results in a normal distribution.

Lognormal Distribution: The continuous random variable X has a lognormal distribution if the random variable Y = ln(X) has a normal distribution with mean μ and standard deviation σ. The resulting density function of X is
$$f(x; \mu, \sigma) = \begin{cases} \dfrac{1}{\sqrt{2\pi}\,\sigma x}\, e^{-\frac{1}{2\sigma^2}[\ln(x)-\mu]^2}, & x \ge 0,\\ 0, & x < 0.\end{cases}$$
Figure 6.29: Lognormal distributions, for (μ, σ) = (0, 1) and (1, 1).

The graphs of the lognormal distributions are illustrated in Figure 6.29.

Theorem 6.7: The mean and variance of the lognormal distribution are
$$E(X) = e^{\mu + \sigma^2/2} \quad \text{and} \quad \mathrm{Var}(X) = e^{2\mu+\sigma^2}\bigl(e^{\sigma^2} - 1\bigr),$$
where μ and σ are the mean and standard deviation of ln(X).

The cumulative distribution function is quite simple due to its relationship to the normal distribution. The use of the distribution function is illustrated by the following example.

Example 6.22: Concentrations of pollutants produced by chemical plants historically are known to exhibit behavior that resembles a lognormal distribution. This is important when one considers issues regarding compliance with government regulations. Suppose it is assumed that the concentration of a certain pollutant, in parts per million, has a lognormal distribution with parameters μ = 3.2 and σ = 1. What is the probability that the concentration exceeds 8 parts per million?

Solution: Let the random variable X be pollutant concentration. Then P(X > 8) = 1 − P(X ≤ 8). Since ln(X) has a normal distribution with mean μ = 3.2 and standard deviation σ = 1,
$$P(X \le 8) = \Phi\!\left(\frac{\ln(8) - 3.2}{1}\right) = \Phi(-1.12) = 0.1314.$$
Here, we use Φ to denote the cumulative distribution function of the standard normal distribution. As a result, the probability that the pollutant concentration exceeds 8 parts per million is 1 − 0.1314 = 0.8686.
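The lognormal calculations of Examples 6.22 and 6.23 can be reproduced with the standard library's statistics.NormalDist; a sketch (small differences from the book's figures come only from its rounding of z-values):

```python
import math
from statistics import NormalDist

std_normal = NormalDist()  # standard normal, supplies Phi and its inverse

# Example 6.22: ln(X) ~ N(3.2, 1); P(X <= 8) = Phi((ln 8 - 3.2)/1).
p_below = std_normal.cdf((math.log(8) - 3.2) / 1)
print(round(p_below, 4))      # close to 0.1314 (the book rounds z to -1.12)
print(round(1 - p_below, 4))  # P(X > 8), the exceedance probability, near 0.8686

# Example 6.23: 5th percentile of X when ln(X) ~ N(5.149, 0.737):
# ln(x_0.05) = 5.149 + 0.737 * z_0.05.
x_05 = math.exp(5.149 + 0.737 * std_normal.inv_cdf(0.05))
print(round(x_05, 2))         # ≈ 51.25 thousand miles (book: 51.265, from rounded z)
```

NormalDist.inv_cdf replaces the Table A.3 lookup for the 5th-percentile z-value.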
Example 6.23: The life, in thousands of miles, of a certain type of electronic control for locomotives has an approximately lognormal distribution with μ = 5.149 and σ = 0.737. Find the 5th percentile of the life of such an electronic control.

Solution: From Table A.3, we know that P(Z < −1.645) = 0.05. Denote by X the life of such an electronic control. Since ln(X) has a normal distribution with mean μ = 5.149 and standard deviation σ = 0.737, the 5th percentile of X can be calculated as
$$\ln(x) = 5.149 + (0.737)(-1.645) = 3.937.$$
Hence, x = 51.265. This means that only 5% of the controls will have lifetimes less than 51,265 miles.

6.10 Weibull Distribution (Optional)

Modern technology has enabled engineers to design many complicated systems whose operation and safety depend on the reliability of the various components making up the systems. For example, a fuse may burn out, a steel column may buckle, or a heat-sensing device may fail. Identical components subjected to identical environmental conditions will fail at different and unpredictable times. We have seen the role that the gamma and exponential distributions play in these types of problems. Another distribution that has been used extensively in recent years to deal with such problems is the Weibull distribution, introduced by the Swedish physicist Waloddi Weibull in 1939.

Weibull Distribution: The continuous random variable X has a Weibull distribution, with parameters α and β, if its density function is given by
$$f(x; \alpha, \beta) = \begin{cases} \alpha\beta x^{\beta-1} e^{-\alpha x^\beta}, & x > 0,\\ 0, & \text{elsewhere},\end{cases}$$
where α > 0 and β > 0.

The graphs of the Weibull distribution for α = 1 and various values of the parameter β are illustrated in Figure 6.30. We see that the curves change considerably in shape for different values of the parameter β. If we let β = 1, the Weibull distribution reduces to the exponential distribution.
For values of β > 1, the curves become somewhat bell shaped and resemble the normal curve but display some skewness.

The mean and variance of the Weibull distribution are stated in the following theorem. The reader is asked to provide the proof in Exercise 6.52 on page 206.

Theorem 6.8: The mean and variance of the Weibull distribution are
$$\mu = \alpha^{-1/\beta}\,\Gamma\!\left(1 + \frac{1}{\beta}\right) \quad \text{and} \quad \sigma^2 = \alpha^{-2/\beta}\left\{\Gamma\!\left(1 + \frac{2}{\beta}\right) - \left[\Gamma\!\left(1 + \frac{1}{\beta}\right)\right]^2\right\}.$$

Like the gamma and exponential distributions, the Weibull distribution is also applied to reliability and life-testing problems such as the time to failure or
life length of a component, measured from some specified time until it fails.

Figure 6.30: Weibull distributions (α = 1), for β = 1, 2, and 3.5.

Let us represent this time to failure by the continuous random variable T, with probability density function f(t), where f(t) is the Weibull distribution. The Weibull distribution has inherent flexibility in that it does not require the lack of memory property of the exponential distribution. The cumulative distribution function (cdf) for the Weibull can be written in closed form and certainly is useful in computing probabilities.

cdf for Weibull Distribution: The cumulative distribution function for the Weibull distribution is given by
$$F(x) = 1 - e^{-\alpha x^\beta}, \qquad \text{for } x \ge 0,$$
for α > 0 and β > 0.

Example 6.24: The length of life X, in hours, of an item in a machine shop has a Weibull distribution with α = 0.01 and β = 2. What is the probability that it fails before eight hours of usage?

Solution:
$$P(X < 8) = F(8) = 1 - e^{-(0.01)8^2} = 1 - 0.527 = 0.473.$$

The Failure Rate for the Weibull Distribution

When the Weibull distribution applies, it is often helpful to determine the failure rate (sometimes called the hazard rate) in order to get a sense of wear or deterioration of the component. Let us first define the reliability of a component or product as the probability that it will function properly for at least a specified time under specified experimental conditions. Therefore, if R(t) is defined to be
the reliability of the given component at time t, we may write
$$R(t) = P(T > t) = \int_t^\infty f(t)\,dt = 1 - F(t),$$
where F(t) is the cumulative distribution function of T. The conditional probability that a component will fail in the interval from T = t to T = t + Δt, given that it survived to time t, is
$$\frac{F(t + \Delta t) - F(t)}{R(t)}.$$
Dividing this ratio by Δt and taking the limit as Δt → 0, we get the failure rate, denoted by Z(t). Hence,
$$Z(t) = \lim_{\Delta t \to 0} \frac{F(t + \Delta t) - F(t)}{\Delta t}\,\frac{1}{R(t)} = \frac{F'(t)}{R(t)} = \frac{f(t)}{R(t)} = \frac{f(t)}{1 - F(t)},$$
which expresses the failure rate in terms of the distribution of the time to failure. Since Z(t) = f(t)/[1 − F(t)], the failure rate for the Weibull distribution is given as follows:

Failure Rate for Weibull Distribution: The failure rate at time t for the Weibull distribution is given by
$$Z(t) = \alpha\beta t^{\beta-1}, \qquad t > 0.$$

Interpretation of the Failure Rate

The quantity Z(t) is aptly named as a failure rate since it does quantify the rate of change over time of the conditional probability that the component lasts an additional Δt given that it has lasted to time t. The rate of decrease (or increase) with time is important. The following are crucial points.

(a) If β = 1, the failure rate = α, a constant. This, as indicated earlier, is the special case of the exponential distribution in which lack of memory prevails.
(b) If β > 1, Z(t) is an increasing function of time t, which indicates that the component wears over time.
(c) If β < 1, Z(t) is a decreasing function of time t and hence the component strengthens or hardens over time.

For example, the item in the machine shop in Example 6.24 has β = 2, and hence it wears over time. In fact, the failure rate function is given by Z(t) = 0.02t. On the other hand, suppose the parameters were β = 3/4 and α = 2. In that case, Z(t) = 1.5/t^{1/4} and hence the component gets stronger over time.
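The Weibull cdf and failure rate are easy to compute directly. The following sketch (standard library only; the function names are ours) checks Example 6.24 and the monotonicity claims in (b) and (c):

```python
import math

def weibull_cdf(x, alpha, beta):
    """F(x) = 1 - exp(-alpha * x**beta) for x >= 0 (the book's parameterization)."""
    return 1.0 - math.exp(-alpha * x**beta) if x >= 0 else 0.0

def failure_rate(t, alpha, beta):
    """Z(t) = alpha * beta * t**(beta - 1), t > 0."""
    return alpha * beta * t**(beta - 1)

# Example 6.24: alpha = 0.01, beta = 2; P(X < 8) = F(8).
print(round(weibull_cdf(8, 0.01, 2), 3))                       # 0.473

# beta > 1: the failure rate increases with t (wear-out) ...
print(failure_rate(2.0, 0.01, 2) < failure_rate(5.0, 0.01, 2))  # True
# ... while beta < 1 gives a decreasing rate, e.g. Z(t) = 1.5 / t**0.25
# for alpha = 2, beta = 3/4.
print(failure_rate(2.0, 2, 0.75) > failure_rate(5.0, 2, 0.75))  # True
```

Note the parameterization: some libraries instead use a scale parameter η with F(x) = 1 − exp[−(x/η)^β], which corresponds to α = η^(−β) here.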
Exercises

6.39 Use the gamma function with y = √(2x) to show that Γ(1/2) = √π.

6.40 In a certain city, the daily consumption of water (in millions of liters) follows approximately a gamma distribution with α = 2 and β = 3. If the daily capacity of that city is 9 million liters of water, what is the probability that on any given day the water supply is inadequate?

6.41 If a random variable X has the gamma distribution with α = 2 and β = 1, find P(1.8 < X < 2.4).

6.42 Suppose that the time, in hours, required to repair a heat pump is a random variable X having a gamma distribution with parameters α = 2 and β = 1/2. What is the probability that on the next service call
(a) at most 1 hour will be required to repair the heat pump?
(b) at least 2 hours will be required to repair the heat pump?

6.43 (a) Find the mean and variance of the daily water consumption in Exercise 6.40.
(b) According to Chebyshev's theorem, there is a probability of at least 3/4 that the water consumption on any given day will fall within what interval?

6.44 In a certain city, the daily consumption of electric power, in millions of kilowatt-hours, is a random variable X having a gamma distribution with mean μ = 6 and variance σ² = 12.
(a) Find the values of α and β.
(b) Find the probability that on any given day the daily power consumption will exceed 12 million kilowatt-hours.

6.45 The length of time for one individual to be served at a cafeteria is a random variable having an exponential distribution with a mean of 4 minutes. What is the probability that a person is served in less than 3 minutes on at least 4 of the next 6 days?

6.46 The life, in years, of a certain type of electrical switch has an exponential distribution with an average life β = 2. If 100 of these switches are installed in different systems, what is the probability that at most 30 fail during the first year?
6.47 Suppose that the service life, in years, of a hearing aid battery is a random variable having a Weibull distribution with α = 1/2 and β = 2.
(a) How long can such a battery be expected to last?
(b) What is the probability that such a battery will be operating after 2 years?

6.48 Derive the mean and variance of the beta distribution.

6.49 Suppose the random variable X follows a beta distribution with α = 1 and β = 3.
(a) Determine the mean and median of X.
(b) Determine the variance of X.
(c) Find the probability that X > 1/3.

6.50 If the proportion of a brand of television set requiring service during the first year of operation is a random variable having a beta distribution with α = 3 and β = 2, what is the probability that at least 80% of the new models of this brand sold this year will require service during their first year of operation?

6.51 The lives of a certain automobile seal have the Weibull distribution with failure rate Z(t) = 1/√t. Find the probability that such a seal is still intact after 4 years.

6.52 Derive the mean and variance of the Weibull distribution.

6.53 In a biomedical research study, it was determined that the survival time, in weeks, of an animal subjected to a certain exposure of gamma radiation has a gamma distribution with α = 5 and β = 10.
(a) What is the mean survival time of a randomly selected animal of the type used in the experiment?
(b) What is the standard deviation of survival time?
(c) What is the probability that an animal survives more than 30 weeks?

6.54 The lifetime, in weeks, of a certain type of transistor is known to follow a gamma distribution with mean 10 weeks and standard deviation √50 weeks.
(a) What is the probability that a transistor of this type will last at most 50 weeks?
(b) What is the probability that a transistor of this type will not survive the first 10 weeks?

6.55 Computer response time is an important application of the gamma and exponential distributions.
Suppose that a study of a certain computer system reveals that the response time, in seconds, has an exponential distribution with a mean of 3 seconds.
(a) What is the probability that response time exceeds 5 seconds?
(b) What is the probability that response time exceeds 10 seconds?

6.56 Rate data often follow a lognormal distribution. Average power usage (dB per hour) for a particular company is studied and is known to have a lognormal distribution with parameters μ = 4 and σ = 2. What is the probability that the company uses more than 270 dB during any particular hour?

6.57 For Exercise 6.56, what is the mean power usage (average dB per hour)? What is the variance?

6.58 The number of automobiles that arrive at a certain intersection per minute has a Poisson distribution with a mean of 5. Interest centers around the time that elapses before 10 automobiles appear at the intersection.
(a) What is the probability that more than 10 automobiles appear at the intersection during any given minute of time?
(b) What is the probability that more than 2 minutes elapse before 10 cars arrive?

6.59 Consider the information in Exercise 6.58.
(a) What is the probability that more than 1 minute elapses between arrivals?
(b) What is the mean number of minutes that elapse between arrivals?

6.60 Show that the failure-rate function is given by
$$Z(t) = \alpha\beta t^{\beta-1}, \qquad t > 0,$$
if and only if the time to failure distribution is the Weibull distribution
$$f(t) = \alpha\beta t^{\beta-1} e^{-\alpha t^\beta}, \qquad t > 0.$$

Review Exercises

6.61 According to a study published by a group of sociologists at the University of Massachusetts, approximately 49% of the Valium users in the state of Massachusetts are white-collar workers. What is the probability that between 482 and 510, inclusive, of the next 1000 randomly selected Valium users from this state are white-collar workers?

6.62 The exponential distribution is frequently applied to the waiting times between successes in a Poisson process.
If the number of calls received per hour by a telephone answering service is a Poisson random variable with parameter λ = 6, we know that the time, in hours, between successive calls has an exponential distribution with parameter β = 1/6. What is the probability of waiting more than 15 minutes between any two successive calls?

6.63 When α is a positive integer n, the gamma distribution is also known as the Erlang distribution. Setting α = n in the gamma distribution on page 195, the Erlang distribution is
$$f(x) = \begin{cases} \dfrac{x^{n-1} e^{-x/\beta}}{\beta^n (n-1)!}, & x > 0,\\ 0, & \text{elsewhere}.\end{cases}$$
It can be shown that if the times between successive events are independent, each having an exponential distribution with parameter β, then the total elapsed waiting time X until all n events occur has the Erlang distribution. Referring to Review Exercise 6.62, what is the probability that the next 3 calls will be received within the next 30 minutes?

6.64 A manufacturer of a certain type of large machine wishes to buy rivets from one of two manufacturers. It is important that the breaking strength of each rivet exceed 10,000 psi. Two manufacturers (A and B) offer this type of rivet and both have rivets whose breaking strength is normally distributed. The mean breaking strengths for manufacturers A and B are 14,000 psi and 13,000 psi, respectively. The standard deviations are 2000 psi and 1000 psi, respectively. Which manufacturer will produce, on average, the fewest defective rivets?

6.65 According to a recent census, almost 65% of all households in the United States were composed of only one or two persons. Assuming that this percentage is still valid today, what is the probability that between 590 and 625, inclusive, of the next 1000 randomly selected households in America consist of either one or two persons?

6.66 A certain type of device has an advertised failure rate of 0.01 per hour. The failure rate is constant and the exponential distribution applies.
(a) What is the mean time to failure?
(b) What is the probability that 200 hours will pass before a failure is observed?

6.67 In a chemical processing plant, it is important that the yield of a certain type of batch product stay
above 80%. If it stays below 80% for an extended period of time, the company loses money. Occasional defective batches are of little concern. But if several batches per day are defective, the plant shuts down and adjustments are made. It is known that the yield is normally distributed with standard deviation 4%.
(a) What is the probability of a "false alarm" (yield below 80%) when the mean yield is 85%?
(b) What is the probability that a batch will have a yield that exceeds 80% when in fact the mean yield is 79%?

6.68 For an electrical component with a failure rate of once every 5 hours, it is important to consider the time that it takes for 2 components to fail.
(a) Assuming that the gamma distribution applies, what is the mean time that it takes for 2 components to fail?
(b) What is the probability that 12 hours will elapse before 2 components fail?

6.69 The elongation of a steel bar under a particular load has been established to be normally distributed with a mean of 0.05 inch and σ = 0.01 inch. Find the probability that the elongation is
(a) above 0.1 inch;
(b) below 0.04 inch;
(c) between 0.025 and 0.065 inch.

6.70 A controlled satellite is known to have an error (distance from target) that is normally distributed with mean zero and standard deviation 4 feet. The manufacturer of the satellite defines a success as a firing in which the satellite comes within 10 feet of the target. Compute the probability that the satellite fails.

6.71 A technician plans to test a certain type of resin developed in the laboratory to determine the nature of the time required before bonding takes place. It is known that the mean time to bonding is 3 hours and the standard deviation is 0.5 hour. It will be considered an undesirable product if the bonding time is either less than 1 hour or more than 4 hours. Comment on the utility of the resin. How often would its performance be considered undesirable?
Assume that time to bonding is normally distributed.

6.72 Consider the information in Review Exercise 6.66. What is the probability that less than 200 hours will elapse before 2 failures occur?

6.73 For Review Exercise 6.72, what are the mean and variance of the time that elapses before 2 failures occur?

6.74 The average rate of water usage (thousands of gallons per hour) by a certain community is known to involve the lognormal distribution with parameters μ = 5 and σ = 2. It is important for planning purposes to get a sense of periods of high usage. What is the probability that, for any given hour, 50,000 gallons of water are used?

6.75 For Review Exercise 6.74, what is the mean of the average water usage per hour in thousands of gallons?

6.76 In Exercise 6.54 on page 206, the lifetime of a transistor is assumed to have a gamma distribution with mean 10 weeks and standard deviation √50 weeks. Suppose that the gamma distribution assumption is incorrect. Assume that the distribution is normal. (a) What is the probability that a transistor will last at most 50 weeks? (b) What is the probability that a transistor will not survive for the first 10 weeks? (c) Comment on the difference between your results here and those found in Exercise 6.54 on page 206.

6.77 The beta distribution has considerable application in reliability problems in which the basic random variable is a proportion, as in the practical scenario illustrated in Exercise 6.50 on page 206. In that regard, consider Review Exercise 3.73 on page 108. Impurities in batches of product of a chemical process reflect a serious problem. It is known that the proportion of impurities Y in a batch has the density function f(y) = 10(1 − y)^9 for 0 ≤ y ≤ 1, and 0 elsewhere. (a) Verify that the above is a valid density function. (b) What is the probability that a batch is considered not acceptable (i.e., Y > 0.6)? (c) What are the parameters α and β of the beta distribution illustrated here?
(d) The mean of the beta distribution is α/(α + β). What is the mean proportion of impurities in the batch? (e) The variance of a beta-distributed random variable is σ² = αβ/[(α + β)²(α + β + 1)]. What is the variance of Y in this problem?

6.78 Consider now Review Exercise 3.74 on page 108. The density function of the time Z, in minutes, between calls to an electrical supply store is given by f(z) = (1/10)e^(−z/10) for 0 < z < ∞, and 0 elsewhere.
  • 230. 6.11 Potential Misconceptions and Hazards 209 (a) What is the mean time between calls? (b) What is the variance in the time between calls? (c) What is the probability that the time between calls exceeds the mean? 6.79 Consider Review Exercise 6.78. Given the as- sumption of the exponential distribution, what is the mean number of calls per hour? What is the variance in the number of calls per hour? 6.80 In a human factor experimental project, it has been determined that the reaction time of a pilot to a visual stimulus is normally distributed with a mean of 1/2 second and standard deviation of 2/5 second. (a) What is the probability that a reaction from the pilot takes more than 0.3 second? (b) What reaction time is that which is exceeded 95% of the time? 6.81 The length of time between breakdowns of an es- sential piece of equipment is important in the decision of the use of auxiliary equipment. An engineer thinks that the best model for time between breakdowns of a generator is the exponential distribution with a mean of 15 days. (a) If the generator has just broken down, what is the probability that it will break down in the next 21 days? (b) What is the probability that the generator will op- erate for 30 days without a breakdown? 6.82 The length of life, in hours, of a drill bit in a mechanical operation has a Weibull distribution with α = 2 and β = 50. Find the probability that the bit will fail before 10 hours of usage. 6.83 Derive the cdf for the Weibull distribution. [Hint: In the definition of a cdf, make the transfor- mation z = yβ .] 6.84 Explain why the nature of the scenario in Re- view Exercise 6.82 would likely not lend itself to the exponential distribution. 6.85 From the relationship between the chi-squared random variable and the gamma random variable, prove that the mean of the chi-squared random variable is v and the variance is 2v. 
6.86 The length of time, in seconds, that a computer user takes to read his or her e-mail is distributed as a lognormal random variable with μ = 1.8 and σ2 = 4.0. (a) What is the probability that a user reads e-mail for more than 20 seconds? More than a minute? (b) What is the probability that a user reads e-mail for a length of time that is equal to the mean of the underlying lognormal distribution? 6.87 Group Project: Have groups of students ob- serve the number of people who enter a specific coffee shop or fast food restaurant over the course of an hour, beginning at the same time every day, for two weeks. The hour should be a time of peak traffic at the shop or restaurant. The data collected will be the number of customers who enter the shop in each half hour of time. Thus, two data points will be collected each day. Let us assume that the random variable X, the num- ber of people entering each half hour, follows a Poisson distribution. The students should calculate the sam- ple mean and variance of X using the 28 data points collected. (a) What evidence indicates that the Poisson distribu- tion assumption may or may not be correct? (b) Given that X is Poisson, what is the distribution of T, the time between arrivals into the shop during a half hour period? Give a numerical estimate of the parameter of that distribution. (c) Give an estimate of the probability that the time between two arrivals is less than 15 minutes. (d) What is the estimated probability that the time between two arrivals is more than 10 minutes? (e) What is the estimated probability that 20 minutes after the start of data collection not one customer has appeared? 6.11 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters Many of the hazards in the use of material in this chapter are quite similar to those of Chapter 5. 
One of the biggest misuses of statistics is the assumption of an underlying normal distribution in carrying out a statistical inference when indeed the distribution is not normal. The reader will be exposed to tests of hypotheses in Chapters 10 through 15 in which the normality assumption is made. In addition,
however, the reader will be reminded that there are tests of goodness of fit, as well as graphical routines, discussed in Chapters 8 and 10 that allow for checks on data to determine whether the normality assumption is reasonable. Similar warnings should be conveyed regarding assumptions that are often made concerning other distributions, apart from the normal. This chapter has presented examples in which one is required to calculate the probability of failure of a certain item or the probability that one observes a complaint during a certain time period. Assumptions are made concerning a certain distribution type as well as values of the parameters of the distributions. Note that parameter values (for example, the value of β for the exponential distribution) were given in the example problems. However, in real-life problems, parameter values must be estimated from real-life experience or data. Note the emphasis placed on estimation in the projects that appear in Chapters 1, 5, and 6. Note also the reference in Chapter 5 to parameter estimation, which will be discussed extensively beginning in Chapter 9.
  • 232. Chapter 7 Functions of Random Variables (Optional) 7.1 Introduction This chapter contains a broad spectrum of material. Chapters 5 and 6 deal with specific types of distributions, both discrete and continuous. These are distribu- tions that find use in many subject matter applications, including reliability, quality control, and acceptance sampling. In the present chapter, we begin with a more general topic, that of distributions of functions of random variables. General tech- niques are introduced and illustrated by examples. This discussion is followed by coverage of a related concept, moment-generating functions, which can be helpful in learning about distributions of linear functions of random variables. In standard statistical methods, the result of statistical hypothesis testing, es- timation, or even statistical graphics does not involve a single random variable but, rather, functions of one or more random variables. As a result, statistical inference requires the distributions of these functions. For example, the use of averages of random variables is common. In addition, sums and more general linear combinations are important. We are often interested in the distribution of sums of squares of random variables, particularly in the use of analysis of variance techniques discussed in Chapters 11–14. 7.2 Transformations of Variables Frequently in statistics, one encounters the need to derive the probability distribu- tion of a function of one or more random variables. For example, suppose that X is a discrete random variable with probability distribution f(x), and suppose further that Y = u(X) defines a one-to-one transformation between the values of X and Y . We wish to find the probability distribution of Y . 
It is important to note that the one-to-one transformation implies that each value x is related to one, and only one, value y = u(x) and that each value y is related to one, and only one, value x = w(y), where w(y) is obtained by solving y = u(x) for x in terms of y.
212 Chapter 7 Functions of Random Variables (Optional)

From our discussion of discrete probability distributions in Chapter 3, it is clear that the random variable Y assumes the value y when X assumes the value w(y). Consequently, the probability distribution of Y is given by g(y) = P(Y = y) = P[X = w(y)] = f[w(y)].

Theorem 7.1: Suppose that X is a discrete random variable with probability distribution f(x). Let Y = u(X) define a one-to-one transformation between the values of X and Y so that the equation y = u(x) can be uniquely solved for x in terms of y, say x = w(y). Then the probability distribution of Y is g(y) = f[w(y)].

Example 7.1: Let X be a geometric random variable with probability distribution f(x) = (3/4)(1/4)^(x−1), x = 1, 2, 3, . . . . Find the probability distribution of the random variable Y = X².

Solution: Since the values of X are all positive, the transformation defines a one-to-one correspondence between the x and y values, y = x² and x = √y. Hence g(y) = f(√y) = (3/4)(1/4)^(√y−1) for y = 1, 4, 9, . . . , and 0 elsewhere.

Similarly, for a two-dimensional transformation, we have the result in Theorem 7.2.

Theorem 7.2: Suppose that X1 and X2 are discrete random variables with joint probability distribution f(x1, x2). Let Y1 = u1(X1, X2) and Y2 = u2(X1, X2) define a one-to-one transformation between the points (x1, x2) and (y1, y2) so that the equations y1 = u1(x1, x2) and y2 = u2(x1, x2) may be uniquely solved for x1 and x2 in terms of y1 and y2, say x1 = w1(y1, y2) and x2 = w2(y1, y2). Then the joint probability distribution of Y1 and Y2 is g(y1, y2) = f[w1(y1, y2), w2(y1, y2)].

Theorem 7.2 is extremely useful for finding the distribution of some random variable Y1 = u1(X1, X2), where X1 and X2 are discrete random variables with joint probability distribution f(x1, x2).
We simply define a second function, say Y2 = u2(X1, X2), maintaining a one-to-one correspondence between the points (x1, x2) and (y1, y2), and obtain the joint probability distribution g(y1, y2). The distribution of Y1 is just the marginal distribution of g(y1, y2), found by summing over the y2 values. Denoting the distribution of Y1 by h(y1), we can then write h(y1) = Σ_{y2} g(y1, y2).
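The discrete transformation rule of Theorem 7.1, applied to Example 7.1 above, can be spot-checked numerically. The sketch below is ours, not from the text; the function names f and g simply mirror the notation of the example, and exact rational arithmetic keeps the comparison free of rounding error.

```python
from fractions import Fraction

def f(x):
    # Geometric pmf from Example 7.1: f(x) = (3/4)(1/4)^(x-1), x = 1, 2, 3, ...
    return Fraction(3, 4) * Fraction(1, 4) ** (x - 1)

def g(y):
    # Theorem 7.1: g(y) = f(w(y)) with w(y) = sqrt(y), supported on perfect squares
    root = round(y ** 0.5)
    if root >= 1 and root * root == y:
        return f(root)
    return Fraction(0)

# g places on y = x^2 exactly the mass that f places on x
assert g(9) == f(3)
assert g(2) == Fraction(0)
assert sum(g(y) for y in range(1, 401)) == sum(f(x) for x in range(1, 21))
```

The final assertion confirms that summing g over y = 1, . . . , 400 (which covers the perfect squares 1² through 20²) reproduces the cumulative geometric probability P(X ≤ 20), as the one-to-one correspondence requires.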
  • 234. 7.2 Transformations of Variables 213 Example 7.2: Let X1 and X2 be two independent random variables having Poisson distributions with parameters μ1 and μ2, respectively. Find the distribution of the random variable Y1 = X1 + X2. Solution: Since X1 and X2 are independent, we can write f(x1, x2) = f(x1)f(x2) = e−μ1 μx1 1 x1! e−μ2 μx2 2 x2! = e−(μ1+μ2) μx1 1 μx2 2 x1!x2! , where x1 = 0, 1, 2, . . . and x2 = 0, 1, 2, . . . . Let us now define a second random variable, say Y2 = X2. The inverse functions are given by x1 = y1 −y2 and x2 = y2. Using Theorem 7.2, we find the joint probability distribution of Y1 and Y2 to be g(y1, y2) = e−(μ1+μ2) μy1−y2 1 μy2 2 (y1 − y2)!y2! , where y1 = 0, 1, 2, . . . and y2 = 0, 1, 2, . . . , y1. Note that since x1 0, the trans- formation x1 = y1 − x2 implies that y2 and hence x2 must always be less than or equal to y1. Consequently, the marginal probability distribution of Y1 is h(y1) = y1 y2=0 g(y1, y2) = e−(μ1+μ2) y1 y2=0 μy1−y2 1 μy2 2 (y1 − y2)!y2! = e−(μ1+μ2) y1! y1 y2=0 y1! y2!(y1 − y2)! μy1−y2 1 μy2 2 = e−(μ1+μ2) y1! y1 y2=0 y1 y2 μy1−y2 1 μy2 2 . Recognizing this sum as the binomial expansion of (μ1 + μ2)y1 we obtain h(y1) = e−(μ1+μ2) (μ1 + μ2)y1 y1! , y1 = 0, 1, 2, . . . , from which we conclude that the sum of the two independent random variables having Poisson distributions, with parameters μ1 and μ2, has a Poisson distribution with parameter μ1 + μ2. To find the probability distribution of the random variable Y = u(X) when X is a continuous random variable and the transformation is one-to-one, we shall need Theorem 7.3. The proof of the theorem is left to the reader. Theorem 7.3: Suppose that X is a continuous random variable with probability distribution f(x). Let Y = u(X) define a one-to-one correspondence between the values of X and Y so that the equation y = u(x) can be uniquely solved for x in terms of y, say x = w(y). 
Then the probability distribution of Y is g(y) = f[w(y)]|J|, where J = w′(y) and is called the Jacobian of the transformation.
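Before moving on, the conclusion of Example 7.2 (that a sum of independent Poisson variables is again Poisson) can be checked numerically by computing the marginal sum h(y1) directly. This is our own sketch; the parameter values μ1 and μ2 are arbitrary choices for illustration.

```python
from math import exp, factorial, isclose

def poisson(mu, k):
    # Poisson pmf: p(k; mu) = e^{-mu} mu^k / k!
    return exp(-mu) * mu ** k / factorial(k)

mu1, mu2 = 2.0, 3.5

def h(y1):
    # Marginal of Y1 = X1 + X2 from Example 7.2: sum g(y1, y2) over y2 = 0, ..., y1
    return sum(poisson(mu1, y1 - y2) * poisson(mu2, y2) for y2 in range(y1 + 1))

# The convolution agrees with a single Poisson with parameter mu1 + mu2
for y1 in range(12):
    assert isclose(h(y1), poisson(mu1 + mu2, y1), rel_tol=1e-12)
```

Each term of the loop reenacts the binomial-expansion step of the example: the inner sum collapses to e^(−(μ1+μ2)) (μ1 + μ2)^y1 / y1!.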
214 Chapter 7 Functions of Random Variables (Optional)

Example 7.3: Let X be a continuous random variable with probability distribution f(x) = x/12 for 1 < x < 5, and 0 elsewhere. Find the probability distribution of the random variable Y = 2X − 3.

Solution: The inverse solution of y = 2x − 3 yields x = (y + 3)/2, from which we obtain J = w′(y) = dx/dy = 1/2. Therefore, using Theorem 7.3, we find the density function of Y to be g(y) = [(y + 3)/2]/12 · (1/2) = (y + 3)/48 for −1 < y < 7, and 0 elsewhere.

To find the joint probability distribution of the random variables Y1 = u1(X1, X2) and Y2 = u2(X1, X2) when X1 and X2 are continuous and the transformation is one-to-one, we need an additional theorem, analogous to Theorem 7.2, which we state without proof.

Theorem 7.4: Suppose that X1 and X2 are continuous random variables with joint probability distribution f(x1, x2). Let Y1 = u1(X1, X2) and Y2 = u2(X1, X2) define a one-to-one transformation between the points (x1, x2) and (y1, y2) so that the equations y1 = u1(x1, x2) and y2 = u2(x1, x2) may be uniquely solved for x1 and x2 in terms of y1 and y2, say x1 = w1(y1, y2) and x2 = w2(y1, y2). Then the joint probability distribution of Y1 and Y2 is g(y1, y2) = f[w1(y1, y2), w2(y1, y2)]|J|, where the Jacobian J is the 2 × 2 determinant with rows (∂x1/∂y1, ∂x1/∂y2) and (∂x2/∂y1, ∂x2/∂y2), and ∂x1/∂y1 is simply the derivative of x1 = w1(y1, y2) with respect to y1 with y2 held constant, referred to in calculus as the partial derivative of x1 with respect to y1. The other partial derivatives are defined in a similar manner.

Example 7.4: Let X1 and X2 be two continuous random variables with joint probability distribution f(x1, x2) = 4x1x2 for 0 < x1 < 1, 0 < x2 < 1, and 0 elsewhere. Find the joint probability distribution of Y1 = X1² and Y2 = X1X2.

Solution: The inverse solutions of y1 = x1² and y2 = x1x2 are x1 = √y1 and x2 = y2/√y1, from which we obtain the determinant J of the matrix with rows (1/(2√y1), 0) and (−y2/(2y1^(3/2)), 1/√y1), namely J = 1/(2y1).
  • 236. 7.2 Transformations of Variables 215 To determine the set B of points in the y1y2 plane into which the set A of points in the x1x2 plane is mapped, we write x1 = √ y1 and x2 = y2/ √ y1. Then setting x1 = 0, x2 = 0, x1 = 1, and x2 = 1, the boundaries of set A are transformed to y1 = 0, y2 = 0, y1 = 1, and y2 = √ y1, or y2 2 = y1. The two regions are illustrated in Figure 7.1. Clearly, the transformation is one-to- one, mapping the set A = {(x1, x2) | 0 x1 1, 0 x2 1} into the set B = {(y1, y2) | y2 2 y1 1, 0 y2 1}. From Theorem 7.4 the joint probability distribution of Y1 and Y2 is g(y1, y2) = 4( √ y1) y2 √ y1 1 2y1 = 2y2 y1 , y2 2 y1 1, 0 y2 1, 0, elsewhere. x1 x2 A 0 1 1 x2 = 0 x2 = 1 x 1 = 0 x 1 = 1 y1 y2 B 0 1 1 y2 = 0 y 2 2 = y 1 y 1 = 0 y 1 = 1 Figure 7.1: Mapping set A into set B. Problems frequently arise when we wish to find the probability distribution of the random variable Y = u(X) when X is a continuous random variable and the transformation is not one-to-one. That is, to each value x there corresponds exactly one value y, but to each y value there corresponds more than one x value. For example, suppose that f(x) is positive over the interval −1 x 2 and zero elsewhere. Consider the transformation y = x2 . In this case, x = ± √ y for 0 y 1 and x = √ y for 1 y 4. For the interval 1 y 4, the probability distribution of Y is found as before, using Theorem 7.3. That is, g(y) = f[w(y)]|J| = f( √ y) 2 √ y , 1 y 4. However, when 0 y 1, we may partition the interval −1 x 1 to obtain the two inverse functions x = − √ y, −1 x 0, and x = √ y, 0 x 1.
  • 237. 216 Chapter 7 Functions of Random Variables (Optional) Then to every y value there corresponds a single x value for each partition. From Figure 7.2 we see that P(a Y b) = P(− √ b X − √ a) + P( √ a X √ b) = − √ a − √ b f(x) dx + √ b √ a f(x) dx. x y 1 1 a b y x2 b a a b Figure 7.2: Decreasing and increasing function. Changing the variable of integration from x to y, we obtain P(a Y b) = a b f(− √ y)J1 dy + b a f( √ y)J2 dy = − b a f(− √ y)J1 dy + b a f( √ y)J2 dy, where J1 = d(− √ y) dy = −1 2 √ y = −|J1| and J2 = d( √ y) dy = 1 2 √ y = |J2|. Hence, we can write P(a Y b) = b a [f(− √ y)|J1| + f( √ y)|J2|] dy, and then g(y) = f(− √ y)|J1| + f( √ y)|J2| = f(− √ y) + f( √ y) 2 √ y , 0 y 1.
  • 238. 7.2 Transformations of Variables 217 The probability distribution of Y for 0 y 4 may now be written g(y) = ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ f(− √ y)+f( √ y) 2 √ y , 0 y 1, f( √ y) 2 √ y , 1 y 4, 0, elsewhere. This procedure for finding g(y) when 0 y 1 is generalized in Theorem 7.5 for k inverse functions. For transformations not one-to-one of functions of several variables, the reader is referred to Introduction to Mathematical Statistics by Hogg, McKean, and Craig (2005; see the Bibliography). Theorem 7.5: Suppose that X is a continuous random variable with probability distribution f(x). Let Y = u(X) define a transformation between the values of X and Y that is not one-to-one. If the interval over which X is defined can be partitioned into k mutually disjoint sets such that each of the inverse functions x1 = w1(y), x2 = w2(y), . . . , xk = wk(y) of y = u(x) defines a one-to-one correspondence, then the probability distribution of Y is g(y) = k i=1 f[wi(y)]|Ji|, where Ji = w i(y), i = 1, 2, . . . , k. Example 7.5: Show that Y = (X−μ)2 /σ2 has a chi-squared distribution with 1 degree of freedom when X has a normal distribution with mean μ and variance σ2 . Solution: Let Z = (X − μ)/σ, where the random variable Z has the standard normal distri- bution f(z) = 1 √ 2π e−z2 /2 , −∞ z ∞. We shall now find the distribution of the random variable Y = Z2 . The inverse solutions of y = z2 are z = ± √ y. If we designate z1 = − √ y and z2 = √ y, then J1 = −1/2 √ y and J2 = 1/2 √ y. Hence, by Theorem 7.5, we have g(y) = 1 √ 2π e−y/2 −1 2 √ y + 1 √ 2π e−y/2 1 2 √ y = 1 √ 2π y1/2−1 e−y/2 , y 0. Since g(y) is a density function, it follows that 1 = 1 √ 2π ∞ 0 y1/2−1 e−y/2 dy = Γ(1/2) √ π ∞ 0 y1/2−1 e−y/2 √ 2Γ(1/2) dy = Γ(1/2) √ π , the integral being the area under a gamma probability curve with parameters α = 1/2 and β = 2. 
Hence, Γ(1/2) = √π, and the density of Y is given by g(y) = [1/(√2 Γ(1/2))] y^(1/2−1) e^(−y/2) for y > 0, and 0 elsewhere, which is seen to be a chi-squared distribution with 1 degree of freedom.
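Example 7.5 also lends itself to a quick Monte Carlo check. The sketch below is our own, with arbitrary illustrative values of μ and σ: squaring standardized normal draws should produce data whose sample mean and variance match the chi-squared values for v = 1 degree of freedom, namely v = 1 and 2v = 2.

```python
import random
from statistics import fmean, pvariance

random.seed(42)
mu, sigma = 10.0, 3.0  # arbitrary normal parameters for illustration

# Y = ((X - mu)/sigma)^2 for normal X, as in Example 7.5
y = [((random.gauss(mu, sigma) - mu) / sigma) ** 2 for _ in range(200_000)]

# A chi-squared variable with 1 degree of freedom has mean 1 and variance 2
assert abs(fmean(y) - 1.0) < 0.02
assert abs(pvariance(y) - 2.0) < 0.15
```

The tolerances reflect ordinary Monte Carlo error at this sample size; tightening them requires more draws.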
  • 239. 218 Chapter 7 Functions of Random Variables (Optional) 7.3 Moments and Moment-Generating Functions In this section, we concentrate on applications of moment-generating functions. The obvious purpose of the moment-generating function is in determining moments of random variables. However, the most important contribution is to establish distributions of functions of random variables. If g(X) = Xr for r = 0, 1, 2, 3, . . . , Definition 7.1 yields an expected value called the rth moment about the origin of the random variable X, which we denote by μ r. Definition 7.1: The rth moment about the origin of the random variable X is given by μ r = E(Xr ) = ⎧ ⎨ ⎩ x xr f(x), if X is discrete, ∞ −∞ xr f(x) dx, if X is continuous. Since the first and second moments about the origin are given by μ 1 = E(X) and μ 2 = E(X2 ), we can write the mean and variance of a random variable as μ = μ 1 and σ2 = μ 2 − μ2 . Although the moments of a random variable can be determined directly from Definition 7.1, an alternative procedure exists. This procedure requires us to utilize a moment-generating function. Definition 7.2: The moment-generating function of the random variable X is given by E(etX ) and is denoted by MX(t). Hence, MX(t) = E(etX ) = ⎧ ⎨ ⎩ x etx f(x), if X is discrete, ∞ −∞ etx f(x) dx, if X is continuous. Moment-generating functions will exist only if the sum or integral of Definition 7.2 converges. If a moment-generating function of a random variable X does exist, it can be used to generate all the moments of that variable. The method is described in Theorem 7.6 without proof. Theorem 7.6: Let X be a random variable with moment-generating function MX (t). Then dr MX(t) dtr t=0 = μ r. Example 7.6: Find the moment-generating function of the binomial random variable X and then use it to verify that μ = np and σ2 = npq. Solution: From Definition 7.2 we have MX(t) = n x=0 etx n x px qn−x = n x=0 n x (pet )x qn−x .
  • 240. 7.3 Moments and Moment-Generating Functions 219 Recognizing this last sum as the binomial expansion of (pet + q)n , we obtain MX(t) = (pet + q)n . Now dMX(t) dt = n(pet + q)n−1 pet and d2 MX(t) dt2 = np[et (n − 1)(pet + q)n−2 pet + (pet + q)n−1 et ]. Setting t = 0, we get μ 1 = np and μ 2 = np[(n − 1)p + 1]. Therefore, μ = μ 1 = np and σ2 = μ 2 − μ2 = np(1 − p) = npq, which agrees with the results obtained in Chapter 5. Example 7.7: Show that the moment-generating function of the random variable X having a normal probability distribution with mean μ and variance σ2 is given by MX (t) = exp μt + 1 2 σ2 t2 . Solution: From Definition 7.2 the moment-generating function of the normal random variable X is MX (t) = ∞ −∞ etx 1 √ 2πσ exp − 1 2 x − μ σ 2 dx = ∞ −∞ 1 √ 2πσ exp − x2 − 2(μ + tσ2 )x + μ2 2σ2 dx. Completing the square in the exponent, we can write x2 − 2(μ + tσ2 )x + μ2 = [x − (μ + tσ2 )]2 − 2μtσ2 − t2 σ4 and then MX(t) = ∞ −∞ 1 √ 2πσ exp − [x − (μ + tσ2 )]2 − 2μtσ2 − t2 σ4 2σ2 dx = exp 2μt + σ2 t2 2 ∞ −∞ 1 √ 2πσ exp − [x − (μ + tσ2 )]2 2σ2 dx. Let w = [x − (μ + tσ2 )]/σ; then dx = σ dw and MX(t) = exp μt + 1 2 σ2 t2 ∞ −∞ 1 √ 2π e−w2 /2 dw = exp μt + 1 2 σ2 t2 ,
  • 241. 220 Chapter 7 Functions of Random Variables (Optional) since the last integral represents the area under a standard normal density curve and hence equals 1. Although the method of transforming variables provides an effective way of finding the distribution of a function of several variables, there is an alternative and often preferred procedure when the function in question is a linear combination of independent random variables. This procedure utilizes the properties of moment- generating functions discussed in the following four theorems. In keeping with the mathematical scope of this book, we state Theorem 7.7 without proof. Theorem 7.7: (Uniqueness Theorem) Let X and Y be two random variables with moment- generating functions MX(t) and MY (t), respectively. If MX(t) = MY (t) for all values of t, then X and Y have the same probability distribution. Theorem 7.8: MX+a(t) = eat MX(t). Proof: MX+a(t) = E[et(X+a) ] = eat E(etX ) = eat MX (t). Theorem 7.9: MaX (t) = MX (at). Proof: MaX(t) = E[et(aX) ] = E[e(at)X ] = MX(at). Theorem 7.10: If X1, X2, . . . , Xn are independent random variables with moment-generating func- tions MX1 (t), MX2 (t), . . . , MXn (t), respectively, and Y = X1 +X2 +· · ·+Xn, then MY (t) = MX1 (t)MX2 (t) · · · MXn (t). The proof of Theorem 7.10 is left for the reader. Theorems 7.7 through 7.10 are vital for understanding moment-generating func- tions. An example follows to illustrate. There are many situations in which we need to know the distribution of the sum of random variables. We may use Theorems 7.7 and 7.10 and the result of Exercise 7.19 on page 224 to find the distribution of a sum of two independent Poisson random variables with moment-generating functions given by MX1 (t) = eμ1(et −1) and MX2 (t) = eμ2(et −1) , respectively. 
According to Theorem 7.10, the moment-generating function of the random variable Y1 = X1 + X2 is MY1 (t) = MX1 (t)MX2 (t) = eμ1(et −1) eμ2(et −1) = e(μ1+μ2)(et −1) , which we immediately identify as the moment-generating function of a random variable having a Poisson distribution with the parameter μ1 + μ2. Hence, accord- ing to Theorem 7.7, we again conclude that the sum of two independent random variables having Poisson distributions, with parameters μ1 and μ2, has a Poisson distribution with parameter μ1 + μ2.
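Example 7.6 and Theorem 7.6 can be verified numerically in the same spirit: the series in Definition 7.2 should match the closed form (pe^t + q)^n, and finite-difference derivatives of M at t = 0 should recover the first two moments. The following sketch is ours, with arbitrary n and p.

```python
from math import comb, exp, isclose

n, p = 8, 0.3  # arbitrary binomial parameters for illustration
q = 1 - p

def M(t):
    # Definition 7.2 applied to the binomial pmf: M_X(t) = E[e^{tX}]
    return sum(comb(n, x) * p ** x * q ** (n - x) * exp(t * x) for x in range(n + 1))

# Example 7.6: M_X(t) = (p e^t + q)^n
for t in (-0.5, 0.0, 0.7):
    assert isclose(M(t), (p * exp(t) + q) ** n, rel_tol=1e-12)

# Theorem 7.6 via central differences: M'(0) = np and M''(0) = np[(n-1)p + 1]
h = 1e-5
assert isclose((M(h) - M(-h)) / (2 * h), n * p, rel_tol=1e-6)
assert isclose((M(h) - 2 * M(0) + M(-h)) / h ** 2, n * p * ((n - 1) * p + 1), rel_tol=1e-4)
```

The looser tolerance on the second derivative reflects the floating-point cancellation inherent in the central-difference formula, not any failure of the theorem.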
  • 242. 7.3 Moments and Moment-Generating Functions 221 Linear Combinations of Random Variables In applied statistics one frequently needs to know the probability distribution of a linear combination of independent normal random variables. Let us obtain the distribution of the random variable Y = a1X1 +a2X2 when X1 is a normal variable with mean μ1 and variance σ2 1 and X2 is also a normal variable but independent of X1 with mean μ2 and variance σ2 2. First, by Theorem 7.10, we find MY (t) = Ma1X1 (t)Ma2X2 (t), and then, using Theorem 7.9, we find MY (t) = MX1 (a1t)MX2 (a2t). Substituting a1t for t and then a2t for t in a moment-generating function of the normal distribution derived in Example 7.7, we have MY (t) = exp(a1μ1t + a2 1σ2 1t2 /2 + a2μ2t + a2 2σ2 2t2 /2) = exp[(a1μ1 + a2μ2)t + (a2 1σ2 1 + a2 2σ2 2)t2 /2], which we recognize as the moment-generating function of a distribution that is normal with mean a1μ1 + a2μ2 and variance a2 1σ2 1 + a2 2σ2 2. Generalizing to the case of n independent normal variables, we state the fol- lowing result. Theorem 7.11: If X1, X2, . . . , Xn are independent random variables having normal distributions with means μ1, μ2, . . . , μn and variances σ2 1, σ2 2, . . . , σ2 n, respectively, then the ran- dom variable Y = a1X1 + a2X2 + · · · + anXn has a normal distribution with mean μY = a1μ1 + a2μ2 + · · · + anμn and variance σ2 Y = a2 1σ2 1 + a2 2σ2 2 + · · · + a2 nσ2 n. It is now evident that the Poisson distribution and the normal distribution possess a reproductive property in that the sum of independent random variables having either of these distributions is a random variable that also has the same type of distribution. The chi-squared distribution also has this reproductive property. Theorem 7.12: If X1, X2, . . . , Xn are mutually independent random variables that have, respec- tively, chi-squared distributions with v1, v2, . . . 
, vn degrees of freedom, then the random variable Y = X1 + X2 + · · · + Xn has a chi-squared distribution with v = v1 + v2 + · · · + vn degrees of freedom. Proof: By Theorem 7.10 and Exercise 7.21, MY (t) = MX1 (t)MX2 (t) · · · MXn (t) and MXi (t) = (1 − 2t)−vi/2 , i = 1, 2, . . . , n.
  • 243. / / 222 Chapter 7 Functions of Random Variables (Optional) Therefore, MY (t) = (1 − 2t)−v1/2 (1 − 2t)−v2/2 · · · (1 − 2t)−vn/2 = (1 − 2t)−(v1+v2+···+vn)/2 , which we recognize as the moment-generating function of a chi-squared distribution with v = v1 + v2 + · · · + vn degrees of freedom. Corollary 7.1: If X1, X2, . . . , Xn are independent random variables having identical normal dis- tributions with mean μ and variance σ2 , then the random variable Y = n i=1 Xi − μ σ 2 has a chi-squared distribution with v = n degrees of freedom. This corollary is an immediate consequence of Example 7.5. It establishes a re- lationship between the very important chi-squared distribution and the normal distribution. It also should provide the reader with a clear idea of what we mean by the parameter that we call degrees of freedom. In future chapters, the notion of degrees of freedom will play an increasingly important role. Corollary 7.2: If X1, X2, . . . , Xn are independent random variables and Xi follows a normal dis- tribution with mean μi and variance σ2 i for i = 1, 2, . . . , n, then the random variable Y = n i=1 Xi − μi σi 2 has a chi-squared distribution with v = n degrees of freedom. Exercises 7.1 Let X be a random variable with probability f(x) = 1 3 , x = 1, 2, 3, 0, elsewhere. Find the probability distribution of the random vari- able Y = 2X − 1. 7.2 Let X be a binomial random variable with prob- ability distribution f(x) = 3 x 2 5 x 3 5 3−x , x = 0, 1, 2, 3, 0, elsewhere. Find the probability distribution of the random vari- able Y = X2 . 7.3 Let X1 and X2 be discrete random variables with the joint multinomial distribution f(x1, x2) = 2 x1, x2, 2 − x1 − x2 1 4 x1 1 3 x2 5 12 2−x1−x2 for x1 = 0, 1, 2; x2 = 0, 1, 2; x1 + x2 ≤ 2; and zero elsewhere. Find the joint probability distribution of Y1 = X1 + X2 and Y2 = X1 − X2. 
7.4 Let X1 and X2 be discrete random variables with joint probability distribution f(x1, x2) = x1x2/18 for x1 = 1, 2; x2 = 1, 2, 3, and 0 elsewhere. Find the probability distribution of the random variable Y = X1X2.
  • 244. / / Exercises 223 7.5 Let X have the probability distribution f(x) = 1, 0 x 1, 0, elsewhere. Show that the random variable Y = −2 ln X has a chi- squared distribution with 2 degrees of freedom. 7.6 Given the random variable X with probability distribution f(x) = 2x, 0 x 1, 0, elsewhere, find the probability distribution of Y = 8X3 . 7.7 The speed of a molecule in a uniform gas at equi- librium is a random variable V whose probability dis- tribution is given by f(v) = kv2 e−bv2 , v 0, 0, elsewhere, where k is an appropriate constant and b depends on the absolute temperature and mass of the molecule. Find the probability distribution of the kinetic energy of the molecule W, where W = mV 2 /2. 7.8 A dealer’s profit, in units of $5000, on a new au- tomobile is given by Y = X2 , where X is a random variable having the density function f(x) = 2(1 − x), 0 x 1, 0, elsewhere. (a) Find the probability density function of the random variable Y . (b) Using the density function of Y , find the probabil- ity that the profit on the next new automobile sold by this dealership will be less than $500. 7.9 The hospital period, in days, for patients follow- ing treatment for a certain type of kidney disorder is a random variable Y = X + 4, where X has the density function f(x) = 32 (x+4)3 , x 0, 0, elsewhere. (a) Find the probability density function of the random variable Y . (b) Using the density function of Y , find the probabil- ity that the hospital period for a patient following this treatment will exceed 8 days. 7.10 The random variables X and Y , representing the weights of creams and toffees, respectively, in 1- kilogram boxes of chocolates containing a mixture of creams, toffees, and cordials, have the joint density function f(x, y) = 24xy, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 1, 0, elsewhere. (a) Find the probability density function of the random variable Z = X + Y . 
(b) Using the density function of Z, find the probability that, in a given box, the sum of the weights of creams and toffees accounts for at least 1/2 but less than 3/4 of the total weight.

7.11 The amount of kerosene, in thousands of liters, in a tank at the beginning of any day is a random amount Y from which a random amount X is sold during that day. Assume that the joint density function of these variables is given by f(x, y) = 2 for 0 < x < y, 0 < y < 1, and 0 elsewhere. Find the probability density function for the amount of kerosene left in the tank at the end of the day.

7.12 Let X1 and X2 be independent random variables each having the probability distribution f(x) = e^(−x) for x > 0, and 0 elsewhere. Show that the random variables Y1 and Y2 are independent when Y1 = X1 + X2 and Y2 = X1/(X1 + X2).

7.13 A current of I amperes flowing through a resistance of R ohms varies according to the probability distribution f(i) = 6i(1 − i) for 0 < i < 1, and 0 elsewhere. If the resistance varies independently of the current according to the probability distribution g(r) = 2r for 0 < r < 1, and 0 elsewhere, find the probability distribution for the power W = I²R watts.

7.14 Let X be a random variable with probability distribution f(x) = (1 + x)/2 for −1 < x < 1, and 0 elsewhere. Find the probability distribution of the random variable Y = X².
  • 245. 224 Chapter 7 Functions of Random Variables (Optional) 7.15 Let X have the probability distribution f(x) = 2(x+1) 9 , −1 x 2, 0, elsewhere. Find the probability distribution of the random vari- able Y = X2 . 7.16 Show that the rth moment about the origin of the gamma distribution is μ r = βr Γ(α + r) Γ(α) . [Hint: Substitute y = x/β in the integral defining μ r and then use the gamma function to evaluate the inte- gral.] 7.17 A random variable X has the discrete uniform distribution f(x; k) = 1 k , x = 1, 2, . . . , k, 0, elsewhere. Show that the moment-generating function of X is MX (t) = et (1 − ekt ) k(1 − et) . 7.18 A random variable X has the geometric distri- bution g(x; p) = pqx−1 for x = 1, 2, 3, . . . . Show that the moment-generating function of X is MX (t) = pet 1 − qet , t ln q, and then use MX (t) to find the mean and variance of the geometric distribution. 7.19 A random variable X has the Poisson distribu- tion p(x; μ) = e−μ μx /x! for x = 0, 1, 2, . . . . Show that the moment-generating function of X is MX (t) = eμ(et −1) . Using MX (t), find the mean and variance of the Pois- son distribution. 7.20 The moment-generating function of a certain Poisson random variable X is given by MX (t) = e4(et −1) . Find P(μ − 2σ X μ + 2σ). 7.21 Show that the moment-generating function of the random variable X having a chi-squared distribu- tion with v degrees of freedom is MX (t) = (1 − 2t)−v/2 . 7.22 Using the moment-generating function of Exer- cise 7.21, show that the mean and variance of the chi- squared distribution with v degrees of freedom are, re- spectively, v and 2v. 7.23 If both X and Y , distributed independently, fol- low exponential distributions with mean parameter 1, find the distributions of (a) U = X + Y ; (b) V = X/(X + Y ). 7.24 By expanding etx in a Maclaurin series and in- tegrating term by term, show that MX (t) = ∞ −∞ etx f(x) dx = 1 + μt + μ 2 t2 2! + · · · + μ r tr r! + · · · .
  • 246. Chapter 8 Fundamental Sampling Distributions and Data Descriptions 8.1 Random Sampling The outcome of a statistical experiment may be recorded either as a numerical value or as a descriptive representation. When a pair of dice is tossed and the total is the outcome of interest, we record a numerical value. However, if the students of a certain school are given blood tests and the type of blood is of interest, then a descriptive representation might be more useful. A person’s blood can be classified in 8 ways: AB, A, B, or O, each with a plus or minus sign, depending on the presence or absence of the Rh antigen. In this chapter, we focus on sampling from distributions or populations and study such important quantities as the sample mean and sample variance, which will be of vital importance in future chapters. In addition, we attempt to give the reader an introduction to the role that the sample mean and variance will play in statistical inference in later chapters. The use of modern high-speed computers allows the scientist or engineer to greatly enhance his or her use of formal statistical inference with graphical techniques. Much of the time, formal inference appears quite dry and perhaps even abstract to the practitioner or to the manager who wishes to let statistical analysis be a guide to decision-making. Populations and Samples We begin this section by discussing the notions of populations and samples. Both are mentioned in a broad fashion in Chapter 1. However, much more needs to be presented about them here, particularly in the context of the concept of random variables. The totality of observations with which we are concerned, whether their number be finite or infinite, constitutes what we call a population. There was a time when the word population referred to observations obtained from statistical studies about people. 
Today, statisticians use the term to refer to observations relevant to anything of interest, whether it be groups of people, animals, or all possible outcomes from some complicated biological or engineering system.
  • 247. 226 Chapter 8 Fundamental Sampling Distributions and Data Descriptions Definition 8.1: A population consists of the totality of the observations with which we are concerned. The number of observations in the population is defined to be the size of the population. If there are 600 students in the school whom we classified according to blood type, we say that we have a population of size 600. The numbers on the cards in a deck, the heights of residents in a certain city, and the lengths of fish in a particular lake are examples of populations with finite size. In each case, the total number of observations is a finite number. The observations obtained by measuring the atmospheric pressure every day, from the past on into the future, or all measurements of the depth of a lake, from any conceivable position, are examples of populations whose sizes are infinite. Some finite populations are so large that in theory we assume them to be infinite. This is true in the case of the population of lifetimes of a certain type of storage battery being manufactured for mass distribution throughout the country. Each observation in a population is a value of a random variable X having some probability distribution f(x). If one is inspecting items coming off an assembly line for defects, then each observation in the population might be a value 0 or 1 of the Bernoulli random variable X with probability distribution b(x; 1, p) = px q1−x , x = 0, 1 where 0 indicates a nondefective item and 1 indicates a defective item. Of course, it is assumed that p, the probability of any item being defective, remains constant from trial to trial. In the blood-type experiment, the random variable X represents the type of blood and is assumed to take on values from 1 to 8. Each student is given one of the values of the discrete random variable. The lives of the storage batteries are values assumed by a continuous random variable having perhaps a normal distribution. 
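The Bernoulli population just described can be sketched directly in code; the defect probability p = 0.1 below is an assumed value chosen for illustration, not one from the text:

```python
import random

# Each observation in the population is a value 0 or 1 of a Bernoulli
# random variable: 1 indicates a defective item, 0 a nondefective one.
# The defect probability p = 0.1 is assumed for illustration.
random.seed(7)
p = 0.1
population = [1 if random.random() < p else 0 for _ in range(100_000)]

# In a large population, the proportion of defectives is close to p.
print(round(sum(population) / len(population), 2))  # close to 0.1
```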
When we refer hereafter to a “binomial population,” a “nor- mal population,” or, in general, the “population f(x),” we shall mean a population whose observations are values of a random variable having a binomial distribution, a normal distribution, or the probability distribution f(x). Hence, the mean and variance of a random variable or probability distribution are also referred to as the mean and variance of the corresponding population. In the field of statistical inference, statisticians are interested in arriving at con- clusions concerning a population when it is impossible or impractical to observe the entire set of observations that make up the population. For example, in attempting to determine the average length of life of a certain brand of light bulb, it would be impossible to test all such bulbs if we are to have any left to sell. Exorbitant costs can also be a prohibitive factor in studying an entire population. Therefore, we must depend on a subset of observations from the population to help us make inferences concerning that same population. This brings us to consider the notion of sampling. Definition 8.2: A sample is a subset of a population. If our inferences from the sample to the population are to be valid, we must obtain samples that are representative of the population. All too often we are
  • 248. 8.2 Some Important Statistics 227 tempted to choose a sample by selecting the most convenient members of the population. Such a procedure may lead to erroneous inferences concerning the population. Any sampling procedure that produces inferences that consistently overestimate or consistently underestimate some characteristic of the population is said to be biased. To eliminate any possibility of bias in the sampling procedure, it is desirable to choose a random sample in the sense that the observations are made independently and at random. In selecting a random sample of size n from a population f(x), let us define the random variable Xi, i = 1, 2, . . . , n, to represent the ith measurement or sample value that we observe. The random variables X1, X2, . . . , Xn will then constitute a random sample from the population f(x) with numerical values x1, x2, . . . , xn if the measurements are obtained by repeating the experiment n independent times under essentially the same conditions. Because of the identical conditions under which the elements of the sample are selected, it is reasonable to assume that the n random variables X1, X2, . . . , Xn are independent and that each has the same prob- ability distribution f(x). That is, the probability distributions of X1, X2, . . . , Xn are, respectively, f(x1), f(x2), . . . , f(xn), and their joint probability distribution is f(x1, x2, . . . , xn) = f(x1)f(x2) · · · f(xn). The concept of a random sample is described formally by the following definition. Definition 8.3: Let X1, X2, . . . , Xn be n independent random variables, each having the same probability distribution f(x). Define X1, X2, . . . , Xn to be a random sample of size n from the population f(x) and write its joint probability distribution as f(x1, x2, . . . , xn) = f(x1)f(x2) · · · f(xn). 
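Definition 8.3 can be mirrored directly in code: a random sample of size n is simply n independent draws from the same distribution f(x). A minimal sketch, with the normal population's parameters assumed for illustration:

```python
import random

# A random sample of size n from a normal population: n independent
# draws, each with the same distribution f(x) as the population.
# The population mean and standard deviation are assumed values.
random.seed(42)
mu, sigma, n = 800.0, 40.0, 8
sample = [random.gauss(mu, sigma) for _ in range(n)]  # x1, x2, ..., xn
print(len(sample))  # 8
```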
If one makes a random selection of n = 8 storage batteries from a manufacturing process that has maintained the same specification throughout and records the length of life for each battery, with the first measurement x1 being a value of X1, the second measurement x2 a value of X2, and so forth, then x1, x2, . . . , x8 are the values of the random sample X1, X2, . . . , X8. If we assume the population of battery lives to be normal, the possible values of any Xi, i = 1, 2, . . . , 8, will be precisely the same as those in the original population, and hence Xi has the same identical normal distribution as X. 8.2 Some Important Statistics Our main purpose in selecting random samples is to elicit information about the unknown population parameters. Suppose, for example, that we wish to arrive at a conclusion concerning the proportion of coffee-drinkers in the United States who prefer a certain brand of coffee. It would be impossible to question every coffee- drinking American in order to compute the value of the parameter p representing the population proportion. Instead, a large random sample is selected and the proportion p̂ of people in this sample favoring the brand of coffee in question is calculated. The value p̂ is now used to make an inference concerning the true proportion p. Now, p̂ is a function of the observed values in the random sample; since many
  • 249. 228 Chapter 8 Fundamental Sampling Distributions and Data Descriptions random samples are possible from the same population, we would expect p̂ to vary somewhat from sample to sample. That is, p̂ is a value of a random variable that we represent by P. Such a random variable is called a statistic. Definition 8.4: Any function of the random variables constituting a random sample is called a statistic. Location Measures of a Sample: The Sample Mean, Median, and Mode In Chapter 4 we introduced the two parameters μ and σ2 , which measure the center of location and the variability of a probability distribution. These are constant population parameters and are in no way affected or influenced by the observations of a random sample. We shall, however, define some important statistics that describe corresponding measures of a random sample. The most commonly used statistics for measuring the center of a set of data, arranged in order of magnitude, are the mean, median, and mode. Although the first two of these statistics were defined in Chapter 1, we repeat the definitions here. Let X1, X2, . . . , Xn represent n random variables. (a) Sample mean: X̄ = 1 n n i=1 Xi. Note that the statistic X̄ assumes the value x̄ = 1 n n i=1 xi when X1 assumes the value x1, X2 assumes the value x2, and so forth. The term sample mean is applied to both the statistic X̄ and its computed value x̄. (b) Sample median: x̃ = x(n+1)/2, if n is odd, 1 2 (xn/2 + xn/2+1), if n is even. The sample median is also a location measure that shows the middle value of the sample. Examples for both the sample mean and the sample median can be found in Section 1.3. The sample mode is defined as follows. (c) The sample mode is the value of the sample that occurs most often. Example 8.1: Suppose a data set consists of the following observations: 0.32 0.53 0.28 0.37 0.47 0.43 0.36 0.42 0.38 0.43. The sample mode is 0.43, since this value occurs more than any other value. 
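The three location measures just defined can be checked against Example 8.1's data with the Python standard library:

```python
import statistics

# Location measures for the data of Example 8.1.
data = [0.32, 0.53, 0.28, 0.37, 0.47, 0.43, 0.36, 0.42, 0.38, 0.43]

mean = statistics.mean(data)      # sample mean, 0.399 to three decimals
median = statistics.median(data)  # n even: average of the two middle values
mode = statistics.mode(data)      # 0.43, the most frequent value
print(mean, median, mode)
```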
As we suggested in Chapter 1, a measure of location or central tendency in a sample does not by itself give a clear indication of the nature of the sample. Thus, a measure of variability in the sample must also be considered.
  • 250. 8.2 Some Important Statistics 229 Variability Measures of a Sample: The Sample Variance, Standard Deviation, and Range The variability in a sample displays how the observations spread out from the average. The reader is referred to Chapter 1 for more discussion. It is possible to have two sets of observations with the same mean or median that differ considerably in the variability of their measurements about the average. Consider the following measurements, in liters, for two samples of orange juice bottled by companies A and B: Sample A 0.97 1.00 0.94 1.03 1.06 Sample B 1.06 1.01 0.88 0.91 1.14 Both samples have the same mean, 1.00 liter. It is obvious that company A bottles orange juice with a more uniform content than company B. We say that the variability, or the dispersion, of the observations from the average is less for sample A than for sample B. Therefore, in buying orange juice, we would feel more confident that the bottle we select will be close to the advertised average if we buy from company A. In Chapter 1 we introduced several measures of sample variability, including the sample variance, sample standard deviation, and sample range. In this chapter, we will focus mainly on the sample variance. Again, let X1, . . . , Xn represent n random variables. (a) Sample variance: S2 = 1 n − 1 n i=1 (Xi − X̄)2 . (8.2.1) The computed value of S2 for a given sample is denoted by s2 . Note that S2 is essentially defined to be the average of the squares of the deviations of the observations from their mean. The reason for using n − 1 as a divisor rather than the more obvious choice n will become apparent in Chapter 9. Example 8.2: A comparison of coffee prices at 4 randomly selected grocery stores in San Diego showed increases from the previous month of 12, 15, 17, and 20 cents for a 1-pound bag. Find the variance of this random sample of price increases. Solution: Calculating the sample mean, we get x̄ = 12 + 15 + 17 + 20 4 = 16 cents. 
Therefore, s2 = 1 3 4 i=1 (xi − 16)2 = (12 − 16)2 + (15 − 16)2 + (17 − 16)2 + (20 − 16)2 3 = (−4)2 + (−1)2 + (1)2 + (4)2 3 = 34 3 . Whereas the expression for the sample variance best illustrates that S2 is a measure of variability, an alternative expression does have some merit and thus the reader should be aware of it. The following theorem contains this expression.
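Before turning to that alternative expression, Example 8.2's arithmetic can be checked numerically; Python's `statistics.variance` uses the same n − 1 divisor as S²:

```python
import statistics

# Price increases, in cents, from Example 8.2.
x = [12, 15, 17, 20]

s2 = statistics.variance(x)  # (n - 1)-divisor sample variance
print(s2)                    # 34/3 = 11.33...
```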
Theorem 8.1: If S² is the variance of a random sample of size n, we may write

S² = [n ΣXᵢ² − (ΣXᵢ)²] / [n(n − 1)],

where the sums run over i = 1, 2, . . . , n.

Proof: By definition,

S² = (1/(n − 1)) Σ(Xᵢ − X̄)² = (1/(n − 1)) Σ(Xᵢ² − 2X̄Xᵢ + X̄²) = (1/(n − 1)) [ΣXᵢ² − 2X̄ ΣXᵢ + nX̄²].

Since X̄ = (1/n) ΣXᵢ, we have ΣXᵢ = nX̄, so the bracketed quantity reduces to ΣXᵢ² − nX̄². Substituting X̄ = (1/n) ΣXᵢ once more gives

S² = (1/(n − 1)) [ΣXᵢ² − (ΣXᵢ)²/n] = [n ΣXᵢ² − (ΣXᵢ)²] / [n(n − 1)].

As in Chapter 1, the sample standard deviation and the sample range are defined below.

(b) Sample standard deviation: S = √S², where S² is the sample variance.

Let Xmax denote the largest of the Xᵢ values and Xmin the smallest.

(c) Sample range: R = Xmax − Xmin.

Example 8.3: Find the variance of the data 3, 4, 5, 6, 6, and 7, representing the number of trout caught by a random sample of 6 fishermen on June 19, 1996, at Lake Muskoka.
Solution: We find that Σxᵢ² = 171, Σxᵢ = 31, and n = 6. Hence,

s² = [(6)(171) − (31)²] / [(6)(5)] = 13/6.

Thus, the sample standard deviation is s = √(13/6) ≈ 1.47 and the sample range is 7 − 3 = 4.

Exercises

8.1 Define suitable populations from which the following samples are selected:
(a) Persons in 200 homes in the city of Richmond are called on the phone and asked to name the candidate they favor for election to the school board.
(b) A coin is tossed 100 times and 34 tails are recorded.
(c) Two hundred pairs of a new type of tennis shoe were tested on the professional tour and, on average, lasted 4 months.
(d) On five different occasions it took a lawyer 21, 26, 24, 22, and 21 minutes to drive from her suburban home to her midtown office.
  • 252. / / Exercises 231 8.2 The lengths of time, in minutes, that 10 patients waited in a doctor’s office before receiving treatment were recorded as follows: 5, 11, 9, 5, 10, 15, 6, 10, 5, and 10. Treating the data as a random sample, find (a) the mean; (b) the median; (c) the mode. 8.3 The reaction times for a random sample of 9 sub- jects to a stimulant were recorded as 2.5, 3.6, 3.1, 4.3, 2.9. 2.3, 2.6, 4.1, and 3.4 seconds. Calculate (a) the mean; (b) the median. 8.4 The number of tickets issued for traffic violations by 8 state troopers during the Memorial Day weekend are 5, 4, 7, 7, 6, 3, 8, and 6. (a) If these values represent the number of tickets is- sued by a random sample of 8 state troopers from Montgomery County in Virginia, define a suitable population. (b) If the values represent the number of tickets issued by a random sample of 8 state troopers from South Carolina, define a suitable population. 8.5 The numbers of incorrect answers on a true-false competency test for a random sample of 15 students were recorded as follows: 2, 1, 3, 0, 1, 3, 6, 0, 3, 3, 5, 2, 1, 4, and 2. Find (a) the mean; (b) the median; (c) the mode. 8.6 Find the mean, median, and mode for the sample whose observations, 15, 7, 8, 95, 19, 12, 8, 22, and 14, represent the number of sick days claimed on 9 fed- eral income tax returns. Which value appears to be the best measure of the center of these data? State reasons for your preference. 8.7 A random sample of employees from a local man- ufacturing plant pledged the following donations, in dollars, to the United Fund: 100, 40, 75, 15, 20, 100, 75, 50, 30, 10, 55, 75, 25, 50, 90, 80, 15, 25, 45, and 100. Calculate (a) the mean; (b) the mode. 8.8 According to ecology writer Jacqueline Killeen, phosphates contained in household detergents pass right through our sewer systems, causing lakes to turn into swamps that eventually dry up into deserts. 
The following data show the amount of phosphates per load of laundry, in grams, for a random sample of various types of detergents used according to the prescribed directions: Laundry Phosphates per Load Detergent (grams) A P Blue Sail 48 Dash 47 Concentrated All 42 Cold Water All 42 Breeze 41 Oxydol 34 Ajax 31 Sears 30 Fab 29 Cold Power 29 Bold 29 Rinso 26 For the given phosphate data, find (a) the mean; (b) the median; (c) the mode. 8.9 Consider the data in Exercise 8.2, find (a) the range; (b) the standard deviation. 8.10 For the sample of reaction times in Exercise 8.3, calculate (a) the range; (b) the variance, using the formula of form (8.2.1). 8.11 For the data of Exercise 8.5, calculate the vari- ance using the formula (a) of form (8.2.1); (b) in Theorem 8.1. 8.12 The tar contents of 8 brands of cigarettes se- lected at random from the latest list released by the Federal Trade Commission are as follows: 7.3, 8.6, 10.4, 16.1, 12.2, 15.1, 14.5, and 9.3 milligrams. Calculate (a) the mean; (b) the variance. 8.13 The grade-point averages of 20 college seniors selected at random from a graduating class are as fol- lows: 3.2 1.9 2.7 2.4 2.8 2.9 3.8 3.0 2.5 3.3 1.8 2.5 3.7 2.8 2.0 3.2 2.3 2.1 2.5 1.9 Calculate the standard deviation. 8.14 (a) Show that the sample variance is unchanged if a constant c is added to or subtracted from each
  • 253. 232 Chapter 8 Fundamental Sampling Distributions and Data Descriptions value in the sample. (b) Show that the sample variance becomes c2 times its original value if each observation in the sample is multiplied by c. 8.15 Verify that the variance of the sample 4, 9, 3, 6, 4, and 7 is 5.1, and using this fact, along with the results of Exercise 8.14, find (a) the variance of the sample 12, 27, 9, 18, 12, and 21; (b) the variance of the sample 9, 14, 8, 11, 9, and 12. 8.16 In the 2004-05 football season, University of Southern California had the following score differences for the 13 games it played. 11 49 32 3 6 38 38 30 8 40 31 5 36 Find (a) the mean score difference; (b) the median score difference. 8.3 Sampling Distributions The field of statistical inference is basically concerned with generalizations and predictions. For example, we might claim, based on the opinions of several people interviewed on the street, that in a forthcoming election 60% of the eligible voters in the city of Detroit favor a certain candidate. In this case, we are dealing with a random sample of opinions from a very large finite population. As a second il- lustration we might state that the average cost to build a residence in Charleston, South Carolina, is between $330,000 and $335,000, based on the estimates of 3 contractors selected at random from the 30 now building in this city. The popu- lation being sampled here is again finite but very small. Finally, let us consider a soft-drink machine designed to dispense, on average, 240 milliliters per drink. A company official who computes the mean of 40 drinks obtains x̄ = 236 milliliters and, on the basis of this value, decides that the machine is still dispensing drinks with an average content of μ = 240 milliliters. The 40 drinks represent a sam- ple from the infinite population of possible drinks that will be dispensed by this machine. 
Inference about the Population from Sample Information In each of the examples above, we computed a statistic from a sample selected from the population, and from this statistic we made various statements concerning the values of population parameters that may or may not be true. The company official made the decision that the soft-drink machine dispenses drinks with an average content of 240 milliliters, even though the sample mean was 236 milliliters, because he knows from sampling theory that, if μ = 240 milliliters, such a sample value could easily occur. In fact, if he ran similar tests, say every hour, he would expect the values of the statistic x̄ to fluctuate above and below μ = 240 milliliters. Only when the value of x̄ is substantially different from 240 milliliters will the company official initiate action to adjust the machine. Since a statistic is a random variable that depends only on the observed sample, it must have a probability distribution. Definition 8.5: The probability distribution of a statistic is called a sampling distribution. The sampling distribution of a statistic depends on the distribution of the pop- ulation, the size of the samples, and the method of choosing the samples. In the
  • 254. 8.4 Sampling Distribution of Means and the Central Limit Theorem 233 remainder of this chapter we study several of the important sampling distribu- tions of frequently used statistics. Applications of these sampling distributions to problems of statistical inference are considered throughout most of the remaining chapters. The probability distribution of X̄ is called the sampling distribution of the mean. What Is the Sampling Distribution of X̄? We should view the sampling distributions of X̄ and S2 as the mechanisms from which we will be able to make inferences on the parameters μ and σ2 . The sam- pling distribution of X̄ with sample size n is the distribution that results when an experiment is conducted over and over (always with sample size n) and the many values of X̄ result. This sampling distribution, then, describes the variability of sample averages around the population mean μ. In the case of the soft-drink machine, knowledge of the sampling distribution of X̄ arms the analyst with the knowledge of a “typical” discrepancy between an observed x̄ value and true μ. The same principle applies in the case of the distribution of S2 . The sam- pling distribution produces information about the variability of s2 values around σ2 in repeated experiments. 8.4 Sampling Distribution of Means and the Central Limit Theorem The first important sampling distribution to be considered is that of the mean X̄. Suppose that a random sample of n observations is taken from a normal population with mean μ and variance σ2 . Each observation Xi, i = 1, 2, . . . , n, of the random sample will then have the same normal distribution as the population being sampled. Hence, by the reproductive property of the normal distribution established in Theorem 7.11, we conclude that X̄ = 1 n (X1 + X2 + · · · + Xn) has a normal distribution with mean μX̄ = 1 n (μ + μ + · · · + μ n terms ) = μ and variance σ2 X̄ = 1 n2 (σ2 + σ2 + · · · + σ2 n terms ) = σ2 n . 
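These two facts, μX̄ = μ and σ²X̄ = σ²/n, are easy to see by simulation. In the sketch below, the population parameters μ = 10 and σ = 2 are assumed values chosen for illustration:

```python
import random
import statistics

# Repeatedly draw samples of size n from a normal population (assumed
# mu = 10, sigma = 2) and record the sample means Xbar; their average
# and variance should be close to mu and sigma^2 / n.
random.seed(5)
mu, sigma, n = 10.0, 2.0, 25
means = [statistics.mean([random.gauss(mu, sigma) for _ in range(n)])
         for _ in range(10_000)]

print(round(statistics.mean(means), 1))      # close to mu = 10
print(round(statistics.variance(means), 2))  # close to sigma^2/n = 0.16
```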
If we are sampling from a population with unknown distribution, either finite or infinite, the sampling distribution of X̄ will still be approximately normal with mean μ and variance σ2 /n, provided that the sample size is large. This amazing result is an immediate consequence of the following theorem, called the Central Limit Theorem.
The Central Limit Theorem

Theorem 8.2: Central Limit Theorem: If X̄ is the mean of a random sample of size n taken from a population with mean μ and finite variance σ², then the limiting form of the distribution of

Z = (X̄ − μ) / (σ/√n),

as n → ∞, is the standard normal distribution n(z; 0, 1).

The normal approximation for X̄ will generally be good if n ≥ 30, provided the population distribution is not terribly skewed. If n < 30, the approximation is good only if the population is not too different from a normal distribution; as stated above, if the population is known to be normal, the sampling distribution of X̄ will follow a normal distribution exactly, no matter how small the size of the samples. The sample size n = 30 is a guideline to use for the Central Limit Theorem. However, as the statement of the theorem implies, the presumption of normality on the distribution of X̄ becomes more accurate as n grows larger. In fact, Figure 8.1 illustrates how the theorem works. It shows how the distribution of X̄ becomes closer to normal as n grows larger, beginning with the clearly nonsymmetric distribution of an individual observation (n = 1). It also illustrates that the mean of X̄ remains μ for any sample size and the variance of X̄ gets smaller as n increases.

Figure 8.1: Illustration of the Central Limit Theorem (distribution of X̄ for n = 1, small to moderate n, and large n, centered at μ).

Example 8.4: An electrical firm manufactures light bulbs that have a length of life that is approximately normally distributed, with mean equal to 800 hours and a standard deviation of 40 hours. Find the probability that a random sample of 16 bulbs will have an average life of less than 775 hours.
Solution: The sampling distribution of X̄ will be approximately normal, with μX̄ = 800 and σX̄ = 40/√16 = 10.
The desired probability is given by the area of the shaded
region in Figure 8.2.

Figure 8.2: Area for Example 8.4 (region below x̄ = 775 under the sampling distribution centered at 800 with σX̄ = 10).

Corresponding to x̄ = 775, we find that

z = (775 − 800)/10 = −2.5,

and therefore P(X̄ < 775) = P(Z < −2.5) = 0.0062.

Inferences on the Population Mean

One very important application of the Central Limit Theorem is the determination of reasonable values of the population mean μ. Topics such as hypothesis testing, estimation, quality control, and many others make use of the Central Limit Theorem. The following example illustrates the use of the Central Limit Theorem with regard to its relationship with μ, the mean of the population, although the formal application to the foregoing topics is relegated to future chapters. In the following case study, an illustration is given which draws an inference that makes use of the sampling distribution of X̄. In this simple illustration, μ and σ are both known. The Central Limit Theorem and the general notion of sampling distributions are often used to produce evidence about some important aspect of a distribution such as a parameter of the distribution. In the case of the Central Limit Theorem, the parameter of interest is the mean μ. The inference made concerning μ may take one of many forms. Often there is a desire on the part of the analyst that the data (in the form of x̄) support (or not) some predetermined conjecture concerning the value of μ. The use of what we know about the sampling distribution can contribute to answering this type of question. In the following case study, the concept of hypothesis testing leads to a formal objective that we will highlight in future chapters.

Case Study 8.1: Automobile Parts: An important manufacturing process produces cylindrical component parts for the automotive industry. It is important that the process produce
  • 257. 236 Chapter 8 Fundamental Sampling Distributions and Data Descriptions parts having a mean diameter of 5.0 millimeters. The engineer involved conjec- tures that the population mean is 5.0 millimeters. An experiment is conducted in which 100 parts produced by the process are selected randomly and the diameter measured on each. It is known that the population standard deviation is σ = 0.1 millimeter. The experiment indicates a sample average diameter of x̄ = 5.027 mil- limeters. Does this sample information appear to support or refute the engineer’s conjecture? Solution: This example reflects the kind of problem often posed and solved with hypothesis testing machinery introduced in future chapters. We will not use the formality associated with hypothesis testing here, but we will illustrate the principles and logic used. Whether the data support or refute the conjecture depends on the probability that data similar to those obtained in this experiment (x̄ = 5.027) can readily occur when in fact μ = 5.0 (Figure 8.3). In other words, how likely is it that one can obtain x̄ ≥ 5.027 with n = 100 if the population mean is μ = 5.0? If this probability suggests that x̄ = 5.027 is not unreasonable, the conjecture is not refuted. If the probability is quite low, one can certainly argue that the data do not support the conjecture that μ = 5.0. The probability that we choose to compute is given by P(|X̄ − 5| ≥ 0.027). x 4.973 5.027 5.0 Figure 8.3: Area for Case Study 8.1. In other words, if the mean μ is 5, what is the chance that X̄ will deviate by as much as 0.027 millimeter? P(|X̄ − 5| ≥ 0.027) = P(X̄ − 5 ≥ 0.027) + P(X̄ − 5 ≤ −0.027) = 2P X̄ − 5 0.1/ √ 100 ≥ 2.7 . Here we are simply standardizing X̄ according to the Central Limit Theorem. If the conjecture μ = 5.0 is true, X̄−5 0.1/ √ 100 should follow N(0, 1). Thus, 2P X̄ − 5 0.1/ √ 100 ≥ 2.7 = 2P(Z ≥ 2.7) = 2(0.0035) = 0.007.
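The two tail probabilities used so far, read from the normal table in the text, can also be obtained from the standard library's `statistics.NormalDist`:

```python
from statistics import NormalDist

Z = NormalDist()  # the standard normal distribution n(z; 0, 1)

# Example 8.4: P(Xbar < 775) = P(Z < -2.5)
print(round(Z.cdf(-2.5), 4))           # 0.0062

# Case Study 8.1: P(|Xbar - 5.0| >= 0.027) = 2 P(Z >= 2.7)
print(round(2 * (1 - Z.cdf(2.7)), 3))  # 0.007
```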
Therefore, one would by chance observe an x̄ at least 0.027 millimeter from the mean in only about 7 in 1000 experiments. As a result, this experiment with x̄ = 5.027 certainly does not give supporting evidence to the conjecture that μ = 5.0. In fact, it strongly refutes the conjecture!

Example 8.5: Traveling between two campuses of a university in a city via shuttle bus takes, on average, 28 minutes with a standard deviation of 5 minutes. In a given week, a bus transported passengers 40 times. What is the probability that the average transport time was more than 30 minutes? Assume the mean time is measured to the nearest minute.
Solution: In this case, μ = 28 and σ = 5. We need to calculate the probability P(X̄ > 30) with n = 40. Since the time is measured on a continuous scale to the nearest minute, an x̄ greater than 30 is equivalent to x̄ ≥ 30.5. Hence,

P(X̄ > 30) = P( (X̄ − 28)/(5/√40) ≥ (30.5 − 28)/(5/√40) ) = P(Z ≥ 3.16) = 0.0008.

There is only a slight chance that the average time of one bus trip will exceed 30 minutes. An illustrative graph is shown in Figure 8.4.

Figure 8.4: Area for Example 8.5 (region above x̄ = 30.5 under the sampling distribution centered at 28.0).

Sampling Distribution of the Difference between Two Means

The illustration in Case Study 8.1 deals with notions of statistical inference on a single mean μ. The engineer was interested in supporting a conjecture regarding a single population mean. A far more important application involves two populations. A scientist or engineer may be interested in a comparative experiment in which two manufacturing methods, 1 and 2, are to be compared. The basis for that comparison is μ1 − μ2, the difference in the population means. Suppose that we have two populations, the first with mean μ1 and variance σ1², and the second with mean μ2 and variance σ2².
Let the statistic X̄1 represent the mean of a random sample of size n1 selected from the first population, and the statistic X̄2 represent the mean of a random sample of size n2 selected from
the second population, independent of the sample from the first population. What can we say about the sampling distribution of the difference X̄1 − X̄2 for repeated samples of size n1 and n2? According to Theorem 8.2, the variables X̄1 and X̄2 are both approximately normally distributed with means μ1 and μ2 and variances σ₁²/n1 and σ₂²/n2, respectively. This approximation improves as n1 and n2 increase. By choosing independent samples from the two populations we ensure that the variables X̄1 and X̄2 will be independent, and then, using Theorem 7.11 with a1 = 1 and a2 = −1, we can conclude that X̄1 − X̄2 is approximately normally distributed with mean

μX̄1−X̄2 = μX̄1 − μX̄2 = μ1 − μ2

and variance

σ²X̄1−X̄2 = σ²X̄1 + σ²X̄2 = σ₁²/n1 + σ₂²/n2.

The Central Limit Theorem can be easily extended to the two-sample, two-population case.

Theorem 8.3: If independent samples of size n1 and n2 are drawn at random from two populations, discrete or continuous, with means μ1 and μ2 and variances σ₁² and σ₂², respectively, then the sampling distribution of the difference of means, X̄1 − X̄2, is approximately normally distributed with mean and variance given by

μX̄1−X̄2 = μ1 − μ2 and σ²X̄1−X̄2 = σ₁²/n1 + σ₂²/n2.

Hence,

Z = [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ₁²/n1 + σ₂²/n2)

is approximately a standard normal variable.

If both n1 and n2 are greater than or equal to 30, the normal approximation for the distribution of X̄1 − X̄2 is very good when the underlying distributions are not too far away from normal. However, even when n1 and n2 are less than 30, the normal approximation is reasonably good except when the populations are decidedly nonnormal. Of course, if both populations are normal, then X̄1 − X̄2 has a normal distribution no matter what the sizes of n1 and n2 are.
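Theorem 8.3 can be checked empirically with a short simulation; this is a sketch, not part of the text, and the uniform populations, sample sizes, and seed are arbitrary choices of ours:

```python
import random

random.seed(1)

n1, n2, reps = 40, 40, 20000
# Population 1: uniform on (0, 1), mean 0.5, variance 1/12
# Population 2: uniform on (0, 2), mean 1.0, variance 4/12
diffs = []
for _ in range(reps):
    x1 = sum(random.uniform(0, 1) for _ in range(n1)) / n1
    x2 = sum(random.uniform(0, 2) for _ in range(n2)) / n2
    diffs.append(x1 - x2)

mean = sum(diffs) / reps
var = sum((d - mean) ** 2 for d in diffs) / (reps - 1)

# Theorem 8.3 predicts mean mu1 - mu2 = -0.5 and
# variance (1/12)/40 + (4/12)/40, about 0.0104.
print(round(mean, 3), round(var, 4))
```

The simulated mean and variance of the differences land close to the theorem's values even though neither population is normal.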
The utility of the sampling distribution of the difference between two sample averages is very similar to that described in Case Study 8.1 on page 235 for the case of a single mean. Case Study 8.2 that follows focuses on the use of the difference between two sample means to support (or not) the conjecture that two population means are the same. Case Study 8.2: Paint Drying Time: Two independent experiments are run in which two different types of paint are compared. Eighteen specimens are painted using type A, and the drying time, in hours, is recorded for each. The same is done with type B. The population standard deviations are both known to be 1.0.
Assuming that the mean drying time is equal for the two types of paint, find P(X̄A − X̄B > 1.0), where X̄A and X̄B are average drying times for samples of size nA = nB = 18.

Solution: From the sampling distribution of X̄A − X̄B, we know that the distribution is approximately normal with mean

μX̄A−X̄B = μA − μB = 0

and variance

σ²X̄A−X̄B = σ²A/nA + σ²B/nB = 1/18 + 1/18 = 1/9.

Figure 8.5: Area for Case Study 8.2.

The desired probability is given by the shaded region in Figure 8.5. Corresponding to the value X̄A − X̄B = 1.0, we have

z = [1 − (μA − μB)]/√(1/9) = (1 − 0)/(1/3) = 3.0;

so

P(Z > 3.0) = 1 − P(Z < 3.0) = 1 − 0.9987 = 0.0013.

What Do We Learn from Case Study 8.2?

The machinery in the calculation is based on the presumption that μA = μB. Suppose, however, that the experiment is actually conducted for the purpose of drawing an inference regarding the equality of μA and μB, the two population mean drying times. If the two averages differ by as much as 1 hour (or more), this clearly is evidence that would lead one to conclude that the population mean drying time is not equal for the two types of paint. On the other hand, suppose
that the difference in the two sample averages is as small as, say, 15 minutes. If μA = μB,

P[(X̄A − X̄B) > 0.25 hour] = P[(X̄A − X̄B − 0)/√(1/9) > 3/4] = P(Z > 3/4) = 1 − P(Z < 0.75) = 1 − 0.7734 = 0.2266.

Since this probability is not low, one would conclude that a difference in sample means of 15 minutes can happen by chance (i.e., it happens frequently even when μA = μB). As a result, that type of difference in average drying times certainly is not a clear signal that μA ≠ μB.

As we indicated earlier, a more detailed formalism regarding this and other types of statistical inference (e.g., hypothesis testing) will be supplied in future chapters. The Central Limit Theorem and sampling distributions discussed in the next three sections will also play a vital role.

Example 8.6: The television picture tubes of manufacturer A have a mean lifetime of 6.5 years and a standard deviation of 0.9 year, while those of manufacturer B have a mean lifetime of 6.0 years and a standard deviation of 0.8 year. What is the probability that a random sample of 36 tubes from manufacturer A will have a mean lifetime that is at least 1 year more than the mean lifetime of a sample of 49 tubes from manufacturer B?

Solution: We are given the following information:

Population 1: μ1 = 6.5, σ1 = 0.9, n1 = 36
Population 2: μ2 = 6.0, σ2 = 0.8, n2 = 49

If we use Theorem 8.3, the sampling distribution of X̄1 − X̄2 will be approximately normal and will have mean and standard deviation

μX̄1−X̄2 = 6.5 − 6.0 = 0.5 and σX̄1−X̄2 = √(0.81/36 + 0.64/49) = 0.189.

The probability that the mean lifetime for 36 tubes from manufacturer A will be at least 1 year longer than the mean lifetime for 49 tubes from manufacturer B is given by the area of the shaded region in Figure 8.6. Corresponding to the value x̄1 − x̄2 = 1.0, we find that

z = (1.0 − 0.5)/0.189 = 2.65,

and hence

P(X̄1 − X̄2 ≥ 1.0) = P(Z > 2.65) = 1 − P(Z < 2.65) = 1 − 0.9960 = 0.0040.
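Example 8.6 can be reproduced numerically; a sketch using Python's standard library (our own choice of tooling):

```python
from math import sqrt
from statistics import NormalDist

mu1, sigma1, n1 = 6.5, 0.9, 36   # manufacturer A
mu2, sigma2, n2 = 6.0, 0.8, 49   # manufacturer B

mean_diff = mu1 - mu2                             # 0.5
sd_diff = sqrt(sigma1**2 / n1 + sigma2**2 / n2)   # about 0.189

z = (1.0 - mean_diff) / sd_diff                   # about 2.65
p = 1 - NormalDist().cdf(z)                       # P(X1bar - X2bar >= 1.0)

print(round(sd_diff, 3), round(z, 2), round(p, 4))
```

The exact cdf gives the same 0.0040 as the table lookup in the text.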
Figure 8.6: Area for Example 8.6.

More on Sampling Distribution of Means—Normal Approximation to the Binomial Distribution

Section 6.5 presented the normal approximation to the binomial distribution at length. Conditions were given on the parameters n and p for which the distribution of a binomial random variable can be approximated by the normal distribution. Examples and exercises reflected the importance of the concept of the "normal approximation." It turns out that the Central Limit Theorem sheds even more light on how and why this approximation works. We certainly know that a binomial random variable is the number X of successes in n independent trials, where the outcome of each trial is binary. We also illustrated in Chapter 1 that the proportion computed in such an experiment is an average of a set of 0s and 1s. Indeed, while the proportion X/n is an average, X is the sum of this set of 0s and 1s, and both X and X/n are approximately normal if n is sufficiently large. Of course, from what we learned in Chapter 6, we know that there are conditions on n and p that affect the quality of the approximation, namely np ≥ 5 and nq ≥ 5.

Exercises

8.17 If all possible samples of size 16 are drawn from a normal population with mean equal to 50 and standard deviation equal to 5, what is the probability that a sample mean X̄ will fall in the interval from μX̄ − 1.9σX̄ to μX̄ − 0.4σX̄? Assume that the sample means can be measured to any degree of accuracy.

8.18 If the standard deviation of the mean for the sampling distribution of random samples of size 36 from a large or infinite population is 2, how large must the sample size become if the standard deviation is to be reduced to 1.2?

8.19 A certain type of thread is manufactured with a mean tensile strength of 78.3 kilograms and a standard deviation of 5.6 kilograms.
How is the variance of the sample mean changed when the sample size is
(a) increased from 64 to 196?
(b) decreased from 784 to 49?

8.20 Given the discrete uniform population

f(x) = 1/3 for x = 2, 4, 6, and 0 elsewhere,

find the probability that a random sample of size 54, selected with replacement, will yield a sample mean greater than 4.1 but less than 4.4. Assume the means are measured to the nearest tenth.

8.21 A soft-drink machine is regulated so that the amount of drink dispensed averages 240 milliliters with
a standard deviation of 15 milliliters. Periodically, the machine is checked by taking a sample of 40 drinks and computing the average content. If the mean of the 40 drinks is a value within the interval μX̄ ± 2σX̄, the machine is thought to be operating satisfactorily; otherwise, adjustments are made. In Section 8.3, the company official found the mean of 40 drinks to be x̄ = 236 milliliters and concluded that the machine needed no adjustment. Was this a reasonable decision?

8.22 The heights of 1000 students are approximately normally distributed with a mean of 174.5 centimeters and a standard deviation of 6.9 centimeters. Suppose 200 random samples of size 25 are drawn from this population and the means recorded to the nearest tenth of a centimeter. Determine
(a) the mean and standard deviation of the sampling distribution of X̄;
(b) the number of sample means that fall between 172.5 and 175.8 centimeters inclusive;
(c) the number of sample means falling below 172.0 centimeters.

8.23 The random variable X, representing the number of cherries in a cherry puff, has the following probability distribution:

x           4     5     6     7
P(X = x)    0.2   0.4   0.3   0.1

(a) Find the mean μ and the variance σ² of X.
(b) Find the mean μX̄ and the variance σ²X̄ of the mean X̄ for random samples of 36 cherry puffs.
(c) Find the probability that the average number of cherries in 36 cherry puffs will be less than 5.5.

8.24 If a certain machine makes electrical resistors having a mean resistance of 40 ohms and a standard deviation of 2 ohms, what is the probability that a random sample of 36 of these resistors will have a combined resistance of more than 1458 ohms?

8.25 The average life of a bread-making machine is 7 years, with a standard deviation of 1 year.
Assuming that the lives of these machines follow approximately a normal distribution, find
(a) the probability that the mean life of a random sample of 9 such machines falls between 6.4 and 7.2 years;
(b) the value of x̄ to the right of which 15% of the means computed from random samples of size 9 would fall.

8.26 The amount of time that a drive-through bank teller spends on a customer is a random variable with a mean μ = 3.2 minutes and a standard deviation σ = 1.6 minutes. If a random sample of 64 customers is observed, find the probability that their mean time at the teller's window is
(a) at most 2.7 minutes;
(b) more than 3.5 minutes;
(c) at least 3.2 minutes but less than 3.4 minutes.

8.27 In a chemical process, the amount of a certain type of impurity in the output is difficult to control and is thus a random variable. Speculation is that the population mean amount of the impurity is 0.20 gram per gram of output. It is known that the standard deviation is 0.1 gram per gram. An experiment is conducted to gain more insight regarding the speculation that μ = 0.2. The process is run on a lab scale 50 times and the sample average x̄ turns out to be 0.23 gram per gram. Comment on the speculation that the mean amount of impurity is 0.20 gram per gram. Make use of the Central Limit Theorem in your work.

8.28 A random sample of size 25 is taken from a normal population having a mean of 80 and a standard deviation of 5. A second random sample of size 36 is taken from a different normal population having a mean of 75 and a standard deviation of 3. Find the probability that the sample mean computed from the 25 measurements will exceed the sample mean computed from the 36 measurements by at least 3.4 but less than 5.9. Assume the difference of the means to be measured to the nearest tenth.
8.29 The distribution of heights of a certain breed of terrier has a mean of 72 centimeters and a standard deviation of 10 centimeters, whereas the distribution of heights of a certain breed of poodle has a mean of 28 centimeters with a standard deviation of 5 centimeters. Assuming that the sample means can be measured to any degree of accuracy, find the probability that the sample mean for a random sample of heights of 64 terriers exceeds the sample mean for a random sample of heights of 100 poodles by at most 44.2 centimeters.

8.30 The mean score for freshmen on an aptitude test at a certain college is 540, with a standard deviation of 50. Assume the means to be measured to any degree of accuracy. What is the probability that two groups selected at random, consisting of 32 and 50 students, respectively, will differ in their mean scores by
(a) more than 20 points?
(b) an amount between 5 and 10 points?

8.31 Consider Case Study 8.2 on page 238. Suppose 18 specimens were used for each type of paint in an experiment and x̄A − x̄B, the actual difference in mean drying time, turned out to be 1.0.
(a) Does this seem to be a reasonable result if the
two population mean drying times truly are equal? Make use of the result in the solution to Case Study 8.2.
(b) If someone did the experiment 10,000 times under the condition that μA = μB, in how many of those 10,000 experiments would there be a difference x̄A − x̄B that was as large as (or larger than) 1.0?

8.32 Two different box-filling machines are used to fill cereal boxes on an assembly line. The critical measurement influenced by these machines is the weight of the product in the boxes. Engineers are quite certain that the variance of the weight of product is σ² = 1 ounce². Experiments are conducted using both machines with sample sizes of 36 each. The sample averages for machines A and B are x̄A = 4.5 ounces and x̄B = 4.7 ounces. Engineers are surprised that the two sample averages for the filling machines are so different.
(a) Use the Central Limit Theorem to determine P(X̄B − X̄A ≥ 0.2) under the condition that μA = μB.
(b) Do the aforementioned experiments seem to, in any way, strongly support a conjecture that the population means for the two machines are different? Explain using your answer in (a).

8.33 The chemical benzene is highly toxic to humans. However, it is used in the manufacture of many medicines, dyes, leather goods, and coverings. Government regulations dictate that for any production process involving benzene, the water in the output of the process must not exceed 7950 parts per million (ppm) of benzene. For a particular process of concern, the water sample was collected by a manufacturer 25 times randomly and the sample average x̄ was 7960 ppm. It is known from historical data that the standard deviation σ is 100 ppm.
(a) What is the probability that the sample average in this experiment would exceed the government limit if the population mean is equal to the limit? Use the Central Limit Theorem.
(b) Is an observed x̄ = 7960 in this experiment firm evidence that the population mean for the process exceeds the government limit? Answer your question by computing P(X̄ ≥ 7960 | μ = 7950). Assume that the distribution of benzene concentration is normal.

8.34 Two alloys A and B are being used to manufacture a certain steel product. An experiment needs to be designed to compare the two in terms of maximum load capacity in tons (the maximum weight that can be tolerated without breaking). It is known that the two standard deviations in load capacity are equal at 5 tons each. An experiment is conducted in which 30 specimens of each alloy (A and B) are tested and the results recorded as follows:

x̄A = 49.5, x̄B = 45.5; x̄A − x̄B = 4.

The manufacturers of alloy A are convinced that this evidence shows conclusively that μA > μB and strongly supports the claim that their alloy is superior. Manufacturers of alloy B claim that the experiment could easily have given x̄A − x̄B = 4 even if the two population means are equal. In other words, "the results are inconclusive!"
(a) Make an argument that manufacturers of alloy B are wrong. Do it by computing P(X̄A − X̄B > 4 | μA = μB).
(b) Do you think these data strongly support alloy A?

8.35 Consider the situation described in Example 8.4 on page 234. Do these results prompt you to question the premise that μ = 800 hours? Give a probabilistic result that indicates how rare an event X̄ ≤ 775 is when μ = 800. On the other hand, how rare would it be if μ truly were, say, 760 hours?

8.36 Let X1, X2, . . . , Xn be a random sample from a distribution that can take on only positive values. Use the Central Limit Theorem to produce an argument that if n is sufficiently large, then Y = X1X2 · · · Xn has approximately a lognormal distribution.

8.5 Sampling Distribution of S²

In the preceding section we learned about the sampling distribution of X̄. The Central Limit Theorem allowed us to make use of the fact that

(X̄ − μ)/(σ/√n)
tends toward N(0, 1) as the sample size grows large. Sampling distributions of important statistics allow us to learn information about parameters. Usually, the parameters are the counterpart to the statistics in question. For example, if an engineer is interested in the population mean resistance of a certain type of resistor, the sampling distribution of X̄ will be exploited once the sample information is gathered. On the other hand, if the variability in resistance is to be studied, clearly the sampling distribution of S² will be used in learning about the parametric counterpart, the population variance σ².

If a random sample of size n is drawn from a normal population with mean μ and variance σ², and the sample variance is computed, we obtain a value of the statistic S². We shall proceed to consider the distribution of the statistic (n − 1)S²/σ². By the addition and subtraction of the sample mean X̄, it is easy to see that

Σ_{i=1}^n (Xi − μ)² = Σ_{i=1}^n [(Xi − X̄) + (X̄ − μ)]²
= Σ_{i=1}^n (Xi − X̄)² + Σ_{i=1}^n (X̄ − μ)² + 2(X̄ − μ) Σ_{i=1}^n (Xi − X̄)
= Σ_{i=1}^n (Xi − X̄)² + n(X̄ − μ)².

Dividing each term of the equality by σ² and substituting (n − 1)S² for Σ_{i=1}^n (Xi − X̄)², we obtain

(1/σ²) Σ_{i=1}^n (Xi − μ)² = (n − 1)S²/σ² + (X̄ − μ)²/(σ²/n).

Now, according to Corollary 7.1 on page 222, we know that Σ_{i=1}^n (Xi − μ)²/σ² is a chi-squared random variable with n degrees of freedom. We have a chi-squared random variable with n degrees of freedom partitioned into two components. Note that in Section 6.7 we showed that a chi-squared distribution is a special case of a gamma distribution. The second term on the right-hand side is Z², which is a chi-squared random variable with 1 degree of freedom, and it turns out that (n − 1)S²/σ² is a chi-squared random variable with n − 1 degrees of freedom. We formalize this in the following theorem.
Theorem 8.4: If S² is the variance of a random sample of size n taken from a normal population having the variance σ², then the statistic

χ² = (n − 1)S²/σ² = Σ_{i=1}^n (Xi − X̄)²/σ²

has a chi-squared distribution with v = n − 1 degrees of freedom.

The values of the random variable χ² are calculated from each sample by the
formula χ² = (n − 1)s²/σ². The probability that a random sample produces a χ² value greater than some specified value is equal to the area under the curve to the right of this value. It is customary to let χ²α represent the χ² value above which we find an area of α. This is illustrated by the shaded region in Figure 8.7.

Figure 8.7: The chi-squared distribution.

Table A.5 gives values of χ²α for various values of α and v. The areas, α, are the column headings; the degrees of freedom, v, are given in the left column; and the table entries are the χ² values. Hence, the χ² value with 7 degrees of freedom, leaving an area of 0.05 to the right, is χ²0.05 = 14.067. Owing to lack of symmetry, we must also use the tables to find χ²0.95 = 2.167 for v = 7.

Exactly 95% of a chi-squared distribution lies between χ²0.975 and χ²0.025. A χ² value falling to the right of χ²0.025 is not likely to occur unless our assumed value of σ² is too small. Similarly, a χ² value falling to the left of χ²0.975 is unlikely unless our assumed value of σ² is too large. In other words, it is possible to have a χ² value to the left of χ²0.975 or to the right of χ²0.025 when σ² is correct, but if this should occur, it is more probable that the assumed value of σ² is in error.

Example 8.7: A manufacturer of car batteries guarantees that the batteries will last, on average, 3 years with a standard deviation of 1 year. If five of these batteries have lifetimes of 1.9, 2.4, 3.0, 3.5, and 4.2 years, should the manufacturer still be convinced that the batteries have a standard deviation of 1 year? Assume that the battery lifetime follows a normal distribution.

Solution: We first find the sample variance using Theorem 8.1,

s² = [(5)(48.26) − (15)²]/[(5)(4)] = 0.815.

Then

χ² = (4)(0.815)/1 = 3.26
is a value from a chi-squared distribution with 4 degrees of freedom. Since 95% of the χ² values with 4 degrees of freedom fall between 0.484 and 11.143, the computed value with σ² = 1 is reasonable, and therefore the manufacturer has no reason to suspect that the standard deviation is other than 1 year.

Degrees of Freedom as a Measure of Sample Information

Recall from Corollary 7.1 in Section 7.3 that

Σ_{i=1}^n (Xi − μ)²/σ²

has a χ²-distribution with n degrees of freedom. Note also Theorem 8.4, which indicates that the random variable

(n − 1)S²/σ² = Σ_{i=1}^n (Xi − X̄)²/σ²

has a χ²-distribution with n − 1 degrees of freedom. The reader may also recall that the term degrees of freedom, used in this identical context, is discussed in Chapter 1.

As we indicated earlier, the proof of Theorem 8.4 will not be given. However, the reader can view Theorem 8.4 as indicating that when μ is not known and one considers the distribution of Σ_{i=1}^n (Xi − X̄)²/σ², there is 1 less degree of freedom, or a degree of freedom is lost in the estimation of μ (i.e., when μ is replaced by x̄). In other words, there are n degrees of freedom, or independent pieces of information, in the random sample from the normal distribution. When the data (the values in the sample) are used to compute the mean, there is 1 less degree of freedom in the information used to estimate σ².

8.6 t-Distribution

In Section 8.4, we discussed the utility of the Central Limit Theorem. Its applications revolve around inferences on a population mean or the difference between two population means. Use of the Central Limit Theorem and the normal distribution is certainly helpful in this context. However, it was assumed that the population standard deviation is known. This assumption may not be unreasonable in situations where the engineer is quite familiar with the system or process.
However, in many experimental scenarios, knowledge of σ is certainly no more reasonable than knowledge of the population mean μ. Often, in fact, an estimate of σ must be supplied by the same sample information that produced the sample average x̄. As a result, a natural statistic to consider to deal with inferences on μ is

T = (X̄ − μ)/(S/√n),
since S is the sample analog to σ. If the sample size is small, the values of S² fluctuate considerably from sample to sample (see Exercise 8.43 on page 259) and the distribution of T deviates appreciably from that of a standard normal distribution.

If the sample size is large enough, say n ≥ 30, the distribution of T does not differ considerably from the standard normal. However, for n < 30, it is useful to deal with the exact distribution of T. In developing the sampling distribution of T, we shall assume that our random sample was selected from a normal population. We can then write

T = [(X̄ − μ)/(σ/√n)] / √(S²/σ²) = Z / √(V/(n − 1)),

where

Z = (X̄ − μ)/(σ/√n)

has the standard normal distribution and

V = (n − 1)S²/σ²

has a chi-squared distribution with v = n − 1 degrees of freedom. In sampling from normal populations, we can show that X̄ and S² are independent, and consequently so are Z and V. The following theorem gives the definition of a random variable T as a function of Z (standard normal) and χ². For completeness, the density function of the t-distribution is given.

Theorem 8.5: Let Z be a standard normal random variable and V a chi-squared random variable with v degrees of freedom. If Z and V are independent, then the distribution of the random variable T, where

T = Z / √(V/v),

is given by the density function

h(t) = {Γ[(v + 1)/2] / [Γ(v/2)√(πv)]} (1 + t²/v)^{−(v+1)/2}, for −∞ < t < ∞.

This is known as the t-distribution with v degrees of freedom.

From the foregoing and the theorem above we have the following corollary.
Corollary 8.1: Let X1, X2, . . . , Xn be independent random variables that are all normal with mean μ and standard deviation σ. Let

X̄ = (1/n) Σ_{i=1}^n Xi and S² = [1/(n − 1)] Σ_{i=1}^n (Xi − X̄)².

Then the random variable T = (X̄ − μ)/(S/√n) has a t-distribution with v = n − 1 degrees of freedom.

The probability distribution of T was first published in 1908 in a paper written by W. S. Gosset. At the time, Gosset was employed by an Irish brewery that prohibited publication of research by members of its staff. To circumvent this restriction, he published his work secretly under the name "Student." Consequently, the distribution of T is usually called the Student t-distribution or simply the t-distribution. In deriving the equation of this distribution, Gosset assumed that the samples were selected from a normal population. Although this would seem to be a very restrictive assumption, it can be shown that nonnormal populations possessing nearly bell-shaped distributions will still provide values of T that approximate the t-distribution very closely.

What Does the t-Distribution Look Like?

The distribution of T is similar to the distribution of Z in that they both are symmetric about a mean of zero. Both distributions are bell shaped, but the t-distribution is more variable, owing to the fact that the T-values depend on the fluctuations of two quantities, X̄ and S², whereas the Z-values depend only on the changes in X̄ from sample to sample. The distribution of T differs from that of Z in that the variance of T depends on the sample size n and is always greater than 1. Only when the sample size n → ∞ will the two distributions become the same. In Figure 8.8, we show the relationship between a standard normal distribution (v = ∞) and t-distributions with 2 and 5 degrees of freedom. The percentage points of the t-distribution are given in Table A.4.
Figure 8.8: The t-distribution curves for v = 2, 5, and ∞.

Figure 8.9: Symmetry property (about 0) of the t-distribution.
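The claim above, that the variance of T is always greater than 1 for finite v, can be seen in a quick Monte Carlo sketch; the choice v = 5 (for which the theoretical variance is v/(v − 2) = 5/3), the seed, and the simulation size are ours:

```python
import random

random.seed(2)

v, reps = 5, 200000
tvals = []
for _ in range(reps):
    z = random.gauss(0, 1)
    # chi-squared with v d.f. as a sum of squared standard normals
    chisq = sum(random.gauss(0, 1) ** 2 for _ in range(v))
    tvals.append(z / (chisq / v) ** 0.5)

mean = sum(tvals) / reps
var = sum((t - mean) ** 2 for t in tvals) / (reps - 1)
print(round(mean, 2), round(var, 2))  # mean near 0, variance near 5/3 > 1
```

The simulated variance sits well above 1, consistent with the heavier tails visible in Figure 8.8.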
It is customary to let tα represent the t-value above which we find an area equal to α. Hence, the t-value with 10 degrees of freedom leaving an area of 0.025 to the right is t0.025 = 2.228. Since the t-distribution is symmetric about a mean of zero, we have t1−α = −tα; that is, the t-value leaving an area of 1 − α to the right and therefore an area of α to the left is equal to the negative t-value that leaves an area of α in the right tail of the distribution (see Figure 8.9). That is, t0.95 = −t0.05, t0.99 = −t0.01, and so forth.

Example 8.8: The t-value with v = 14 degrees of freedom that leaves an area of 0.025 to the left, and therefore an area of 0.975 to the right, is t0.975 = −t0.025 = −2.145.

Example 8.9: Find P(−t0.025 < T < t0.05).

Solution: Since t0.05 leaves an area of 0.05 to the right, and −t0.025 leaves an area of 0.025 to the left, we find a total area of 1 − 0.05 − 0.025 = 0.925 between −t0.025 and t0.05. Hence

P(−t0.025 < T < t0.05) = 0.925.

Example 8.10: Find k such that P(k < T < −1.761) = 0.045 for a random sample of size 15 selected from a normal distribution and T = (X̄ − μ)/(S/√n).

Figure 8.10: The t-values for Example 8.10.

Solution: From Table A.4 we note that 1.761 corresponds to t0.05 when v = 14. Therefore, −t0.05 = −1.761. Since k in the original probability statement is to the left of −t0.05 = −1.761, let k = −tα. Then, from Figure 8.10, we have

0.045 = 0.05 − α, or α = 0.005.

Hence, from Table A.4 with v = 14,

k = −t0.005 = −2.977 and P(−2.977 < T < −1.761) = 0.045.
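The Table A.4 entries used in Example 8.10 can be sanity-checked by simulating T directly from its definition with v = 14; this is a sketch, with a seed and simulation size of our choosing:

```python
import random

random.seed(3)

v, reps = 14, 200000
count = 0
for _ in range(reps):
    z = random.gauss(0, 1)
    # chi-squared with v d.f. as a sum of squared standard normals
    chisq = sum(random.gauss(0, 1) ** 2 for _ in range(v))
    t = z / (chisq / v) ** 0.5
    if -2.977 < t < -1.761:  # between -t_{0.005} and -t_{0.05}
        count += 1

print(count / reps)  # close to 0.05 - 0.005 = 0.045
```

The simulated fraction agrees with the table-based answer of 0.045 to within Monte Carlo error.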
Exactly 95% of the values of a t-distribution with v = n − 1 degrees of freedom lie between −t0.025 and t0.025. Of course, there are other t-values that contain 95% of the distribution, such as −t0.02 and t0.03, but these values do not appear in Table A.4, and furthermore, the shortest possible interval is obtained by choosing t-values that leave exactly the same area in the two tails of our distribution. A t-value that falls below −t0.025 or above t0.025 would tend to make us believe either that a very rare event has taken place or that our assumption about μ is in error. Should this happen, we shall make the decision that our assumed value of μ is in error. In fact, a t-value falling below −t0.01 or above t0.01 would provide even stronger evidence that our assumed value of μ is quite unlikely. General procedures for testing claims concerning the value of the parameter μ will be treated in Chapter 10. A preliminary look into the foundation of these procedures is illustrated by the following example.

Example 8.11: A chemical engineer claims that the population mean yield of a certain batch process is 500 grams per milliliter of raw material. To check this claim he samples 25 batches each month. If the computed t-value falls between −t0.05 and t0.05, he is satisfied with this claim. What conclusion should he draw from a sample that has a mean x̄ = 518 grams per milliliter and a sample standard deviation s = 40 grams? Assume the distribution of yields to be approximately normal.

Solution: From Table A.4 we find that t0.05 = 1.711 for 24 degrees of freedom. Therefore, the engineer can be satisfied with his claim if a sample of 25 batches yields a t-value between −1.711 and 1.711. If μ = 500, then

t = (518 − 500)/(40/√25) = 2.25,

a value well above 1.711. The probability of obtaining a t-value, with v = 24, equal to or greater than 2.25 is approximately 0.02.
If μ > 500, the value of t computed from the sample is more reasonable. Hence, the engineer is likely to conclude that the process produces a better product than he thought.

What Is the t-Distribution Used For?

The t-distribution is used extensively in problems that deal with inference about the population mean (as illustrated in Example 8.11) or in problems that involve comparative samples (i.e., in cases where one is trying to determine if means from two samples are significantly different). The use of the distribution will be extended in Chapters 9, 10, 11, and 12. The reader should note that use of the t-distribution for the statistic

T = (X̄ − μ)/(S/√n)

requires that X1, X2, . . . , Xn be normal. The use of the t-distribution and the sample size consideration do not relate to the Central Limit Theorem. The use of the standard normal distribution rather than T for n ≥ 30 merely implies that S is a sufficiently good estimator of σ in this case. In the chapters that follow, the t-distribution finds extensive usage.
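Returning to Example 8.11, the t-value itself is a one-line computation; a sketch that automates only the standardization, not the Table A.4 lookup:

```python
from math import sqrt

# Example 8.11: claimed mean 500, sample of n = 25 batches
xbar, mu0, s, n = 518, 500, 40, 25

t = (xbar - mu0) / (s / sqrt(n))
print(t)  # 2.25, well above t_{0.05} = 1.711 for v = 24
```

Since the computed value 2.25 falls outside (−1.711, 1.711), the engineer's claim of μ = 500 is not supported by this sample.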
8.7 F-Distribution

We have motivated the t-distribution in part by its application to problems in which there is comparative sampling (i.e., a comparison between two sample means). For example, some of our examples in future chapters will take a more formal approach: a chemical engineer collects data on two catalysts, a biologist collects data on two growth media, or a chemist gathers data on two methods of coating material to inhibit corrosion. While it is of interest to let sample information shed light on two population means, it is often the case that a comparison of variability is equally important, if not more so. The F-distribution finds enormous application in comparing sample variances. Applications of the F-distribution are found in problems involving two or more samples.

The statistic F is defined to be the ratio of two independent chi-squared random variables, each divided by its number of degrees of freedom. Hence, we can write

F = (U/v1)/(V/v2),

where U and V are independent random variables having chi-squared distributions with v1 and v2 degrees of freedom, respectively. We shall now state the sampling distribution of F.

Theorem 8.6: Let U and V be two independent random variables having chi-squared distributions with v1 and v2 degrees of freedom, respectively. Then the distribution of the random variable F = (U/v1)/(V/v2) is given by the density function

h(f) = {Γ[(v1 + v2)/2] (v1/v2)^{v1/2} / [Γ(v1/2)Γ(v2/2)]} · f^{(v1/2)−1}/(1 + v1f/v2)^{(v1+v2)/2} for f > 0, and h(f) = 0 for f ≤ 0.

This is known as the F-distribution with v1 and v2 degrees of freedom (d.f.).

We will make considerable use of the random variable F in future chapters. However, the density function will not be used and is given only for completeness. The curve of the F-distribution depends not only on the two parameters v1 and v2 but also on the order in which we state them. Once these two values are given, we can identify the curve. Typical F-distributions are shown in Figure 8.11.
Let fα be the f-value above which we find an area equal to α. This is illustrated by the shaded region in Figure 8.12. Table A.6 gives values of fα only for α = 0.05 and α = 0.01 for various combinations of the degrees of freedom v1 and v2. Hence, the f-value with 6 and 10 degrees of freedom, leaving an area of 0.05 to the right, is f0.05 = 3.22. By means of the following theorem, Table A.6 can also be used to find values of f0.95 and f0.99. The proof is left for the reader.
252 Chapter 8 Fundamental Sampling Distributions and Data Descriptions

[Figure 8.11: Typical F-distributions, with d.f. (6, 10) and d.f. (10, 30).]

[Figure 8.12: Illustration of the fα for the F-distribution.]

Theorem 8.7: Writing fα(v1, v2) for fα with v1 and v2 degrees of freedom, we obtain

f1−α(v1, v2) = 1/fα(v2, v1).

Thus, the f-value with 6 and 10 degrees of freedom, leaving an area of 0.95 to the right, is

f0.95(6, 10) = 1/f0.05(10, 6) = 1/4.06 = 0.246.

The F-Distribution with Two Sample Variances

Suppose that random samples of size n1 and n2 are selected from two normal populations with variances σ1² and σ2², respectively. From Theorem 8.4, we know that

χ1² = (n1 − 1)S1²/σ1² and χ2² = (n2 − 1)S2²/σ2²

are random variables having chi-squared distributions with v1 = n1 − 1 and v2 = n2 − 1 degrees of freedom. Furthermore, since the samples are selected at random, we are dealing with independent random variables. Then, using Theorem 8.6 with χ1² = U and χ2² = V, we obtain the following result.

Theorem 8.8: If S1² and S2² are the variances of independent random samples of size n1 and n2 taken from normal populations with variances σ1² and σ2², respectively, then

F = (S1²/σ1²)/(S2²/σ2²) = σ2²S1²/(σ1²S2²)

has an F-distribution with v1 = n1 − 1 and v2 = n2 − 1 degrees of freedom.
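Theorem 8.7 and the statistic of Theorem 8.8 translate directly into code; a minimal sketch using the table values quoted in the text (function names are ours):

```python
def lower_tail_f(f_alpha_swapped):
    """Theorem 8.7: f_{1-α}(v1, v2) = 1 / f_α(v2, v1)."""
    return 1 / f_alpha_swapped

# f0.05 with (10, 6) d.f. is 4.06 (Table A.6), so f0.95(6, 10) = 1/4.06
print(round(lower_tail_f(4.06), 3))  # 0.246, as in the text

def f_statistic(s1_sq, s2_sq, sigma1_sq=1.0, sigma2_sq=1.0):
    """Theorem 8.8: F = (S1²/σ1²) / (S2²/σ2²).

    Under equal population variances this reduces to the ratio S1²/S2².
    """
    return (s1_sq / sigma1_sq) / (s2_sq / sigma2_sq)
```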
What Is the F-Distribution Used For?

We answered this question, in part, at the beginning of this section. The F-distribution is used in two-sample situations to draw inferences about the population variances. This involves the application of Theorem 8.8. However, the F-distribution can also be applied to many other types of problems involving sample variances. In fact, the F-distribution is called the variance ratio distribution. As an illustration, consider Case Study 8.2, in which two paints, A and B, were compared with regard to mean drying time. The normal distribution applies nicely (assuming that σA and σB are known). However, suppose that there are three types of paints to compare, say A, B, and C. We wish to determine if the population means are equivalent. Suppose that important summary information from the experiment is as follows:

Paint   Sample Mean   Sample Variance   Sample Size
A       X̄A = 4.5      s²A = 0.20        10
B       X̄B = 5.5      s²B = 0.14        10
C       X̄C = 6.5      s²C = 0.11        10

The problem centers around whether or not the sample averages (x̄A, x̄B, x̄C) are far enough apart. The implication of “far enough apart” is very important. It would seem reasonable that if the variability between sample averages is larger than what one would expect by chance, the data do not support the conclusion that μA = μB = μC. Whether these sample averages could have occurred by chance depends on the variability within samples, as quantified by s²A, s²B, and s²C. The notion of the important components of variability is best seen through some simple graphics. Consider the plot of raw data from samples A, B, and C, shown in Figure 8.13. These data could easily have generated the above summary information.

[Figure 8.13: Data from three distinct samples, plotted around x̄A = 4.5, x̄B = 5.5, and x̄C = 6.5.]
It appears evident that the data came from distributions with different pop- ulation means, although there is some overlap between the samples. An analysis that involves all of the data would attempt to determine if the variability between the sample averages and the variability within the samples could have occurred jointly if in fact the populations have a common mean. Notice that the key to this analysis centers around the two following sources of variability. (1) Variability within samples (between observations in distinct samples) (2) Variability between samples (between sample averages) Clearly, if the variability in (1) is considerably larger than that in (2), there will be considerable overlap in the sample data, a signal that the data could all have come
from a common distribution. An example is found in the data set shown in Figure 8.14. On the other hand, it is very unlikely that data from distributions with a common mean could have variability between sample averages that is considerably larger than the variability within samples.

[Figure 8.14: Data that easily could have come from the same population.]

The sources of variability in (1) and (2) above generate important ratios of sample variances, and these ratios are used in conjunction with the F-distribution. The general procedure involved is called analysis of variance. It is interesting that in the paint example described here, we are dealing with inferences on three population means, but two sources of variability are used. We will not supply details here, but in Chapters 13 through 15 we make extensive use of analysis of variance, and, of course, the F-distribution plays an important role.

8.8 Quantile and Probability Plots

In Chapter 1 we introduced the reader to empirical distributions. The motivation is to use creative displays to extract information about properties of a set of data. For example, stem-and-leaf plots provide the viewer with a look at symmetry and other properties of the data. In this chapter we deal with samples, which, of course, are collections of experimental data from which we draw conclusions about populations. Often the appearance of the sample provides information about the distribution from which the data are taken. For example, in Chapter 1 we illustrated the general nature of pairs of samples with point plots that displayed a relative comparison between central tendency and variability in two samples. In chapters that follow, we often make the assumption that a distribution is normal.
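The relative sizes of the two sources of variability can be computed directly from the paint summary data in the text; a minimal sketch (the comparison formed here is the idea behind the analysis-of-variance F-ratio developed in Chapters 13 through 15):

```python
import statistics

# Summary data for paints A, B, C from the text (n = 10 observations each)
means = [4.5, 5.5, 6.5]
variances = [0.20, 0.14, 0.11]
n = 10

# (2) Variability between samples: under a common mean, Var(X̄) = σ²/n,
# so n times the sample variance of the three averages estimates σ².
between = n * statistics.variance(means)

# (1) Variability within samples: with equal sample sizes, the pooled
# within-sample variance is just the average of the three variances.
within = statistics.mean(variances)

# A between-to-within ratio far above 1 signals averages "far enough apart."
print(round(between, 2), round(within, 3))
```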
Graphical information regarding the validity of this assumption can be retrieved from displays like stem-and-leaf plots and frequency histograms. In addition, we will introduce the notion of normal probability plots and quantile plots in this section. These plots are used in studies that have varying degrees of complexity, with the main objective of the plots being to provide a diagnostic check on the assumption that the data came from a normal distribution.

We can characterize statistical analysis as the process of drawing conclusions about systems in the presence of system variability. For example, an engineer’s attempt to learn about a chemical process is often clouded by process variability. A study involving the number of defective items in a production process is often made more difficult by variability in the method of manufacture of the items. In what has preceded, we have learned about samples and statistics that express center of location and variability in the sample. These statistics provide single measures, whereas a graphical display adds additional information through a picture.

One type of plot that can be particularly useful in characterizing the nature of a data set is the quantile plot. As in the case of the box-and-whisker plot (Section
1.6), one can use the basic ideas in the quantile plot to compare samples of data, where the goal of the analyst is to draw distinctions. Further illustrations of this type of usage of quantile plots will be given in future chapters, where the formal statistical inference associated with comparing samples is discussed. At that point, case studies will expose the reader to both the formal inference and the diagnostic graphics for the same data set.

Quantile Plot

The purpose of the quantile plot is to depict, in sample form, the cumulative distribution function discussed in Chapter 3.

Definition 8.6: A quantile of a sample, q(f), is a value for which a specified fraction f of the data values is less than or equal to q(f).

Obviously, a quantile represents an estimate of a characteristic of a population, or rather, the theoretical distribution. The sample median is q(0.5). The 75th percentile (upper quartile) is q(0.75) and the lower quartile is q(0.25). A quantile plot simply plots the data values on the vertical axis against an empirical assessment of the fraction of observations exceeded by the data value. For theoretical purposes, this fraction is computed as

fi = (i − 3/8)/(n + 1/4),

where i is the order of the observations when they are ranked from low to high. In other words, if we denote the ranked observations as

y(1) ≤ y(2) ≤ y(3) ≤ · · · ≤ y(n−1) ≤ y(n),

then the quantile plot depicts a plot of y(i) against fi. In Figure 8.15, the quantile plot is given for the paint can ear data discussed previously.

Unlike the box-and-whisker plot, the quantile plot actually shows all observations. All quantiles, including the median and the upper and lower quartiles, can be approximated visually. For example, we readily observe a median of 35 and an upper quartile of about 36. Relatively large clusters around specific values are indicated by slopes near zero, while sparse data in certain areas produce steeper slopes.
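The plotting fractions fi and the quantile-plot coordinates are easy to compute; a minimal sketch (the sample values here are hypothetical):

```python
def plotting_fractions(n):
    """Empirical fractions f_i = (i - 3/8) / (n + 1/4) for i = 1, ..., n."""
    return [(i - 0.375) / (n + 0.25) for i in range(1, n + 1)]

def quantile_plot_points(data):
    """Pairs (f_i, y_(i)): ranked observations against their fractions."""
    y = sorted(data)
    return list(zip(plotting_fractions(len(y)), y))

# Hypothetical small sample; the middle point of an odd-sized sample
# sits at f = 0.5, the sample median.
pts = quantile_plot_points([35, 29, 36, 34, 37])
print(pts)
```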
Figure 8.15 depicts sparsity of data from the values 28 through 30 but relatively high density at 36 through 38. In Chapters 9 and 10 we pursue quantile plotting further by illustrating useful ways of comparing distinct samples. It should be somewhat evident to the reader that detection of whether or not a data set came from a normal distribution can be an important tool for the data analyst. As we indicated earlier in this section, we often make the assumption that all or subsets of observations in a data set are realizations of independent identically distributed normal random variables. Once again, the diagnostic plot can often nicely augment (for display purposes) a formal goodness-of-fit test on the data. Goodness-of-fit tests are discussed in Chapter 10. Readers of a scientific paper or report tend to find diagnostic information much clearer, less dry, and perhaps less boring than a formal analysis. In later chapters (Chapters 9 through 13), we focus
again on methods of detecting deviations from normality as an augmentation of formal statistical inference.

[Figure 8.15: Quantile plot for paint data.]

Quantile plots are useful in detection of distribution types. There are also situations in both model building and design of experiments in which the plots are used to detect important model terms or effects that are active. In other situations, they are used to determine whether or not the underlying assumptions made by the scientist or engineer in building the model are reasonable. Many examples with illustrations will be encountered in Chapters 11, 12, and 13. The following subsection provides a discussion and illustration of a diagnostic plot called the normal quantile-quantile plot.

Normal Quantile-Quantile Plot

The normal quantile-quantile plot takes advantage of what is known about the quantiles of the normal distribution. The methodology involves a plot of the empirical quantiles recently discussed against the corresponding quantiles of the normal distribution. Now, the expression for a quantile of an N(μ, σ) random variable is very complicated. However, a good approximation is given by

qμ,σ(f) = μ + σ{4.91[f^0.14 − (1 − f)^0.14]}.

The expression in braces (the multiple of σ) is the approximation for the corresponding quantile of the N(0, 1) random variable, that is,

q0,1(f) = 4.91[f^0.14 − (1 − f)^0.14].
Definition 8.7: The normal quantile-quantile plot is a plot of y(i) (ordered observations) against q0,1(fi), where fi = (i − 3/8)/(n + 1/4).

A nearly straight-line relationship suggests that the data came from a normal distribution. The intercept on the vertical axis is an estimate of the population mean μ and the slope is an estimate of the standard deviation σ. Figure 8.16 shows a normal quantile-quantile plot for the paint can data.

[Figure 8.16: Normal quantile-quantile plot for paint data.]

Notice how the deviation from normality becomes clear from the appearance of the plot. The asymmetry exhibited in the data results in changes in the slope.

Normal Probability Plotting

The ideas of probability plotting are manifested in plots other than the normal quantile-quantile plot discussed here. For example, much attention is given to the so-called normal probability plot, in which f is plotted against the ordered data values on special paper where the scale used results in a straight line. In addition, an alternative plot makes use of the expected values of the ranked observations for the normal distribution and plots the ranked observations against their expected values, under the assumption of data from N(μ, σ). Once again, the straight line is the graphical yardstick used. We continue to suggest that the foundation in graphical analytical methods developed in this section will aid in understanding formal methods of distinguishing between distinct samples of data.
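The approximation q0,1(f) = 4.91[f^0.14 − (1 − f)^0.14] and the pairing in Definition 8.7 can be coded directly; a minimal sketch:

```python
def std_normal_quantile(f):
    """Approximate standard normal quantile q_{0,1}(f), from the text."""
    return 4.91 * (f ** 0.14 - (1 - f) ** 0.14)

def normal_qq_points(data):
    """Pairs (q_{0,1}(f_i), y_(i)) for the normal quantile-quantile plot,
    with f_i = (i - 3/8) / (n + 1/4) as in Definition 8.7."""
    y = sorted(data)
    n = len(y)
    return [(std_normal_quantile((i - 0.375) / (n + 0.25)), y[i - 1])
            for i in range(1, n + 1)]

print(round(std_normal_quantile(0.5), 4))    # 0.0 by symmetry
print(round(std_normal_quantile(0.975), 2))  # about 1.96, the familiar normal quantile
```

A nearly linear scatter of `normal_qq_points(data)` suggests normality, with intercept and slope estimating μ and σ.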
Example 8.12: Consider the data in Exercise 10.41 on page 358 in Chapter 10. In a study “Nutrient Retention and Macro Invertebrate Community Response to Sewage Stress in a Stream Ecosystem,” conducted in the Department of Zoology at the Virginia Polytechnic Institute and State University, data were collected on density measurements (number of organisms per square meter) at two different collecting stations. Details are given in Chapter 10 regarding analytical methods of comparing samples to determine if both are from the same N(μ, σ) distribution. The data are given in Table 8.1.

Table 8.1: Data for Example 8.12 (Number of Organisms per Square Meter)
Station 1: 5,030  13,700  10,730  11,400  860  2,200  4,250  15,040  4,980  11,910  8,130  26,850  17,660  22,800  1,130  1,690
Station 2: 2,800  4,670  6,890  7,720  7,030  7,330  2,810  1,330  3,320  1,230  2,130  2,190

Construct a normal quantile-quantile plot and draw conclusions regarding whether or not it is reasonable to assume that the two samples are from the same n(x; μ, σ) distribution.

[Figure 8.17: Normal quantile-quantile plot for the density data of Example 8.12, with Station 1 and Station 2 plotted separately.]
Solution: Figure 8.17 shows the normal quantile-quantile plot for the density measurements. The plot is far from a single straight line. In fact, the data from station 1 reflect a few values in the lower tail of the distribution and several in the upper tail. The “clustering” of observations would make it seem unlikely that the two samples came from a common N(μ, σ) distribution.

Although we have concentrated our development and illustration on probability plotting for the normal distribution, we could focus on any distribution. We would merely need to compute quantities analytically for the theoretical distribution in question.

Exercises

8.37 For a chi-squared distribution, find
(a) χ²0.025 when v = 15;
(b) χ²0.01 when v = 7;
(c) χ²0.05 when v = 24.

8.38 For a chi-squared distribution, find
(a) χ²0.005 when v = 5;
(b) χ²0.05 when v = 19;
(c) χ²0.01 when v = 12.

8.39 For a chi-squared distribution, find χ²α such that
(a) P(X² > χ²α) = 0.99 when v = 4;
(b) P(X² > χ²α) = 0.025 when v = 19;
(c) P(37.652 < X² < χ²α) = 0.045 when v = 25.

8.40 For a chi-squared distribution, find χ²α such that
(a) P(X² > χ²α) = 0.01 when v = 21;
(b) P(X² < χ²α) = 0.95 when v = 6;
(c) P(χ²α < X² < 23.209) = 0.015 when v = 10.

8.41 Assume the sample variances to be continuous measurements. Find the probability that a random sample of 25 observations, from a normal population with variance σ² = 6, will have a sample variance S²
(a) greater than 9.1;
(b) between 3.462 and 10.745.

8.42 The scores on a placement test given to college freshmen for the past five years are approximately normally distributed with a mean μ = 74 and a variance σ² = 8. Would you still consider σ² = 8 to be a valid value of the variance if a random sample of 20 students who take the placement test this year obtain a value of s² = 20?

8.43 Show that the variance of S² for random samples of size n from a normal population decreases as n becomes large.
[Hint: First find the variance of (n − 1)S²/σ².]

8.44 (a) Find t0.025 when v = 14.
(b) Find −t0.10 when v = 10.
(c) Find t0.995 when v = 7.

8.45 (a) Find P(T < 2.365) when v = 7.
(b) Find P(T > 1.318) when v = 24.
(c) Find P(−1.356 < T < 2.179) when v = 12.
(d) Find P(T > −2.567) when v = 17.

8.46 (a) Find P(−t0.005 < T < t0.01) for v = 20.
(b) Find P(T > −t0.025).

8.47 Given a random sample of size 24 from a normal distribution, find k such that
(a) P(−2.069 < T < k) = 0.965;
(b) P(k < T < 2.807) = 0.095;
(c) P(−k < T < k) = 0.90.

8.48 A manufacturing firm claims that the batteries used in their electronic games will last an average of 30 hours. To maintain this average, 16 batteries are tested each month. If the computed t-value falls between −t0.025 and t0.025, the firm is satisfied with its claim. What conclusion should the firm draw from a sample that has a mean of x̄ = 27.5 hours and a standard deviation of s = 5 hours? Assume the distribution of battery lives to be approximately normal.

8.49 A normal population with unknown variance has a mean of 20. Is one likely to obtain a random sample of size 9 from this population with a mean of 24 and a standard deviation of 4.1? If not, what conclusion would you draw?
8.50 A maker of a certain brand of low-fat cereal bars claims that the average saturated fat content is 0.5 gram. In a random sample of 8 cereal bars of this brand, the saturated fat content was 0.6, 0.7, 0.7, 0.3, 0.4, 0.5, 0.4, and 0.2. Would you agree with the claim? Assume a normal distribution.

8.51 For an F-distribution, find
(a) f0.05 with v1 = 7 and v2 = 15;
(b) f0.05 with v1 = 15 and v2 = 7;
(c) f0.01 with v1 = 24 and v2 = 19;
(d) f0.95 with v1 = 19 and v2 = 24;
(e) f0.99 with v1 = 28 and v2 = 12.

8.52 Pull-strength tests on 10 soldered leads for a semiconductor device yield the following results, in pounds of force required to rupture the bond:
19.8 12.7 13.2 16.9 10.6 18.8 11.1 14.3 17.0 12.5
Another set of 8 leads was tested after encapsulation to determine whether the pull strength had been increased by encapsulation of the device, with the following results:
24.9 22.8 23.6 22.1 20.4 21.6 21.8 22.5
Comment on the evidence available concerning equality of the two population variances.

8.53 Consider the following measurements of the heat-producing capacity of the coal produced by two mines (in millions of calories per ton):
Mine 1: 8260 8130 8350 8070 8340
Mine 2: 7950 7890 7900 8140 7920 7840
Can it be concluded that the two population variances are equal?
8.54 Construct a quantile plot of these data, which represent the lifetimes, in hours, of fifty 40-watt, 110-volt internally frosted incandescent lamps taken from forced life tests:
919 1196 785 1126 936 918 1156 920 948 1067
1092 1162 1170 929 950 905 972 1035 1045 855
1195 1195 1340 1122 938 970 1237 956 1102 1157
978 832 1009 1157 1151 1009 765 958 902 1022
1333 811 1217 1085 896 958 1311 1037 702 923

8.55 Construct a normal quantile-quantile plot of these data, which represent the diameters of 36 rivet heads in 1/100 of an inch:
6.72 6.77 6.82 6.70 6.78 6.70
6.62 6.75 6.66 6.66 6.64 6.76
6.73 6.80 6.72 6.76 6.76 6.68
6.66 6.62 6.72 6.76 6.70 6.78
6.76 6.67 6.70 6.72 6.74 6.81
6.79 6.78 6.66 6.76 6.76 6.72

Review Exercises

8.56 Consider the data displayed in Exercise 1.20 on page 31. Construct a box-and-whisker plot and comment on the nature of the sample. Compute the sample mean and sample standard deviation.

8.57 If X1, X2, . . . , Xn are independent random variables having identical exponential distributions with parameter θ, show that the density function of the random variable Y = X1 + X2 + · · · + Xn is that of a gamma distribution with parameters α = n and β = θ.

8.58 In testing for carbon monoxide in a certain brand of cigarette, the data, in milligrams per cigarette, were coded by subtracting 12 from each observation. Use the results of Exercise 8.14 on page 231 to find the standard deviation for the carbon monoxide content of a random sample of 15 cigarettes of this brand if the coded measurements are 3.8, −0.9, 5.4, 4.5, 5.2, 5.6, 2.7, −0.1, −0.3, −1.7, 5.7, 3.3, 4.4, −0.5, and 1.9.

8.59 If S1² and S2² represent the variances of independent random samples of size n1 = 8 and n2 = 12, taken from normal populations with equal variances, find P(S1²/S2² < 4.89).

8.60 A random sample of 5 bank presidents indicated annual salaries of $395,000, $521,000, $483,000, $479,000, and $510,000. Find the variance of this set.
8.61 If the number of hurricanes that hit a certain area of the eastern United States per year is a random variable having a Poisson distribution with μ = 6, find the probability that this area will be hit by
(a) exactly 15 hurricanes in 2 years;
(b) at most 9 hurricanes in 2 years.

8.62 A taxi company tests a random sample of 10 steel-belted radial tires of a certain brand and records the following tread wear: 48,000, 53,000, 45,000, 61,000, 59,000, 56,000, 63,000, 49,000, 53,000, and 54,000 kilometers. Use the results of Exercise 8.14 on page 231 to find the standard deviation of this set of
data by first dividing each observation by 1000 and then subtracting 55.

8.63 Consider the data of Exercise 1.19 on page 31. Construct a box-and-whisker plot. Comment. Compute the sample mean and sample standard deviation.

8.64 If S1² and S2² represent the variances of independent random samples of size n1 = 25 and n2 = 31, taken from normal populations with variances σ1² = 10 and σ2² = 15, respectively, find P(S1²/S2² > 1.26).

8.65 Consider Example 1.5 on page 25. Comment on any outliers.

8.66 Consider Review Exercise 8.56. Comment on any outliers in the data.

8.67 The breaking strength X of a certain rivet used in a machine engine has a mean 5000 psi and standard deviation 400 psi. A random sample of 36 rivets is taken. Consider the distribution of X̄, the sample mean breaking strength.
(a) What is the probability that the sample mean falls between 4800 psi and 5200 psi?
(b) What sample size n would be necessary in order to have P(4900 < X̄ < 5100) = 0.99?

8.68 Consider the situation of Review Exercise 8.62. If the population from which the sample was taken has population mean μ = 53,000 kilometers, does the sample information here seem to support that claim? In your answer, compute

t = (x̄ − 53,000)/(s/√10)

and determine from Table A.4 (with 9 d.f.) whether the computed t-value is reasonable or appears to be a rare event.

8.69 Two distinct solid fuel propellants, type A and type B, are being considered for a space program activity. Burning rates of the propellant are crucial. Random samples of 20 specimens of the two propellants are taken with sample means 20.5 cm/sec for propellant A and 24.50 cm/sec for propellant B. It is generally assumed that the variability in burning rate is roughly the same for the two propellants and is given by a population standard deviation of 5 cm/sec. Assume that the burning rates for each propellant are approximately normal and hence make use of the Central Limit Theorem.
Nothing is known about the two population mean burning rates, and it is hoped that this experiment might shed some light on them.
(a) If, indeed, μA = μB, what is P(X̄B − X̄A ≥ 4.0)?
(b) Use your answer in (a) to shed some light on the proposition that μA = μB.

8.70 The concentration of an active ingredient in the output of a chemical reaction is strongly influenced by the catalyst that is used in the reaction. It is felt that when catalyst A is used, the population mean concentration exceeds 65%. The standard deviation is known to be σ = 5%. A sample of outputs from 30 independent experiments gives the average concentration of x̄A = 64.5%.
(a) Does this sample information with an average concentration of x̄A = 64.5% provide disturbing information that perhaps μA is not 65%, but less than 65%? Support your answer with a probability statement.
(b) Suppose a similar experiment is done with the use of another catalyst, catalyst B. The standard deviation σ is still assumed to be 5% and x̄B turns out to be 70%. Comment on whether or not the sample information on catalyst B strongly suggests that μB is truly greater than μA. Support your answer by computing P(X̄B − X̄A ≥ 5.5 | μB = μA).
(c) Under the condition that μA = μB = 65%, give the approximate distribution of the following quantities (with mean and variance of each). Make use of the Central Limit Theorem.
(i) X̄B; (ii) X̄A − X̄B; (iii) (X̄A − X̄B)/(σ√(2/30)).

8.71 From the information in Review Exercise 8.70, compute (assuming μB = 65%) P(X̄B ≥ 70).

8.72 Given a normal random variable X with mean 20 and variance 9, and a random sample of size n taken from the distribution, what sample size n is necessary in order that P(19.9 ≤ X̄ ≤ 20.1) = 0.95?

8.73 In Chapter 9, the concept of parameter estimation will be discussed at length. Suppose X is a random variable with mean μ and variance σ² = 1.0. Suppose also that a random sample of size n is to be taken and x̄ is to be used as an estimate of μ.
When the data are taken and the sample mean is measured, we wish it to be within 0.05 unit of the true mean with probability 0.99. That is, we want there to be a good chance that the computed x̄ from the sample is “very
close” to the population mean (wherever it is!), so we wish

P(|X̄ − μ| < 0.05) = 0.99.

What sample size is required?

8.74 Suppose a filling machine is used to fill cartons with a liquid product. The specification that is strictly enforced for the filling machine is 9 ± 1.5 oz. If any carton is produced with weight outside these bounds, it is considered by the supplier to be defective. It is hoped that at least 99% of cartons will meet these specifications. With the conditions μ = 9 and σ = 1, what proportion of cartons from the process are defective? If changes are made to reduce variability, what must σ be reduced to in order to meet specifications with probability 0.99? Assume a normal distribution for the weight.

8.75 Consider the situation in Review Exercise 8.74. Suppose a considerable effort is conducted to “tighten” the variability in the system. Following the effort, a random sample of size 40 is taken from the new assembly line and the sample variance is s² = 0.188 ounces². Do we have strong numerical evidence that σ² has been reduced below 1.0? Consider the probability P(S² ≤ 0.188 | σ² = 1.0), and give your conclusion.

8.76 Group Project: The class should be divided into groups of four people. The four students in each group should go to the college gym or a local fitness center. The students should ask each person who comes through the door his or her height in inches. Each group will then divide the height data by gender and work together to answer the following questions.
(a) Construct a normal quantile-quantile plot of the data. Based on the plot, do the data appear to follow a normal distribution?
(b) Use the estimated sample variance as the true variance for each gender. Assume that the population mean height for male students is actually three inches larger than that of female students.
What is the probability that the average height of the male students will be 4 inches larger than that of the female students in your sample?
(c) What factors could render these results misleading?

8.9 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

The Central Limit Theorem is one of the most powerful tools in all of statistics, and even though this chapter is relatively short, it contains a wealth of fundamental information about tools that will be used throughout the balance of the text.

The notion of a sampling distribution is one of the most important fundamental concepts in all of statistics, and the student at this point in his or her training should gain a clear understanding of it before proceeding beyond this chapter. All chapters that follow will make considerable use of sampling distributions. Suppose one wants to use the statistic X̄ to draw inferences about the population mean μ. This will be done by using the observed value x̄ from a single sample of size n. Then any inference made must be accomplished by taking into account not just the single value but rather the theoretical structure, or distribution of all x̄ values that could be observed from samples of size n. Thus, the concept of a sampling distribution comes to the surface. This distribution is the basis for the Central Limit Theorem. The t, χ², and F-distributions are also used in the context of sampling distributions. For example, the t-distribution, pictured in Figure 8.8, represents the structure that occurs if all of the values of (x̄ − μ)/(s/√n) are formed, where x̄ and s are taken from samples of size n from a n(x; μ, σ) distribution. Similar remarks can be made about χ² and F, and the reader should not forget that the sample information forming the statistics for all of these distributions is the normal. So it can be said that where there is a t, F, or χ², the source was a sample from a normal distribution.
The three distributions described above may appear to have been introduced in a rather self-contained fashion with no indication of what they are about. However, they will appear in practical problem-solving throughout the balance of the text.

Now, there are three things that one must bear in mind, lest confusion set in regarding these fundamental sampling distributions:

(i) One cannot use the Central Limit Theorem unless σ is known. When σ is not known, it should be replaced by s, the sample standard deviation, in order to use the Central Limit Theorem.

(ii) The T statistic is not a result of the Central Limit Theorem, and x1, x2, . . . , xn must come from a n(x; μ, σ) distribution in order for (x̄ − μ)/(s/√n) to have a t-distribution; s is, of course, merely an estimate of σ.

(iii) While the notion of degrees of freedom is new at this point, the concept should be very intuitive, since it is reasonable that the nature of the distribution of S and also t should depend on the amount of information in the sample x1, x2, . . . , xn.
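The point in (ii) — that (x̄ − μ)/(s/√n) formed from normal samples follows a t-distribution, with heavier tails than the standard normal — can be illustrated by a small simulation; a sketch with an assumed setup (μ = 50, σ = 4, n = 9 are our choices, not from the text):

```python
import math
import random
import statistics

random.seed(1)
mu, sigma, n, reps = 50.0, 4.0, 9, 20_000

# Form the statistic (x̄ − μ)/(s/√n) for many samples of size n from N(μ, σ).
t_values = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    t_values.append((xbar - mu) / (s / math.sqrt(n)))

# A t-distribution with v = n - 1 = 8 d.f. has variance v/(v - 2) = 8/6 ≈ 1.33,
# noticeably larger than the variance 1 of a standard normal.
print(round(statistics.variance(t_values), 2))
```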
Chapter 9

One- and Two-Sample Estimation Problems

9.1 Introduction

In previous chapters, we emphasized sampling properties of the sample mean and variance. We also emphasized displays of data in various forms. The purpose of these presentations is to build a foundation that allows us to draw conclusions about the population parameters from experimental data. For example, the Central Limit Theorem provides information about the distribution of the sample mean X̄. The distribution involves the population mean μ. Thus, any conclusions concerning μ drawn from an observed sample average must depend on knowledge of this sampling distribution. Similar comments apply to S² and σ². Clearly, any conclusions we draw about the variance of a normal distribution will likely involve the sampling distribution of S².

In this chapter, we begin by formally outlining the purpose of statistical inference. We follow this by discussing the problem of estimation of population parameters. We confine our formal developments of specific estimation procedures to problems involving one and two samples.

9.2 Statistical Inference

In Chapter 1, we discussed the general philosophy of formal statistical inference. Statistical inference consists of those methods by which one makes inferences or generalizations about a population. The trend today is to distinguish between the classical method of estimating a population parameter, whereby inferences are based strictly on information obtained from a random sample selected from the population, and the Bayesian method, which utilizes prior subjective knowledge about the probability distribution of the unknown parameters in conjunction with the information provided by the sample data. Throughout most of this chapter, we shall use classical methods to estimate unknown population parameters such as the mean, the proportion, and the variance by computing statistics from random
  • 287. 266 Chapter 9 One- and Two-Sample Estimation Problems samples and applying the theory of sampling distributions, much of which was covered in Chapter 8. Bayesian estimation will be discussed in Chapter 18. Statistical inference may be divided into two major areas: estimation and tests of hypotheses. We treat these two areas separately, dealing with theory and applications of estimation in this chapter and hypothesis testing in Chapter 10. To distinguish clearly between the two areas, consider the following examples. A candidate for public office may wish to estimate the true proportion of voters favoring him by obtaining opinions from a random sample of 100 eligible voters. The fraction of voters in the sample favoring the candidate could be used as an estimate of the true proportion in the population of voters. A knowledge of the sampling distribution of a proportion enables one to establish the degree of accuracy of such an estimate. This problem falls in the area of estimation. Now consider the case in which one is interested in finding out whether brand A floor wax is more scuff-resistant than brand B floor wax. He or she might hypothesize that brand A is better than brand B and, after proper testing, accept or reject this hypothesis. In this example, we do not attempt to estimate a parameter, but instead we try to arrive at a correct decision about a prestated hypothesis. Once again we are dependent on sampling theory and the use of data to provide us with some measure of accuracy for our decision. 9.3 Classical Methods of Estimation A point estimate of some population parameter θ is a single value θ̂ of a statistic Θ̂. For example, the value x̄ of the statistic X̄, computed from a sample of size n, is a point estimate of the population parameter μ. Similarly, p̂ = x/n is a point estimate of the true proportion p for a binomial experiment. An estimator is not expected to estimate the population parameter without error. 
We do not expect X̄ to estimate μ exactly, but we certainly hope that it is not far off. For a particular sample, it is possible to obtain a closer estimate of μ by using the sample median X̃ as an estimator. Consider, for instance, a sample consisting of the values 2, 5, and 11 from a population whose mean is 4 but is supposedly unknown. We would estimate μ to be x̄ = 6, using the sample mean as our estimate, or x̃ = 5, using the sample median as our estimate. In this case, the estimator X̃ produces an estimate closer to the true parameter than does the estimator X̄. On the other hand, if our random sample contains the values 2, 6, and 7, then x̄ = 5 and x̃ = 6, so X̄ is the better estimator. Not knowing the true value of μ, we must decide in advance whether to use X̄ or X̃ as our estimator. Unbiased Estimator What are the desirable properties of a “good” decision function that would influ- ence us to choose one estimator rather than another? Let Θ̂ be an estimator whose value θ̂ is a point estimate of some unknown population parameter θ. Certainly, we would like the sampling distribution of Θ̂ to have a mean equal to the parameter estimated. An estimator possessing this property is said to be unbiased.
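The mean-versus-median comparison in the two small samples above can be reproduced directly; this is a minimal sketch using Python's standard library (the data are exactly the values from the text).

```python
from statistics import mean, median

# First sample from the text: the population mean is 4 (treated as unknown).
sample1 = [2, 5, 11]
print(mean(sample1), median(sample1))  # x-bar = 6, x-tilde = 5 (median closer to 4)

# Second sample: here the sample mean is the closer estimate.
sample2 = [2, 6, 7]
print(mean(sample2), median(sample2))  # x-bar = 5, x-tilde = 6
```

As the text notes, neither estimator wins for every sample, which is why the choice must be made in advance on distributional grounds.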
  • 288. 9.3 Classical Methods of Estimation 267 Definition 9.1: A statistic Θ̂ is said to be an unbiased estimator of the parameter θ if μΘ̂ = E(Θ̂) = θ. Example 9.1: Show that S2 is an unbiased estimator of the parameter σ2 . Solution: In Section 8.5 on page 244, we showed that n i=1 (Xi − X̄)2 = n i=1 (Xi − μ)2 − n(X̄ − μ)2 . Now E(S2 ) = E 1 n − 1 n i=1 (Xi − X̄)2 = 1 n − 1 n i=1 E(Xi − μ)2 − nE(X̄ − μ)2 = 1 n − 1 n i=1 σ2 Xi − nσ2 X̄ . However, σ2 Xi = σ2 , for i = 1, 2, . . . , n, and σ2 X̄ = σ2 n . Therefore, E(S2 ) = 1 n − 1 nσ2 − n σ2 n = σ2 . Although S2 is an unbiased estimator of σ2 , S, on the other hand, is usually a biased estimator of σ, with the bias becoming insignificant for large samples. This example illustrates why we divide by n − 1 rather than n when the variance is estimated. Variance of a Point Estimator If Θ̂1 and Θ̂2 are two unbiased estimators of the same population parameter θ, we want to choose the estimator whose sampling distribution has the smaller variance. Hence, if σ2 θ̂1 σ2 θ̂2 , we say that Θ̂1 is a more efficient estimator of θ than Θ̂2. Definition 9.2: If we consider all possible unbiased estimators of some parameter θ, the one with the smallest variance is called the most efficient estimator of θ. Figure 9.1 illustrates the sampling distributions of three different estimators, Θ̂1, Θ̂2, and Θ̂3, all estimating θ. It is clear that only Θ̂1 and Θ̂2 are unbiased, since their distributions are centered at θ. The estimator Θ̂1 has a smaller variance than Θ̂2 and is therefore more efficient. Hence, our choice for an estimator of θ, among the three considered, would be Θ̂1. For normal populations, one can show that both X̄ and X̃ are unbiased estima- tors of the population mean μ, but the variance of X̄ is smaller than the variance
  • 289. 268 Chapter 9 One- and Two-Sample Estimation Problems θ ^ θ 2 ^ 1 ^ 3 ^ Figure 9.1: Sampling distributions of different estimators of θ. of X̃. Thus, both estimates x̄ and x̃ will, on average, equal the population mean μ, but x̄ is likely to be closer to μ for a given sample, and thus X̄ is more efficient than X̃. Interval Estimation Even the most efficient unbiased estimator is unlikely to estimate the population parameter exactly. It is true that estimation accuracy increases with large samples, but there is still no reason we should expect a point estimate from a given sample to be exactly equal to the population parameter it is supposed to estimate. There are many situations in which it is preferable to determine an interval within which we would expect to find the value of the parameter. Such an interval is called an interval estimate. An interval estimate of a population parameter θ is an interval of the form θ̂L θ θ̂U , where θ̂L and θ̂U depend on the value of the statistic Θ̂ for a particular sample and also on the sampling distribution of Θ̂. For example, a random sample of SAT verbal scores for students in the entering freshman class might produce an interval from 530 to 550, within which we expect to find the true average of all SAT verbal scores for the freshman class. The values of the endpoints, 530 and 550, will depend on the computed sample mean x̄ and the sampling distribution of X̄. As the sample size increases, we know that σ2 X̄ = σ2 /n decreases, and consequently our estimate is likely to be closer to the parameter μ, resulting in a shorter interval. Thus, the interval estimate indicates, by its length, the accuracy of the point estimate. An engineer will gain some insight into the population proportion defective by taking a sample and computing the sample proportion defective. But an interval estimate might be more informative. 
Interpretation of Interval Estimates Since different samples will generally yield different values of Θ̂ and, therefore, different values for θ̂L and θ̂U, these endpoints of the interval are values of corresponding random variables Θ̂L and Θ̂U. From the sampling distribution of Θ̂ we shall be able to determine Θ̂L and Θ̂U such that P(Θ̂L < θ < Θ̂U) is equal to any
  • 290. 9.4 Single Sample: Estimating the Mean 269 positive fractional value we care to specify. If, for instance, we find Θ̂L and Θ̂U such that P(Θ̂L θ Θ̂U ) = 1 − α, for 0 α 1, then we have a probability of 1−α of selecting a random sample that will produce an interval containing θ. The interval θ̂L θ θ̂U , computed from the selected sample, is called a 100(1 − α)% confidence interval, the fraction 1 − α is called the confidence coefficient or the degree of confidence, and the endpoints, θ̂L and θ̂U , are called the lower and upper confidence limits. Thus, when α = 0.05, we have a 95% confidence interval, and when α = 0.01, we obtain a wider 99% confidence interval. The wider the confidence interval is, the more confident we can be that the interval contains the unknown parameter. Of course, it is better to be 95% confident that the average life of a certain television transistor is between 6 and 7 years than to be 99% confident that it is between 3 and 10 years. Ideally, we prefer a short interval with a high degree of confidence. Sometimes, restrictions on the size of our sample prevent us from achieving short intervals without sacrificing some degree of confidence. In the sections that follow, we pursue the notions of point and interval esti- mation, with each section presenting a different special case. The reader should notice that while point and interval estimation represent different approaches to gaining information regarding a parameter, they are related in the sense that con- fidence interval estimators are based on point estimators. In the following section, for example, we will see that X̄ is a very reasonable point estimator of μ. As a result, the important confidence interval estimator of μ depends on knowledge of the sampling distribution of X̄. We begin the following section with the simplest case of a confidence interval. The scenario is simple and yet unrealistic. We are interested in estimating a popu- lation mean μ and yet σ is known. 
Clearly, if μ is unknown, it is quite unlikely that σ is known. Any historical results that produced enough information to allow the assumption that σ is known would likely have produced similar information about μ. Despite this argument, we begin with this case because the concepts and indeed the resulting mechanics associated with confidence interval estimation remain the same for the more realistic situations presented later in Section 9.4 and beyond. 9.4 Single Sample: Estimating the Mean The sampling distribution of X̄ is centered at μ, and in most applications the variance is smaller than that of any other estimators of μ. Thus, the sample mean x̄ will be used as a point estimate for the population mean μ. Recall that σ2 X̄ = σ2 /n, so a large sample will yield a value of X̄ that comes from a sampling distribution with a small variance. Hence, x̄ is likely to be a very accurate estimate of μ when n is large. Let us now consider the interval estimate of μ. If our sample is selected from a normal population or, failing this, if n is sufficiently large, we can establish a confidence interval for μ by considering the sampling distribution of X̄. According to the Central Limit Theorem, we can expect the sampling distri- bution of X̄ to be approximately normally distributed with mean μX̄ = μ and
  • 291. 270 Chapter 9 One- and Two-Sample Estimation Problems standard deviation σX̄ = σ/√n. Writing zα/2 for the z-value above which we find an area of α/2 under the normal curve, we can see from Figure 9.2 that P(−zα/2 < Z < zα/2) = 1 − α, where Z = (X̄ − μ)/(σ/√n). Hence, P(−zα/2 < (X̄ − μ)/(σ/√n) < zα/2) = 1 − α. Figure 9.2: P(−zα/2 < Z < zα/2) = 1 − α. Multiplying each term in the inequality by σ/√n and then subtracting X̄ from each term and multiplying by −1 (reversing the sense of the inequalities), we obtain P(X̄ − zα/2 σ/√n < μ < X̄ + zα/2 σ/√n) = 1 − α. A random sample of size n is selected from a population whose variance σ2 is known, and the mean x̄ is computed to give the 100(1 − α)% confidence interval below. It is important to emphasize that we have invoked the Central Limit Theorem above. As a result, it is important to note the conditions for applications that follow. Confidence Interval on μ, σ2 Known: If x̄ is the mean of a random sample of size n from a population with known variance σ2, a 100(1 − α)% confidence interval for μ is given by x̄ − zα/2 σ/√n < μ < x̄ + zα/2 σ/√n, where zα/2 is the z-value leaving an area of α/2 to the right. For small samples selected from nonnormal populations, we cannot expect our degree of confidence to be accurate. However, for samples of size n ≥ 30, with
  • 292. 9.4 Single Sample: Estimating the Mean 271 the shape of the distributions not too skewed, sampling theory guarantees good results. Clearly, the values of the random variables Θ̂L and Θ̂U , defined in Section 9.3, are the confidence limits θ̂L = x̄ − zα/2 σ √ n and θ̂U = x̄ + zα/2 σ √ n . Different samples will yield different values of x̄ and therefore produce different interval estimates of the parameter μ, as shown in Figure 9.3. The dot at the center of each interval indicates the position of the point estimate x̄ for that random sample. Note that all of these intervals are of the same width, since their widths depend only on the choice of zα/2 once x̄ is determined. The larger the value we choose for zα/2, the wider we make all the intervals and the more confident we can be that the particular sample selected will produce an interval that contains the unknown parameter μ. In general, for a selection of zα/2, 100(1 − α)% of the intervals will cover μ. 1 2 3 4 5 6 7 8 9 10 μ x Sample Figure 9.3: Interval estimates of μ for different samples. Example 9.2: The average zinc concentration recovered from a sample of measurements taken in 36 different locations in a river is found to be 2.6 grams per milliliter. Find the 95% and 99% confidence intervals for the mean zinc concentration in the river. Assume that the population standard deviation is 0.3 gram per milliliter. Solution: The point estimate of μ is x̄ = 2.6. The z-value leaving an area of 0.025 to the right, and therefore an area of 0.975 to the left, is z0.025 = 1.96 (Table A.3). Hence, the 95% confidence interval is 2.6 − (1.96) 0.3 √ 36 μ 2.6 + (1.96) 0.3 √ 36 ,
  • 293. 272 Chapter 9 One- and Two-Sample Estimation Problems which reduces to 2.50 μ 2.70. To find a 99% confidence interval, we find the z-value leaving an area of 0.005 to the right and 0.995 to the left. From Table A.3 again, z0.005 = 2.575, and the 99% confidence interval is 2.6 − (2.575) 0.3 √ 36 μ 2.6 + (2.575) 0.3 √ 36 , or simply 2.47 μ 2.73. We now see that a longer interval is required to estimate μ with a higher degree of confidence. The 100(1−α)% confidence interval provides an estimate of the accuracy of our point estimate. If μ is actually the center value of the interval, then x̄ estimates μ without error. Most of the time, however, x̄ will not be exactly equal to μ and the point estimate will be in error. The size of this error will be the absolute value of the difference between μ and x̄, and we can be 100(1 − α)% confident that this difference will not exceed zα/2 σ √ n . We can readily see this if we draw a diagram of a hypothetical confidence interval, as in Figure 9.4. x μ Error x z σ σ n x z n /2 α /2 α / / Figure 9.4: Error in estimating μ by x̄. Theorem 9.1: If x̄ is used as an estimate of μ, we can be 100(1 − α)% confident that the error will not exceed zα/2 σ √ n . In Example 9.2, we are 95% confident that the sample mean x̄ = 2.6 differs from the true mean μ by an amount less than (1.96)(0.3)/ √ 36 = 0.1 and 99% confident that the difference is less than (2.575)(0.3)/ √ 36 = 0.13. Frequently, we wish to know how large a sample is necessary to ensure that the error in estimating μ will be less than a specified amount e. By Theorem 9.1, we must choose n such that zα/2 σ √ n = e. Solving this equation gives the following formula for n. Theorem 9.2: If x̄ is used as an estimate of μ, we can be 100(1 − α)% confident that the error will not exceed a specified amount e when the sample size is n = #zα/2σ e $2 . When solving for the sample size, n, we round all fractional values up to the next whole number. 
By adhering to this principle, we can be sure that our degree of confidence never falls below 100(1 − α)%.
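The boxed interval and the sample-size formula of Theorem 9.2 can be sketched with Python's standard library, checked against the zinc-concentration numbers of Examples 9.2 and 9.3. The function names are ours, not from the text; `NormalDist().inv_cdf` supplies the z-value in place of Table A.3.

```python
import math
from statistics import NormalDist

def z_interval(xbar, sigma, n, conf):
    """100*conf % confidence interval for mu with sigma known."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # z_{alpha/2}
    half = z * sigma / math.sqrt(n)
    return xbar - half, xbar + half

def sample_size(sigma, e, conf):
    """Theorem 9.2: smallest n so the error does not exceed e (round up)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil((z * sigma / e) ** 2)

# Example 9.2: x-bar = 2.6 g/ml, sigma = 0.3, n = 36, 95% confidence
lo, hi = z_interval(2.6, 0.3, 36, 0.95)
print(round(lo, 2), round(hi, 2))     # 2.5 2.7

# Example 9.3: error below e = 0.05 with 95% confidence
print(sample_size(0.3, 0.05, 0.95))   # 139
```

Note that `math.ceil` implements the rounding-up rule stated above, so the achieved confidence never falls below the nominal level.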
  • 294. 9.4 Single Sample: Estimating the Mean 273 Strictly speaking, the formula in Theorem 9.2 is applicable only if we know the variance of the population from which we select our sample. Lacking this information, we could take a preliminary sample of size n ≥ 30 to provide an estimate of σ. Then, using s as an approximation for σ in Theorem 9.2, we could determine approximately how many observations are needed to provide the desired degree of accuracy. Example 9.3: How large a sample is required if we want to be 95% confident that our estimate of μ in Example 9.2 is off by less than 0.05? Solution: The population standard deviation is σ = 0.3. Then, by Theorem 9.2, n = (1.96)(0.3) 0.05 2 = 138.3. Therefore, we can be 95% confident that a random sample of size 139 will provide an estimate x̄ differing from μ by an amount less than 0.05. One-Sided Confidence Bounds The confidence intervals and resulting confidence bounds discussed thus far are two-sided (i.e., both upper and lower bounds are given). However, there are many applications in which only one bound is sought. For example, if the measurement of interest is tensile strength, the engineer receives better information from a lower bound only. This bound communicates the worst-case scenario. On the other hand, if the measurement is something for which a relatively large value of μ is not profitable or desirable, then an upper confidence bound is of interest. An example would be a case in which inferences need to be made concerning the mean mercury composition in a river. An upper bound is very informative in this case. One-sided confidence bounds are developed in the same fashion as two-sided intervals. However, the source is a one-sided probability statement that makes use of the Central Limit Theorem: P X̄ − μ σ/ √ n zα = 1 − α. One can then manipulate the probability statement much as before and obtain P(μ X̄ − zασ/ √ n) = 1 − α. 
Similar manipulation of P((X̄ − μ)/(σ/√n) > −zα) = 1 − α gives P(μ < X̄ + zασ/√n) = 1 − α. As a result, the upper and lower one-sided bounds follow. One-Sided Confidence Bounds on μ, σ2 Known: If X̄ is the mean of a random sample of size n from a population with variance σ2, the one-sided 100(1 − α)% confidence bounds for μ are given by upper one-sided bound: x̄ + zασ/√n; lower one-sided bound: x̄ − zασ/√n.
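A minimal sketch of the two one-sided bounds follows; the function name and the illustrative values (x̄ = 50, σ = 5, n = 100) are ours and not taken from the text.

```python
from statistics import NormalDist

def one_sided_bounds(xbar, sigma, n, conf):
    """Upper and lower one-sided 100*conf % confidence bounds for mu."""
    z = NormalDist().inv_cdf(conf)     # z_alpha, not z_{alpha/2}
    shift = z * sigma / n ** 0.5
    return xbar + shift, xbar - shift  # (upper bound, lower bound)

# Illustrative values only: x-bar = 50, sigma = 5, n = 100, 95% confidence
upper, lower = one_sided_bounds(50, 5, 100, 0.95)
print(round(upper, 2), round(lower, 2))  # 50.82 49.18
```

The only change from the two-sided case is that all of α, rather than α/2, sits in a single tail.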
  • 295. 274 Chapter 9 One- and Two-Sample Estimation Problems Example 9.4: In a psychological testing experiment, 25 subjects are selected randomly and their reaction time, in seconds, to a particular stimulus is measured. Past experience suggests that the variance in reaction times to these types of stimuli is 4 sec2 and that the distribution of reaction times is approximately normal. The average time for the subjects is 6.2 seconds. Give an upper 95% bound for the mean reaction time. Solution: The upper 95% bound is given by x̄ + zασ/ √ n = 6.2 + (1.645) 4/25 = 6.2 + 0.658 = 6.858 seconds. Hence, we are 95% confident that the mean reaction time is less than 6.858 seconds. The Case of σ Unknown Frequently, we must attempt to estimate the mean of a population when the vari- ance is unknown. The reader should recall learning in Chapter 8 that if we have a random sample from a normal distribution, then the random variable T = X̄ − μ S/ √ n has a Student t-distribution with n − 1 degrees of freedom. Here S is the sample standard deviation. In this situation, with σ unknown, T can be used to construct a confidence interval on μ. The procedure is the same as that with σ known except that σ is replaced by S and the standard normal distribution is replaced by the t-distribution. Referring to Figure 9.5, we can assert that P(−tα/2 T tα/2) = 1 − α, where tα/2 is the t-value with n−1 degrees of freedom, above which we find an area of α/2. Because of symmetry, an equal area of α/2 will fall to the left of −tα/2. Substituting for T, we write P −tα/2 X̄ − μ S/ √ n tα/2 = 1 − α. Multiplying each term in the inequality by S/ √ n, and then subtracting X̄ from each term and multiplying by −1, we obtain P X̄ − tα/2 S √ n μ X̄ + tα/2 S √ n = 1 − α. For a particular random sample of size n, the mean x̄ and standard deviation s are computed and the following 100(1 − α)% confidence interval for μ is obtained.
  • 296. 9.4 Single Sample: Estimating the Mean 275 Figure 9.5: P(−tα/2 < T < tα/2) = 1 − α. Confidence Interval on μ, σ2 Unknown: If x̄ and s are the mean and standard deviation of a random sample from a normal population with unknown variance σ2, a 100(1 − α)% confidence interval for μ is x̄ − tα/2 s/√n < μ < x̄ + tα/2 s/√n, where tα/2 is the t-value with v = n − 1 degrees of freedom, leaving an area of α/2 to the right. We have made a distinction between the cases of σ known and σ unknown in computing confidence interval estimates. We should emphasize that for σ known we exploited the Central Limit Theorem, whereas for σ unknown we made use of the sampling distribution of the random variable T. However, the use of the t-distribution is based on the premise that the sampling is from a normal distribution. As long as the distribution is approximately bell shaped, confidence intervals can be computed when σ2 is unknown by using the t-distribution and we may expect very good results. Computed one-sided confidence bounds for μ with σ unknown are as the reader would expect, namely x̄ + tα s/√n and x̄ − tα s/√n. They are the upper and lower 100(1 − α)% bounds, respectively. Here tα is the t-value having an area of α to the right. Example 9.5: The contents of seven similar containers of sulfuric acid are 9.8, 10.2, 10.4, 9.8, 10.0, 10.2, and 9.6 liters. Find a 95% confidence interval for the mean contents of all such containers, assuming an approximately normal distribution. Solution: The sample mean and standard deviation for the given data are x̄ = 10.0 and s = 0.283. Using Table A.4, we find t0.025 = 2.447 for v = 6 degrees of freedom. Hence, the
  • 297. 276 Chapter 9 One- and Two-Sample Estimation Problems 95% confidence interval for μ is 10.0 − (2.447) 0.283 √ 7 μ 10.0 + (2.447) 0.283 √ 7 , which reduces to 9.74 μ 10.26. Concept of a Large-Sample Confidence Interval Often statisticians recommend that even when normality cannot be assumed, σ is unknown, and n ≥ 30, s can replace σ and the confidence interval x̄ ± zα/2 s √ n may be used. This is often referred to as a large-sample confidence interval. The justification lies only in the presumption that with a sample as large as 30 and the population distribution not too skewed, s will be very close to the true σ and thus the Central Limit Theorem prevails. It should be emphasized that this is only an approximation and the quality of the result becomes better as the sample size grows larger. Example 9.6: Scholastic Aptitude Test (SAT) mathematics scores of a random sample of 500 high school seniors in the state of Texas are collected, and the sample mean and standard deviation are found to be 501 and 112, respectively. Find a 99% confidence interval on the mean SAT mathematics score for seniors in the state of Texas. Solution: Since the sample size is large, it is reasonable to use the normal approximation. Using Table A.3, we find z0.005 = 2.575. Hence, a 99% confidence interval for μ is 501 ± (2.575) 112 √ 500 = 501 ± 12.9, which yields 488.1 μ 513.9. 9.5 Standard Error of a Point Estimate We have made a rather sharp distinction between the goal of a point estimate and that of a confidence interval estimate. The former supplies a single number extracted from a set of experimental data, and the latter provides an interval that is reasonable for the parameter, given the experimental data; that is, 100(1 − α)% of such computed intervals “cover” the parameter. These two approaches to estimation are related to each other. The common thread is the sampling distribution of the point estimator. Consider, for example, the estimator X̄ of μ with σ known. 
We indicated earlier that a measure of the quality of an unbiased estimator is its variance. The variance of X̄ is σ2 X̄ = σ2/n.
  • 298. 9.6 Prediction Intervals 277 Thus, the standard deviation of X̄, or standard error of X̄, is σ/ √ n. Simply put, the standard error of an estimator is its standard deviation. For X̄, the computed confidence limit x̄ ± zα/2 σ √ n is written as x̄ ± zα/2 s.e.(x̄), where “s.e.” is the “standard error.” The important point is that the width of the confidence interval on μ is dependent on the quality of the point estimator through its standard error. In the case where σ is unknown and sampling is from a normal distribution, s replaces σ and the estimated standard error s/ √ n is involved. Thus, the confidence limits on μ are Confidence Limits on μ, σ2 Unknown x̄ ± tα/2 s √ n = x̄ ± tα/2 s.e.(x̄) Again, the confidence interval is no better (in terms of width) than the quality of the point estimate, in this case through its estimated standard error. Computer packages often refer to estimated standard errors simply as “standard errors.” As we move to more complex confidence intervals, there is a prevailing notion that widths of confidence intervals become shorter as the quality of the correspond- ing point estimate becomes better, although it is not always quite as simple as we have illustrated here. It can be argued that a confidence interval is merely an augmentation of the point estimate to take into account the precision of the point estimate. 9.6 Prediction Intervals The point and interval estimations of the mean in Sections 9.4 and 9.5 provide good information about the unknown parameter μ of a normal distribution or a nonnormal distribution from which a large sample is drawn. Sometimes, other than the population mean, the experimenter may also be interested in predicting the possible value of a future observation. For instance, in quality control, the experimenter may need to use the observed data to predict a new observation. A process that produces a metal part may be evaluated on the basis of whether the part meets specifications on tensile strength. 
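Examples 9.5 and 9.6, together with the standard-error framing just described, can be sketched as follows. Since Python's standard library has no t-distribution quantile function, the Table A.4 value t0.025 = 2.447 (v = 6) is hardcoded; `NormalDist().inv_cdf` replaces Table A.3 for the large-sample case.

```python
from statistics import NormalDist, mean, stdev

# Example 9.5: sulfuric acid contents (liters), sigma unknown.
data = [9.8, 10.2, 10.4, 9.8, 10.0, 10.2, 9.6]
xbar, s, n = mean(data), stdev(data), len(data)
t = 2.447                 # t_{0.025}, v = n - 1 = 6 (Table A.4)
se = s / n ** 0.5         # estimated standard error of x-bar
print(round(xbar - t * se, 2), round(xbar + t * se, 2))  # 9.74 10.26

# Example 9.6: large-sample interval (n = 500), z used in place of t.
z = NormalDist().inv_cdf(0.995)                          # z_{0.005}
half = z * 112 / 500 ** 0.5
print(round(501 - half, 1), round(501 + half, 1))        # 488.1 513.9
```

In both cases the confidence limits are the point estimate plus or minus a tabled multiplier times the (estimated) standard error, which is exactly the x̄ ± tα/2 s.e.(x̄) form above.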
On certain occasions, a customer may be interested in purchasing a single part. In this case, a confidence interval on the mean tensile strength does not capture the required information. The customer requires a statement regarding the uncertainty of a single observation. This type of requirement is nicely fulfilled by the construction of a prediction interval. It is quite simple to obtain a prediction interval for the situations we have considered so far. Assume that the random sample comes from a normal population with unknown mean μ and known variance σ2 . A natural point estimator of a new observation is X̄. It is known, from Section 8.4, that the variance of X̄ is σ2 /n. However, to predict a new observation, not only do we need to account for the variation due to estimating the mean, but also we should account for the variation of a future observation. From the assumption, we know that the variance of the random error in a new observation is σ2 . The development of a
  • 299. 278 Chapter 9 One- and Two-Sample Estimation Problems prediction interval is best illustrated by beginning with a normal random variable x0 − x̄, where x0 is the new observation and x̄ comes from the sample. Since x0 and x̄ are independent, we know that z = x0 − x̄ σ2 + σ2/n = x0 − x̄ σ 1 + 1/n is n(z; 0, 1). As a result, if we use the probability statement P(−zα/2 Z zα/2) = 1 − α with the z-statistic above and place x0 in the center of the probability statement, we have the following event occurring with probability 1 − α: x̄ − zα/2σ 1 + 1/n x0 x̄ + zα/2σ 1 + 1/n. As a result, computation of the prediction interval is formalized as follows. Prediction Interval of a Future Observation, σ2 Known For a normal distribution of measurements with unknown mean μ and known variance σ2 , a 100(1 − α)% prediction interval of a future observation x0 is x̄ − zα/2σ 1 + 1/n x0 x̄ + zα/2σ 1 + 1/n, where zα/2 is the z-value leaving an area of α/2 to the right. Example 9.7: Due to the decrease in interest rates, the First Citizens Bank received a lot of mortgage applications. A recent sample of 50 mortgage loans resulted in an average loan amount of $257,300. Assume a population standard deviation of $25,000. For the next customer who fills out a mortgage application, find a 95% prediction interval for the loan amount. Solution: The point prediction of the next customer’s loan amount is x̄ = $257, 300. The z-value here is z0.025 = 1.96. Hence, a 95% prediction interval for the future loan amount is 257, 300 − (1.96)(25, 000) 1 + 1/50 x0 257, 300 + (1.96)(25, 000) 1 + 1/50, which gives the interval ($207,812.43, $306,787.57). The prediction interval provides a good estimate of the location of a future observation, which is quite different from the estimate of the sample mean value. It should be noted that the variation of this prediction is the sum of the variation due to an estimation of the mean and the variation of a single observation. 
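The mortgage computation of Example 9.7 can be sketched directly; z = 1.96 is the Table A.3 value used in the text (using a more precise quantile would shift the endpoints by about a dollar).

```python
# Example 9.7: 95% prediction interval for the next loan amount, sigma known.
z, xbar, sigma, n = 1.96, 257_300, 25_000, 50
half = z * sigma * (1 + 1 / n) ** 0.5   # extra "1 +" term: variation of
                                        # a single future observation
print(round(xbar - half, 2), round(xbar + half, 2))  # 207812.43 306787.57
```

Compare the √(1 + 1/n) factor with the √(1/n) factor of a confidence interval on μ: the interval is much wider because it must cover one observation, not a mean.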
As in the past, we first considered the case with known variance. It is also important to deal with the prediction interval of a future observation in the situation where the variance is unknown. Indeed, a Student t-distribution may be used in this case, as described in the following result. The normal distribution is merely replaced by the t-distribution.
  • 300. 9.6 Prediction Intervals 279 Prediction Interval of a Future Observation, σ2 Unknown For a normal distribution of measurements with unknown mean μ and unknown variance σ2 , a 100(1 − α)% prediction interval of a future observation x0 is x̄ − tα/2s 1 + 1/n x0 x̄ + tα/2s 1 + 1/n, where tα/2 is the t-value with v = n − 1 degrees of freedom, leaving an area of α/2 to the right. One-sided prediction intervals can also be constructed. Upper prediction bounds apply in cases where focus must be placed on future large observations. Concern over future small observations calls for the use of lower prediction bounds. The upper bound is given by x̄ + tαs 1 + 1/n and the lower bound by x̄ − tαs 1 + 1/n. Example 9.8: A meat inspector has randomly selected 30 packs of 95% lean beef. The sample resulted in a mean of 96.2% with a sample standard deviation of 0.8%. Find a 99% prediction interval for the leanness of a new pack. Assume normality. Solution: For v = 29 degrees of freedom, t0.005 = 2.756. Hence, a 99% prediction interval for a new observation x0 is 96.2 − (2.756)(0.8) 1 + 1 30 x0 96.2 + (2.756)(0.8) 1 + 1 30 , which reduces to (93.96, 98.44). Use of Prediction Limits for Outlier Detection To this point in the text very little attention has been paid to the concept of outliers, or aberrant observations. The majority of scientific investigators are keenly sensitive to the existence of outlying observations or so-called faulty or “bad data.” We deal with the concept of outlier detection extensively in Chapter 12. However, it is certainly of interest here since there is an important relationship between outlier detection and prediction intervals. It is convenient for our purposes to view an outlying observation as one that comes from a population with a mean that is different from the mean that governs the rest of the sample of size n being studied. 
The prediction interval produces a bound that “covers” a future single observation with probability 1 − α if it comes from the population from which the sample was drawn. As a result, a methodology for outlier detection involves the rule that an observation is an outlier if it falls outside the prediction interval computed without including the questionable observation in the sample. Thus, for the prediction interval of Example 9.8, if a new pack of beef is measured and its leanness is outside the interval (93.96, 98.44), that observation can be viewed as an outlier.
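The Example 9.8 prediction interval and the outlier rule just stated can be sketched together. As before, the Table A.4 value t0.005 = 2.756 (v = 29) is hardcoded because the standard library lacks t quantiles; the helper name `is_outlier` is ours.

```python
# Example 9.8: 99% prediction interval for a new pack of lean beef,
# sigma unknown, n = 30, x-bar = 96.2%, s = 0.8%.
t, xbar, s, n = 2.756, 96.2, 0.8, 30
half = t * s * (1 + 1 / n) ** 0.5
lo, hi = xbar - half, xbar + half
print(round(lo, 2), round(hi, 2))  # 93.96 98.44

def is_outlier(x0):
    """Rule from the text: flag x0 if it falls outside the prediction
    interval computed without including it in the sample."""
    return not (lo < x0 < hi)

print(is_outlier(95.0), is_outlier(99.1))  # False True
```

A pack measuring, say, 99.1% lean would be flagged for investigation, while 95.0% would not.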
  • 301. 280 Chapter 9 One- and Two-Sample Estimation Problems 9.7 Tolerance Limits As discussed in Section 9.6, the scientist or engineer may be less interested in esti- mating parameters than in gaining a notion about where an individual observation or measurement might fall. Such situations call for the use of prediction intervals. However, there is yet a third type of interval that is of interest in many applica- tions. Once again, suppose that interest centers around the manufacturing of a component part and specifications exist on a dimension of that part. In addition, there is little concern about the mean of the dimension. But unlike in the scenario in Section 9.6, one may be less interested in a single observation and more inter- ested in where the majority of the population falls. If process specifications are important, the manager of the process is concerned about long-range performance, not the next observation. One must attempt to determine bounds that, in some probabilistic sense, “cover” values in the population (i.e., the measured values of the dimension). One method of establishing the desired bounds is to determine a confidence interval on a fixed proportion of the measurements. This is best motivated by visualizing a situation in which we are doing random sampling from a normal distribution with known mean μ and variance σ2 . Clearly, a bound that covers the middle 95% of the population of observations is μ ± 1.96σ. This is called a tolerance interval, and indeed its coverage of 95% of measured observations is exact. However, in practice, μ and σ are seldom known; thus, the user must apply x̄ ± ks. Now, of course, the interval is a random variable, and hence the coverage of a proportion of the population by the interval is not exact. As a result, a 100(1−γ)% confidence interval must be used since x̄ ± ks cannot be expected to cover any specified proportion all the time. As a result, we have the following definition. 
Tolerance Limits
For a normal distribution of measurements with unknown mean μ and unknown standard deviation σ, tolerance limits are given by x̄ ± ks, where k is determined such that one can assert with 100(1 − γ)% confidence that the given limits contain at least the proportion 1 − α of the measurements. Table A.7 gives values of k for 1 − α = 0.90, 0.95, 0.99; γ = 0.05, 0.01; and selected values of n from 2 to 300.

Example 9.9: Consider Example 9.8. With the information given, find a tolerance interval that gives two-sided 95% bounds on 90% of the distribution of packages of 95% lean beef. Assume the data came from an approximately normal distribution.
Solution: Recall from Example 9.8 that n = 30, the sample mean is 96.2%, and the sample standard deviation is 0.8%. From Table A.7, k = 2.14. Using x̄ ± ks = 96.2 ± (2.14)(0.8),
we find that the lower and upper bounds are 94.5 and 97.9. We are 95% confident that the above range covers the central 90% of the distribution of 95% lean beef packages.

Distinction among Confidence Intervals, Prediction Intervals, and Tolerance Intervals

It is important to reemphasize the difference among the three types of intervals discussed and illustrated in the preceding sections. The computations are straightforward, but interpretation can be confusing. In real-life applications, these intervals are not interchangeable because their interpretations are quite distinct. In the case of confidence intervals, one is attentive only to the population mean. For example, Exercise 9.13 on page 283 deals with an engineering process that produces shearing pins. A specification will be set on Rockwell hardness, below which a customer will not accept any pins. Here, a population parameter must take a backseat. It is important that the engineer know where the majority of the values of Rockwell hardness are going to be. Thus, tolerance limits should be used. Surely, when tolerance limits on any process output are tighter than process specifications, that is good news for the process manager. It is true that the tolerance limit interpretation is somewhat related to the confidence interval. The 100(1 − γ)% tolerance interval on, say, the proportion 0.95 can be viewed as a confidence interval on the middle 95% of the corresponding normal distribution. One-sided tolerance limits are also relevant. In the case of the Rockwell hardness problem, it is desirable to have a lower bound of the form x̄ − ks such that there is 99% confidence that at least 99% of Rockwell hardness values will exceed the computed value. Prediction intervals are applicable when it is important to determine a bound on a single value. The mean is not the issue here, nor is the location of the majority of the population.
Rather, the location of a single new observation is required.

Case Study 9.1: Machine Quality: A machine produces metal pieces that are cylindrical in shape. A sample of these pieces is taken and the diameters are found to be 1.01, 0.97, 1.03, 1.04, 0.99, 0.98, 0.99, 1.01, and 1.03 centimeters. Use these data to calculate three interval types and draw interpretations that illustrate the distinction between them in the context of the system. For all computations, assume an approximately normal distribution. The sample mean and standard deviation for the given data are x̄ = 1.0056 and s = 0.0246.
(a) Find a 99% confidence interval on the mean diameter.
(b) Compute a 99% prediction interval on a measured diameter of a single metal piece taken from the machine.
(c) Find the 99% tolerance limits that will contain 95% of the metal pieces produced by this machine.
Solution: (a) The 99% confidence interval for the mean diameter is given by
x̄ ± t0.005 s/√n = 1.0056 ± (3.355)(0.0246/3) = 1.0056 ± 0.0275.
Thus, the 99% confidence bounds are 0.9781 and 1.0331.
(b) The 99% prediction interval for a future observation is given by
x̄ ± t0.005 s√(1 + 1/n) = 1.0056 ± (3.355)(0.0246)√(1 + 1/9),
with the bounds being 0.9186 and 1.0926.
(c) From Table A.7, for n = 9, 1 − γ = 0.99, and 1 − α = 0.95, we find k = 4.550 for two-sided limits. Hence, the 99% tolerance limits are given by
x̄ ± ks = 1.0056 ± (4.550)(0.0246),
with the bounds being 0.8937 and 1.1175. We are 99% confident that the tolerance interval from 0.8937 to 1.1175 will contain the central 95% of the distribution of diameters produced.
This case study illustrates that the three types of limits can give appreciably different results even though they are all 99% bounds. In the case of the confidence interval on the mean, 99% of such intervals cover the population mean diameter. Thus, we say that we are 99% confident that the mean diameter produced by the process is between 0.9781 and 1.0331 centimeters. Emphasis is placed on the mean, with less concern about a single reading or the general nature of the distribution of diameters in the population. In the case of the prediction limits, the bounds 0.9186 and 1.0926 are based on the distribution of a single “new” metal piece taken from the process, and again 99% of such limits will cover the diameter of a new measured piece. On the other hand, the tolerance limits, as suggested in the previous section, give the engineer a sense of where the “majority,” say the central 95%, of the diameters of measured pieces in the population reside. The 99% tolerance limits, 0.8937 and 1.1175, are numerically quite different from the other two bounds. If these bounds appear alarmingly wide to the engineer, that reflects negatively on process quality. On the other hand, if the bounds represent a desirable result, the engineer may conclude that a majority (95% here) of the diameters are in a desirable range.
Again, a confidence interval interpretation may be used: namely, 99% of such calculated bounds will cover the middle 95% of the population of diameters.

Exercises

9.1 A UCLA researcher claims that the life span of mice can be extended by as much as 25% when the calories in their diet are reduced by approximately 40% from the time they are weaned. The restricted diet is enriched to normal levels by vitamins and protein. Assuming that it is known from previous studies that σ = 5.8 months, how many mice should be included in our sample if we wish to be 99% confident that the mean life span of the sample will be within 2 months of the population mean for all mice subjected to this reduced diet?

9.2 An electrical firm manufactures light bulbs that have a length of life that is approximately normally distributed with a standard deviation of 40 hours. If a sample of 30 bulbs has an average life of 780 hours, find a 96% confidence interval for the population mean of all bulbs produced by this firm.

9.3 Many cardiac patients wear an implanted pacemaker to control their heartbeat. A plastic connector module mounts on the top of the pacemaker. Assuming a standard deviation of 0.0015 inch and an approximately normal distribution, find a 95% confidence
interval for the mean of the depths of all connector modules made by a certain manufacturing company. A random sample of 75 modules has an average depth of 0.310 inch.

9.4 The heights of a random sample of 50 college students showed a mean of 174.5 centimeters and a standard deviation of 6.9 centimeters.
(a) Construct a 98% confidence interval for the mean height of all college students.
(b) What can we assert with 98% confidence about the possible size of our error if we estimate the mean height of all college students to be 174.5 centimeters?

9.5 A random sample of 100 automobile owners in the state of Virginia shows that an automobile is driven on average 23,500 kilometers per year with a standard deviation of 3900 kilometers. Assume the distribution of measurements to be approximately normal.
(a) Construct a 99% confidence interval for the average number of kilometers an automobile is driven annually in Virginia.
(b) What can we assert with 99% confidence about the possible size of our error if we estimate the average number of kilometers driven by car owners in Virginia to be 23,500 kilometers per year?

9.6 How large a sample is needed in Exercise 9.2 if we wish to be 96% confident that our sample mean will be within 10 hours of the true mean?

9.7 How large a sample is needed in Exercise 9.3 if we wish to be 95% confident that our sample mean will be within 0.0005 inch of the true mean?

9.8 An efficiency expert wishes to determine the average time that it takes to drill three holes in a certain metal clamp. How large a sample will she need to be 95% confident that her sample mean will be within 15 seconds of the true mean? Assume that it is known from previous studies that σ = 40 seconds.

9.9 Regular consumption of presweetened cereals contributes to tooth decay, heart disease, and other degenerative diseases, according to studies conducted by Dr. W. H. Bowen of the National Institute of Health and Dr. J.
Yudben, Professor of Nutrition and Dietetics at the University of London. In a random sample consisting of 20 similar single servings of Alpha-Bits, the average sugar content was 11.3 grams with a standard deviation of 2.45 grams. Assuming that the sugar contents are normally distributed, construct a 95% confidence interval for the mean sugar content for single servings of Alpha-Bits.

9.10 A random sample of 12 graduates of a certain secretarial school typed an average of 79.3 words per minute with a standard deviation of 7.8 words per minute. Assuming a normal distribution for the number of words typed per minute, find a 95% confidence interval for the average number of words typed by all graduates of this school.

9.11 A machine produces metal pieces that are cylindrical in shape. A sample of pieces is taken, and the diameters are found to be 1.01, 0.97, 1.03, 1.04, 0.99, 0.98, 0.99, 1.01, and 1.03 centimeters. Find a 99% confidence interval for the mean diameter of pieces from this machine, assuming an approximately normal distribution.

9.12 A random sample of 10 chocolate energy bars of a certain brand has, on average, 230 calories per bar, with a standard deviation of 15 calories. Construct a 99% confidence interval for the true mean calorie content of this brand of energy bar. Assume that the distribution of the calorie content is approximately normal.

9.13 A random sample of 12 shearing pins is taken in a study of the Rockwell hardness of the pin head. Measurements on the Rockwell hardness are made for each of the 12, yielding an average value of 48.50 with a sample standard deviation of 1.5. Assuming the measurements to be normally distributed, construct a 90% confidence interval for the mean Rockwell hardness.
9.14 The following measurements were recorded for the drying time, in hours, of a certain brand of latex paint:
3.4 2.5 4.8 2.9 3.6
2.8 3.3 5.6 3.7 2.8
4.4 4.0 5.2 3.0 4.8
Assuming that the measurements represent a random sample from a normal population, find a 95% prediction interval for the drying time for the next trial of the paint.

9.15 Referring to Exercise 9.5, construct a 99% prediction interval for the kilometers traveled annually by an automobile owner in Virginia.

9.16 Consider Exercise 9.10. Compute the 95% prediction interval for the next observed number of words per minute typed by a graduate of the secretarial school.

9.17 Consider Exercise 9.9. Compute a 95% prediction interval for the sugar content of the next single serving of Alpha-Bits.

9.18 Referring to Exercise 9.13, construct a 95% tolerance interval containing 90% of the measurements.
9.19 A random sample of 25 tablets of buffered aspirin contains, on average, 325.05 mg of aspirin per tablet, with a standard deviation of 0.5 mg. Find the 95% tolerance limits that will contain 90% of the tablet contents for this brand of buffered aspirin. Assume that the aspirin content is normally distributed.

9.20 Consider the situation of Exercise 9.11. Estimation of the mean diameter, while important, is not nearly as important as trying to pin down the location of the majority of the distribution of diameters. Find the 95% tolerance limits that contain 95% of the diameters.

9.21 In a study conducted by the Department of Zoology at Virginia Tech, fifteen samples of water were collected from a certain station in the James River in order to gain some insight regarding the amount of orthophosphorus in the river. The concentration of the chemical is measured in milligrams per liter. Let us suppose that the mean at the station is not as important as the upper extreme of the distribution of the concentration of the chemical at the station. Concern centers around whether the concentration at the extreme is too large. Readings for the fifteen water samples gave a sample mean of 3.84 milligrams per liter and a sample standard deviation of 3.07 milligrams per liter. Assume that the readings are a random sample from a normal distribution. Calculate a prediction interval (upper 95% prediction limit) and a tolerance limit (95% upper tolerance limit that exceeds 95% of the population of values). Interpret both; that is, tell what each communicates about the upper extreme of the distribution of orthophosphorus at the sampling station.

9.22 A type of thread is being studied for its tensile strength properties. Fifty pieces were tested under similar conditions, and the results showed an average tensile strength of 78.3 kilograms and a standard deviation of 5.6 kilograms.
Assuming a normal distribution of tensile strengths, give a lower 95% prediction limit on a single observed tensile strength value. In addition, give a lower 95% tolerance limit that is exceeded by 99% of the tensile strength values.

9.23 Refer to Exercise 9.22. Why are the quantities requested in the exercise likely to be more important to the manufacturer of the thread than, say, a confidence interval on the mean tensile strength?

9.24 Refer to Exercise 9.22 again. Suppose that specifications by a buyer of the thread are that the tensile strength of the material must be at least 62 kilograms. The manufacturer is satisfied if at most 5% of the manufactured pieces have tensile strength less than 62 kilograms. Is there cause for concern? Use a one-sided 99% tolerance limit that is exceeded by 95% of the tensile strength values.

9.25 Consider the drying time measurements in Exercise 9.14. Suppose the 15 observations in the data set are supplemented by a 16th value of 6.9 hours. In the context of the original 15 observations, is the 16th value an outlier? Show work.

9.26 Consider the data in Exercise 9.13. Suppose the manufacturer of the shearing pins insists that the Rockwell hardness of the product be less than or equal to 44.0 only 5% of the time. What is your reaction? Use a tolerance limit calculation as the basis for your judgment.

9.27 Consider the situation of Case Study 9.1 on page 281 with a larger sample of metal pieces. The diameters are as follows: 1.01, 0.97, 1.03, 1.04, 0.99, 0.98, 1.01, 1.03, 0.99, 1.00, 1.00, 0.99, 0.98, 1.01, 1.02, 0.99 centimeters. Once again the normality assumption may be made. Do the following and compare your results to those of the case study. Discuss how they are different and why.
(a) Compute a 99% confidence interval on the mean diameter.
(b) Compute a 99% prediction interval on the next diameter to be measured.
(c) Compute a 99% tolerance interval for coverage of the central 95% of the distribution of diameters.

9.28 In Section 9.3, we emphasized the notion of “most efficient estimator” by comparing the variance of two unbiased estimators Θ̂1 and Θ̂2. However, this does not take into account bias in case one or both estimators are not unbiased. Consider the quantity MSE = E[(Θ̂ − θ)²], where MSE denotes mean squared error. The MSE is often used to compare two estimators Θ̂1 and Θ̂2 of θ when either or both estimators are biased because (i) it is intuitively reasonable and (ii) it accounts for bias. Show that MSE can be written
MSE = E[Θ̂ − E(Θ̂)]² + [E(Θ̂) − θ]² = Var(Θ̂) + [Bias(Θ̂)]².

9.29 Let us define S′² = Σⁿᵢ₌₁ (Xi − X̄)²/n. Show that E(S′²) = [(n − 1)/n]σ², and hence S′² is a biased estimator for σ².

9.30 Consider S′², the estimator of σ², from Exercise 9.29. Analysts often use S′² rather than dividing Σⁿᵢ₌₁ (Xi − X̄)² by n − 1, the degrees of freedom in the sample.
(a) What is the bias of S′²?
(b) Show that the bias of S′² approaches zero as n → ∞.

9.31 If X is a binomial random variable, show that
(a) P̂ = X/n is an unbiased estimator of p;
(b) P̂′ = (X + √n/2)/(n + √n) is a biased estimator of p.

9.32 Show that the estimator P̂′ of Exercise 9.31(b) becomes unbiased as n → ∞.

9.33 Compare S² and S′² (see Exercise 9.29), the two estimators of σ², to determine which is more efficient. Assume these estimators are found using X1, X2, . . . , Xn, independent random variables from n(x; μ, σ). Which estimator is more efficient considering only the variance of the estimators? [Hint: Make use of Theorem 8.4 and the fact that the variance of χ²ᵥ is 2v, from Section 6.7.]

9.34 Consider Exercise 9.33. Use the MSE discussed in Exercise 9.28 to determine which estimator is more efficient. Write out the ratio MSE(S²)/MSE(S′²).

9.8 Two Samples: Estimating the Difference between Two Means

If we have two populations with means μ1 and μ2 and variances σ1² and σ2², respectively, a point estimator of the difference between μ1 and μ2 is given by the statistic X̄1 − X̄2. Therefore, to obtain a point estimate of μ1 − μ2, we shall select two independent random samples, one from each population, of sizes n1 and n2, and compute x̄1 − x̄2, the difference of the sample means. Clearly, we must consider the sampling distribution of X̄1 − X̄2. According to Theorem 8.3, we can expect the sampling distribution of X̄1 − X̄2 to be approximately normally distributed with mean μX̄1−X̄2 = μ1 − μ2 and standard deviation σX̄1−X̄2 = √(σ1²/n1 + σ2²/n2). Therefore, we can assert with a probability of 1 − α that the standard normal variable
Z = [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2)
will fall between −zα/2 and zα/2. Referring once again to Figure 9.2, we write
P(−zα/2 < Z < zα/2) = 1 − α.
Substituting for Z, we state equivalently that
P(−zα/2 < [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2) < zα/2) = 1 − α,
which leads to the following 100(1 − α)% confidence interval for μ1 − μ2.

Confidence Interval for μ1 − μ2, σ1² and σ2² Known
If x̄1 and x̄2 are means of independent random samples of sizes n1 and n2 from populations with known variances σ1² and σ2², respectively, a 100(1 − α)% confidence interval for μ1 − μ2 is given by
(x̄1 − x̄2) − zα/2 √(σ1²/n1 + σ2²/n2) < μ1 − μ2 < (x̄1 − x̄2) + zα/2 √(σ1²/n1 + σ2²/n2),
where zα/2 is the z-value leaving an area of α/2 to the right.
The degree of confidence is exact when samples are selected from normal populations. For nonnormal populations, the Central Limit Theorem allows for a good approximation for samples of reasonable size.

The Experimental Conditions and the Experimental Unit

For the case of confidence interval estimation on the difference between two means, we need to consider the experimental conditions in the data-taking process. It is assumed that we have two independent random samples from distributions with means μ1 and μ2, respectively. It is important that experimental conditions emulate this ideal described by these assumptions as closely as possible. Quite often, the experimenter should plan the strategy of the experiment accordingly. For almost any study of this type, there is a so-called experimental unit, which is that part of the experiment that produces experimental error and is responsible for the population variance we refer to as σ². In a drug study, the experimental unit is the patient or subject. In an agricultural experiment, it may be a plot of ground. In a chemical experiment, it may be a quantity of raw materials. It is important that differences between the experimental units have minimal impact on the results. The experimenter will have a degree of insurance that experimental units will not bias results if the conditions that define the two populations are randomly assigned to the experimental units. We shall again focus on randomization in future chapters that deal with hypothesis testing.

Example 9.10: A study was conducted in which two types of engines, A and B, were compared. Gas mileage, in miles per gallon, was measured. Fifty experiments were conducted using engine type A and 75 experiments were done with engine type B. The gasoline used and other conditions were held constant. The average gas mileage was 36 miles per gallon for engine A and 42 miles per gallon for engine B.
Find a 96% confidence interval on μB − μA, where μA and μB are population mean gas mileages for engines A and B, respectively. Assume that the population standard deviations are 6 and 8 for engines A and B, respectively.
Solution: The point estimate of μB − μA is x̄B − x̄A = 42 − 36 = 6. Using α = 0.04, we find z0.02 = 2.05 from Table A.3. Hence, with substitution in the formula above, the 96% confidence interval is
6 − 2.05√(64/75 + 36/50) < μB − μA < 6 + 2.05√(64/75 + 36/50),
or simply 3.43 < μB − μA < 8.57.
This procedure for estimating the difference between two means is applicable if σ1² and σ2² are known. If the variances are not known and the two distributions involved are approximately normal, the t-distribution becomes involved, as in the case of a single sample. If one is not willing to assume normality, large samples (say greater than 30) will allow the use of s1 and s2 in place of σ1 and σ2, respectively, with the rationale that s1 ≈ σ1 and s2 ≈ σ2. Again, of course, the confidence interval is an approximate one.
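As a computational sketch (our own illustrative helper, not part of the text), the known-variance interval of Example 9.10 can be reproduced directly; the critical value z_{α/2} is taken from Table A.3 and passed in as a parameter.

```python
import math

def z_interval_diff(xbar1, xbar2, var1, var2, n1, n2, z_half):
    """100(1 - alpha)% CI for mu1 - mu2 when sigma1^2 and sigma2^2 are known.
    z_half is z_{alpha/2} from the standard normal table (Table A.3)."""
    diff = xbar1 - xbar2
    se = math.sqrt(var1 / n1 + var2 / n2)   # std. deviation of X1bar - X2bar
    return diff - z_half * se, diff + z_half * se

# Example 9.10: engine B minus engine A, 96% confidence, z_0.02 = 2.05
lo, hi = z_interval_diff(42, 36, 8**2, 6**2, 75, 50, 2.05)  # ≈ (3.43, 8.57)
```

The result agrees with the hand computation above: 3.43 < μB − μA < 8.57.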
Variances Unknown but Equal

Consider the case where σ1² and σ2² are unknown. If σ1² = σ2² = σ², we obtain a standard normal variable of the form
Z = [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ²[(1/n1) + (1/n2)]).
According to Theorem 8.4, the two random variables
(n1 − 1)S1²/σ² and (n2 − 1)S2²/σ²
have chi-squared distributions with n1 − 1 and n2 − 1 degrees of freedom, respectively. Furthermore, they are independent chi-squared variables, since the random samples were selected independently. Consequently, their sum
V = (n1 − 1)S1²/σ² + (n2 − 1)S2²/σ² = [(n1 − 1)S1² + (n2 − 1)S2²]/σ²
has a chi-squared distribution with v = n1 + n2 − 2 degrees of freedom. Since the preceding expressions for Z and V can be shown to be independent, it follows from Theorem 8.5 that the statistic
T = {[(X̄1 − X̄2) − (μ1 − μ2)] / √(σ²[(1/n1) + (1/n2)])} / √{[(n1 − 1)S1² + (n2 − 1)S2²] / [σ²(n1 + n2 − 2)]}
has the t-distribution with v = n1 + n2 − 2 degrees of freedom. A point estimate of the unknown common variance σ² can be obtained by pooling the sample variances. Denoting the pooled estimator by Sp², we have the following.

Pooled Estimate of Variance
Sp² = [(n1 − 1)S1² + (n2 − 1)S2²] / (n1 + n2 − 2).

Substituting Sp² in the T statistic, we obtain the less cumbersome form
T = [(X̄1 − X̄2) − (μ1 − μ2)] / [Sp √((1/n1) + (1/n2))].
Using the T statistic, we have P(−tα/2 < T < tα/2) = 1 − α, where tα/2 is the t-value with n1 + n2 − 2 degrees of freedom, above which we find an area of α/2. Substituting for T in the inequality, we write
P(−tα/2 < [(X̄1 − X̄2) − (μ1 − μ2)] / [Sp √((1/n1) + (1/n2))] < tα/2) = 1 − α.
After the usual mathematical manipulations, the difference of the sample means x̄1 − x̄2 and the pooled variance are computed and then the following 100(1 − α)% confidence interval for μ1 − μ2 is obtained. The value of sp² is easily seen to be a weighted average of the two sample variances s1² and s2², where the weights are the degrees of freedom.
Confidence Interval for μ1 − μ2, σ1² = σ2² but Both Unknown
If x̄1 and x̄2 are the means of independent random samples of sizes n1 and n2, respectively, from approximately normal populations with unknown but equal variances, a 100(1 − α)% confidence interval for μ1 − μ2 is given by
(x̄1 − x̄2) − tα/2 sp √(1/n1 + 1/n2) < μ1 − μ2 < (x̄1 − x̄2) + tα/2 sp √(1/n1 + 1/n2),
where sp is the pooled estimate of the population standard deviation and tα/2 is the t-value with v = n1 + n2 − 2 degrees of freedom, leaving an area of α/2 to the right.

Example 9.11: The article “Macroinvertebrate Community Structure as an Indicator of Acid Mine Pollution,” published in the Journal of Environmental Pollution, reports on an investigation undertaken in Cane Creek, Alabama, to determine the relationship between selected physiochemical parameters and different measures of macroinvertebrate community structure. One facet of the investigation was an evaluation of the effectiveness of a numerical species diversity index to indicate aquatic degradation due to acid mine drainage. Conceptually, a high index of macroinvertebrate species diversity should indicate an unstressed aquatic system, while a low diversity index should indicate a stressed aquatic system. Two independent sampling stations were chosen for this study, one located downstream from the acid mine discharge point and the other located upstream. For 12 monthly samples collected at the downstream station, the species diversity index had a mean value x̄1 = 3.11 and a standard deviation s1 = 0.771, while 10 monthly samples collected at the upstream station had a mean index value x̄2 = 2.04 and a standard deviation s2 = 0.448. Find a 90% confidence interval for the difference between the population means for the two locations, assuming that the populations are approximately normally distributed with equal variances.
Solution: Let μ1 and μ2 represent the population means, respectively, for the species diversity indices at the downstream and upstream stations. We wish to find a 90% confidence interval for μ1 − μ2. Our point estimate of μ1 − μ2 is x̄1 − x̄2 = 3.11 − 2.04 = 1.07. The pooled estimate, sp², of the common variance, σ², is
sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2) = [(11)(0.771²) + (9)(0.448²)] / (12 + 10 − 2) = 0.417.
Taking the square root, we obtain sp = 0.646. Using α = 0.1, we find in Table A.4 that t0.05 = 1.725 for v = n1 + n2 − 2 = 20 degrees of freedom. Therefore, the 90% confidence interval for μ1 − μ2 is
1.07 − (1.725)(0.646)√(1/12 + 1/10) < μ1 − μ2 < 1.07 + (1.725)(0.646)√(1/12 + 1/10),
which simplifies to 0.593 < μ1 − μ2 < 1.547.
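The pooled-variance computation can be sketched in code (an illustrative helper of our own; the critical value t_{α/2} with n1 + n2 − 2 degrees of freedom is looked up in Table A.4 and passed in). Running it on the Example 9.11 data reproduces the interval above.

```python
import math

def pooled_t_interval(x1, s1, n1, x2, s2, n2, t_half):
    """CI for mu1 - mu2 assuming equal (but unknown) variances.
    t_half is t_{alpha/2} with n1 + n2 - 2 degrees of freedom (Table A.4)."""
    # Pooled variance: weighted average of s1^2 and s2^2 by degrees of freedom
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2) * math.sqrt(1 / n1 + 1 / n2)
    diff = x1 - x2
    return diff - t_half * se, diff + t_half * se

# Example 9.11: downstream vs. upstream diversity indices, t_0.05 = 1.725 (20 df)
lo, hi = pooled_t_interval(3.11, 0.771, 12, 2.04, 0.448, 10, 1.725)  # ≈ (0.593, 1.547)
```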
Interpretation of the Confidence Interval

For the case of a single parameter, the confidence interval simply provides error bounds on the parameter. Values contained in the interval should be viewed as reasonable values given the experimental data. In the case of a difference between two means, the interpretation can be extended to one of comparing the two means. For example, if we have high confidence that a difference μ1 − μ2 is positive, we would certainly infer that μ1 > μ2 with little risk of being in error. For example, in Example 9.11, we are 90% confident that the interval from 0.593 to 1.547 contains the difference of the population means for values of the species diversity index at the two stations. The fact that both confidence limits are positive indicates that, on the average, the index for the station located downstream from the discharge point is greater than the index for the station located upstream.

Equal Sample Sizes

The procedure for constructing confidence intervals for μ1 − μ2 with σ1 = σ2 = σ unknown requires the assumption that the populations are normal. Slight departures from either the equal variance or the normality assumption do not seriously alter the degree of confidence for our interval. (A procedure is presented in Chapter 10 for testing the equality of two unknown population variances based on the information provided by the sample variances.) If the population variances are considerably different, we still obtain reasonable results when the populations are normal, provided that n1 = n2. Therefore, in planning an experiment, one should make every effort to equalize the size of the samples.

Unknown and Unequal Variances

Let us now consider the problem of finding an interval estimate of μ1 − μ2 when the unknown population variances are not likely to be equal.
The statistic most often used in this case is
T′ = [(X̄1 − X̄2) − (μ1 − μ2)] / √(S1²/n1 + S2²/n2),
which has approximately a t-distribution with v degrees of freedom, where
v = (s1²/n1 + s2²/n2)² / {[(s1²/n1)²/(n1 − 1)] + [(s2²/n2)²/(n2 − 1)]}.
Since v is seldom an integer, we round it down to the nearest whole number. The above estimate of the degrees of freedom is called the Satterthwaite approximation (Satterthwaite, 1946, in the Bibliography). Using the statistic T′, we write
P(−tα/2 < T′ < tα/2) ≈ 1 − α,
where tα/2 is the value of the t-distribution with v degrees of freedom, above which we find an area of α/2. Substituting for T′ in the inequality and following the same steps as before, we state the final result.
Confidence Interval for μ1 − μ2, σ1² ≠ σ2² and Both Unknown
If x̄1 and s1² and x̄2 and s2² are the means and variances of independent random samples of sizes n1 and n2, respectively, from approximately normal populations with unknown and unequal variances, an approximate 100(1 − α)% confidence interval for μ1 − μ2 is given by
(x̄1 − x̄2) − tα/2 √(s1²/n1 + s2²/n2) < μ1 − μ2 < (x̄1 − x̄2) + tα/2 √(s1²/n1 + s2²/n2),
where tα/2 is the t-value with
v = (s1²/n1 + s2²/n2)² / {[(s1²/n1)²/(n1 − 1)] + [(s2²/n2)²/(n2 − 1)]}
degrees of freedom, leaving an area of α/2 to the right.

Note that the expression for v above involves random variables, and thus v is an estimate of the degrees of freedom. In applications, this estimate will not result in a whole number, and thus the analyst must round down to the nearest integer to achieve the desired confidence. Before we illustrate the above confidence interval with an example, we should point out that all the confidence intervals on μ1 − μ2 are of the same general form as those on a single mean; namely, they can be written as
point estimate ± tα/2 · ŝ.e.(point estimate)
or
point estimate ± zα/2 · s.e.(point estimate).
For example, in the case where σ1 = σ2 = σ, the estimated standard error of x̄1 − x̄2 is sp√(1/n1 + 1/n2). For the case where σ1² ≠ σ2²,
ŝ.e.(x̄1 − x̄2) = √(s1²/n1 + s2²/n2).

Example 9.12: A study was conducted by the Department of Zoology at Virginia Tech to estimate the difference in the amounts of the chemical orthophosphorus measured at two different stations on the James River. Orthophosphorus was measured in milligrams per liter. Fifteen samples were collected from station 1, and 12 samples were obtained from station 2.
The 15 samples from station 1 had an average orthophosphorus content of 3.84 milligrams per liter and a standard deviation of 3.07 milligrams per liter, while the 12 samples from station 2 had an average content of 1.49 milligrams per liter and a standard deviation of 0.80 milligram per liter. Find a 95% confidence interval for the difference in the true average orthophosphorus contents at these two stations, assuming that the observations came from normal populations with different variances. Solution: For station 1, we have x̄1 = 3.84, s1 = 3.07, and n1 = 15. For station 2, x̄2 = 1.49, s2 = 0.80, and n2 = 12. We wish to find a 95% confidence interval for μ1 − μ2.
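Before completing the solution by hand, the Satterthwaite arithmetic can be sketched in code (illustrative helpers of our own; the t critical value for the rounded-down v is taken from Table A.4 and passed in as a parameter).

```python
import math

def satterthwaite_df(s1, n1, s2, n2):
    """Satterthwaite approximation to the degrees of freedom.
    The result is rounded DOWN to the nearest integer when used."""
    a, b = s1**2 / n1, s2**2 / n2
    return (a + b) ** 2 / (a**2 / (n1 - 1) + b**2 / (n2 - 1))

def welch_interval(x1, s1, n1, x2, s2, n2, t_half):
    """Approximate CI for mu1 - mu2 with unequal, unknown variances.
    t_half is t_{alpha/2} with the (rounded-down) Satterthwaite df."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    diff = x1 - x2
    return diff - t_half * se, diff + t_half * se

# Example 9.12 data: v ≈ 16.3, so use v = 16 and t_0.025 = 2.120 from Table A.4
v = satterthwaite_df(3.07, 15, 0.80, 12)
lo, hi = welch_interval(3.84, 3.07, 15, 1.49, 0.80, 12, 2.120)  # ≈ (0.60, 4.10)
```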
Since the population variances are assumed to be unequal, we can only find an approximate 95% confidence interval based on the t-distribution with v degrees of freedom, where
v = (3.07²/15 + 0.80²/12)² / {[(3.07²/15)²/14] + [(0.80²/12)²/11]} = 16.3 ≈ 16.
Our point estimate of μ1 − μ2 is x̄1 − x̄2 = 3.84 − 1.49 = 2.35. Using α = 0.05, we find in Table A.4 that t0.025 = 2.120 for v = 16 degrees of freedom. Therefore, the 95% confidence interval for μ1 − μ2 is
2.35 − 2.120√(3.07²/15 + 0.80²/12) < μ1 − μ2 < 2.35 + 2.120√(3.07²/15 + 0.80²/12),
which simplifies to 0.60 < μ1 − μ2 < 4.10. Hence, we are 95% confident that the interval from 0.60 to 4.10 milligrams per liter contains the difference of the true average orthophosphorus contents for these two locations.
When two population variances are unknown, the assumption of equal variances or unequal variances may be precarious. In Section 10.10, a procedure will be introduced that will aid in discriminating between the equal variance and the unequal variance situation.

9.9 Paired Observations

At this point, we shall consider estimation procedures for the difference of two means when the samples are not independent and the variances of the two populations are not necessarily equal. The situation considered here deals with a very special experimental condition, namely that of paired observations. Unlike in the situation described earlier, the conditions of the two populations are not assigned randomly to experimental units. Rather, each homogeneous experimental unit receives both population conditions; as a result, each experimental unit has a pair of observations, one for each population. For example, if we run a test on a new diet using 15 individuals, the weights before and after going on the diet form the information for our two samples. The two populations are “before” and “after,” and the experimental unit is the individual. Obviously, the observations in a pair have something in common.
To determine if the diet is effective, we consider the differences d1, d2, . . . , dn in the paired observations. These differences are the values of a random sample D1, D2, . . . , Dn from a population of differences that we shall assume to be normally distributed with mean μD = μ1 − μ2 and variance σD². We estimate σD² by sd², the variance of the differences that constitute our sample. The point estimator of μD is given by D̄.

When Should Pairing Be Done?

Pairing observations in an experiment is a strategy that can be employed in many fields of application. The reader will be exposed to this concept in material related
to hypothesis testing in Chapter 10 and experimental design issues in Chapters 13 and 15. Selecting experimental units that are relatively homogeneous (within the units) and allowing each unit to experience both population conditions reduces the effective experimental error variance (in this case, σD²). The reader may visualize the ith pair difference as

Di = X1i − X2i.

Since the two observations are taken on the same experimental unit, they are not independent and, in fact,

Var(Di) = Var(X1i − X2i) = σ1² + σ2² − 2 Cov(X1i, X2i).

Now, intuitively, we expect that σD² should be reduced because of the similarity in nature of the "errors" of the two observations within a given experimental unit, and this comes through in the expression above. One certainly expects that if the unit is homogeneous, the covariance is positive. As a result, the gain in quality of the confidence interval over that obtained without pairing will be greatest when there is homogeneity within units and large differences as one goes from unit to unit. One should keep in mind that the performance of the confidence interval will depend on the standard error of D̄, which is, of course, σD/√n, where n is the number of pairs. As we indicated earlier, the intent of pairing is to reduce σD.

Tradeoff between Reducing Variance and Losing Degrees of Freedom

Comparing the confidence intervals obtained with and without pairing makes apparent that there is a tradeoff involved. Although pairing should indeed reduce variance and hence reduce the standard error of the point estimate, the degrees of freedom are reduced by reducing the problem to a one-sample problem. As a result, the tα/2 point attached to the standard error is adjusted accordingly. Thus, pairing may be counterproductive. This would certainly be the case if one experienced only a modest reduction in variance (through σD²) by pairing.
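The covariance argument above can be illustrated with a small simulation. This sketch is not from the text: each hypothetical experimental unit contributes a shared random effect to both observations, so Cov(X1i, X2i) > 0 and the variance of the differences collapses toward the pure within-unit error variance.

```python
import random
import statistics

random.seed(7)
n_units = 20_000
sigma_unit, sigma_err = 3.0, 1.0   # unit-to-unit spread vs. within-unit error

x1, x2 = [], []
for _ in range(n_units):
    u = random.gauss(0.0, sigma_unit)                   # effect shared by both observations
    x1.append(u + random.gauss(0.0, sigma_err))         # population condition 1
    x2.append(u + random.gauss(0.0, sigma_err) + 0.5)   # condition 2, true mean shift 0.5

d = [a - b for a, b in zip(x1, x2)]
var_sum = statistics.variance(x1) + statistics.variance(x2)  # what Var(D) would be if independent
var_paired = statistics.variance(d)                          # actual Var(D); the unit effect cancels

print(round(var_sum, 1))     # close to 2*(sigma_unit^2 + sigma_err^2) = 20
print(round(var_paired, 1))  # close to 2*sigma_err^2 = 2
```

With homogeneity within units (large positive covariance), Var(D) drops by a factor of roughly ten here, which is exactly the gain pairing is meant to deliver.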
Another illustration of pairing involves choosing n pairs of subjects, with each pair having a similar characteristic such as IQ, age, or breed, and then selecting one member of each pair at random to yield a value of X1, leaving the other member to provide the value of X2. In this case, X1 and X2 might represent the grades obtained by two individuals of equal IQ when one of the individuals is assigned at random to a class using the conventional lecture approach while the other individual is assigned to a class using programmed materials.

A 100(1 − α)% confidence interval for μD can be established by writing

P(−tα/2 < T < tα/2) = 1 − α,

where T = (D̄ − μD)/(Sd/√n) and tα/2, as before, is a value of the t-distribution with n − 1 degrees of freedom.

It is now a routine procedure to replace T by its definition in the inequality above and carry out the mathematical steps that lead to the following 100(1 − α)% confidence interval for μ1 − μ2 = μD.
Confidence Interval for μD = μ1 − μ2 for Paired Observations
If d̄ and sd are the mean and standard deviation, respectively, of the normally distributed differences of n random pairs of measurements, a 100(1 − α)% confidence interval for μD = μ1 − μ2 is

d̄ − tα/2 sd/√n < μD < d̄ + tα/2 sd/√n,

where tα/2 is the t-value with v = n − 1 degrees of freedom, leaving an area of α/2 to the right.

Example 9.13: A study published in Chemosphere reported the levels of the dioxin TCDD of 20 Massachusetts Vietnam veterans who were possibly exposed to Agent Orange. The TCDD levels in plasma and in fat tissue are listed in Table 9.1. Find a 95% confidence interval for μ1 − μ2, where μ1 and μ2 represent the true mean TCDD levels in plasma and in fat tissue, respectively. Assume the distribution of the differences to be approximately normal.

Table 9.1: Data for Example 9.13

Veteran   TCDD Level in Plasma   TCDD Level in Fat Tissue     di
 1                2.5                     4.9               −2.4
 2                3.1                     5.9               −2.8
 3                2.1                     4.4               −2.3
 4                3.5                     6.9               −3.4
 5                3.1                     7.0               −3.9
 6                1.8                     4.2               −2.4
 7                6.0                    10.0               −4.0
 8                3.0                     5.5               −2.5
 9               36.0                    41.0               −5.0
10                4.7                     4.4                0.3
11                6.9                     7.0               −0.1
12                3.3                     2.9                0.4
13                4.6                     4.6                0.0
14                1.6                     1.4                0.2
15                7.2                     7.7               −0.5
16                1.8                     1.1                0.7
17               20.0                    11.0                9.0
18                2.0                     2.5               −0.5
19                2.5                     2.3                0.2
20                4.1                     2.5                1.6

Source: Schecter, A. et al. "Partitioning of 2,3,7,8-chlorinated dibenzo-p-dioxins and dibenzofurans between adipose tissue and plasma lipid of 20 Massachusetts Vietnam veterans," Chemosphere, Vol. 20, Nos. 7–9, 1990, pp. 954–955 (Tables I and II).

Solution: We wish to find a 95% confidence interval for μ1 − μ2. Since the observations are paired, μ1 − μ2 = μD. The point estimate of μD is d̄ = −0.87. The standard deviation, sd, of the sample differences is

sd = √[ (1/(n − 1)) Σᵢ (di − d̄)² ] = √(168.4220/19) = 2.9773.

Using α = 0.05, we find in Table A.4 that t0.025 = 2.093 for v = n − 1 = 19 degrees of freedom.
Therefore, the 95% confidence interval is

−0.8700 − (2.093)(2.9773/√20) < μD < −0.8700 + (2.093)(2.9773/√20),
or simply

−2.2634 < μD < 0.5234,

from which we can conclude that there is no significant difference between the mean TCDD level in plasma and the mean TCDD level in fat tissue.

Exercises

9.35 A random sample of size n1 = 25, taken from a normal population with a standard deviation σ1 = 5, has a mean x̄1 = 80. A second random sample of size n2 = 36, taken from a different normal population with a standard deviation σ2 = 3, has a mean x̄2 = 75. Find a 94% confidence interval for μ1 − μ2.

9.36 Two kinds of thread are being compared for strength. Fifty pieces of each type of thread are tested under similar conditions. Brand A has an average tensile strength of 78.3 kilograms with a standard deviation of 5.6 kilograms, while brand B has an average tensile strength of 87.2 kilograms with a standard deviation of 6.3 kilograms. Construct a 95% confidence interval for the difference of the population means.

9.37 A study was conducted to determine if a certain treatment has any effect on the amount of metal removed in a pickling operation. A random sample of 100 pieces was immersed in a bath for 24 hours without the treatment, yielding an average of 12.2 millimeters of metal removed and a sample standard deviation of 1.1 millimeters. A second sample of 200 pieces was exposed to the treatment, followed by the 24-hour immersion in the bath, resulting in an average removal of 9.1 millimeters of metal with a sample standard deviation of 0.9 millimeter. Compute a 98% confidence interval estimate for the difference between the population means. Does the treatment appear to reduce the mean amount of metal removed?

9.38 Two catalysts in a batch chemical process are being compared for their effect on the output of the process reaction. A sample of 12 batches was prepared using catalyst 1, and a sample of 10 batches was prepared using catalyst 2.
The 12 batches for which catalyst 1 was used in the reaction gave an average yield of 85 with a sample standard deviation of 4, and the 10 batches for which catalyst 2 was used gave an average yield of 81 and a sample standard deviation of 5. Find a 90% confidence interval for the difference between the population means, assuming that the populations are approximately normally distributed with equal variances.

9.39 Students may choose between a 3-semester-hour physics course without labs and a 4-semester-hour course with labs. The final written examination is the same for each section. If 12 students in the section with labs made an average grade of 84 with a standard deviation of 4, and 18 students in the section without labs made an average grade of 77 with a standard deviation of 6, find a 99% confidence interval for the difference between the average grades for the two courses. Assume the populations to be approximately normally distributed with equal variances.

9.40 In a study conducted at Virginia Tech on the development of ectomycorrhizal, a symbiotic relationship between the roots of trees and a fungus, in which minerals are transferred from the fungus to the trees and sugars from the trees to the fungus, 20 northern red oak seedlings exposed to the fungus Pisolithus tinctorus were grown in a greenhouse. All seedlings were planted in the same type of soil and received the same amount of sunshine and water. Half received no nitrogen at planting time, to serve as a control, and the other half received 368 ppm of nitrogen in the form NaNO3. The stem weights, in grams, at the end of 140 days were recorded as follows:

No Nitrogen   Nitrogen
0.32          0.26
0.53          0.43
0.28          0.47
0.37          0.49
0.47          0.52
0.43          0.75
0.36          0.79
0.42          0.86
0.38          0.62
0.43          0.46

Construct a 95% confidence interval for the difference in the mean stem weight between seedlings that receive no nitrogen and those that receive 368 ppm of nitrogen.
Assume the populations to be normally distributed with equal variances.

9.41 The following data represent the length of time, in days, to recovery for patients randomly treated with one of two medications to clear up severe bladder infections:

Medication 1: n1 = 14, x̄1 = 17, s1² = 1.5
Medication 2: n2 = 16, x̄2 = 19, s2² = 1.8

Find a 99% confidence interval for the difference μ2 − μ1
in the mean recovery times for the two medications, assuming normal populations with equal variances.

9.42 An experiment reported in Popular Science compared fuel economies for two types of similarly equipped diesel mini-trucks. Let us suppose that 12 Volkswagen and 10 Toyota trucks were tested in 90-kilometer-per-hour steady-paced trials. If the 12 Volkswagen trucks averaged 16 kilometers per liter with a standard deviation of 1.0 kilometer per liter and the 10 Toyota trucks averaged 11 kilometers per liter with a standard deviation of 0.8 kilometer per liter, construct a 90% confidence interval for the difference between the average kilometers per liter for these two mini-trucks. Assume that the distances per liter for the truck models are approximately normally distributed with equal variances.

9.43 A taxi company is trying to decide whether to purchase brand A or brand B tires for its fleet of taxis. To estimate the difference in the two brands, an experiment is conducted using 12 of each brand. The tires are run until they wear out. The results are

Brand A: x̄1 = 36,300 kilometers, s1 = 5000 kilometers.
Brand B: x̄2 = 38,100 kilometers, s2 = 6100 kilometers.

Compute a 95% confidence interval for μA − μB, assuming the populations to be approximately normally distributed. You may not assume that the variances are equal.

9.44 Referring to Exercise 9.43, find a 99% confidence interval for μ1 − μ2 if tires of the two brands are assigned at random to the left and right rear wheels of 8 taxis and the following distances, in kilometers, are recorded:

Taxi   Brand A   Brand B
1      34,400    36,700
2      45,500    46,800
3      36,700    37,700
4      32,000    31,100
5      48,400    47,800
6      32,800    36,400
7      38,100    38,900
8      30,100    31,500

Assume that the differences of the distances are approximately normally distributed.
9.45 The federal government awarded grants to the agricultural departments of 9 universities to test the yield capabilities of two new varieties of wheat. Each variety was planted on a plot of equal area at each university, and the yields, in kilograms per plot, were recorded as follows:

                     University
Variety    1   2   3   4   5   6   7   8   9
1         38  23  35  41  44  29  37  31  38
2         45  25  31  38  50  33  36  40  43

Find a 95% confidence interval for the mean difference between the yields of the two varieties, assuming the differences of yields to be approximately normally distributed. Explain why pairing is necessary in this problem.

9.46 The following data represent the running times of films produced by two motion-picture companies.

Company   Time (minutes)
I         103   94  110   87   98
II         97   82  123   92  175   88  118

Compute a 90% confidence interval for the difference between the average running times of films produced by the two companies. Assume that the running-time differences are approximately normally distributed with unequal variances.

9.47 Fortune magazine (March 1997) reported the total returns to investors for the 10 years prior to 1996 and also for 1996 for 431 companies. The total returns for 10 of the companies are listed below. Find a 95% confidence interval for the mean change in percent return to investors.

                       Total Return to Investors
Company                1986–96      1996
Coca-Cola               29.8%       43.3%
Mirage Resorts          27.9%       25.4%
Merck                   22.1%       24.0%
Microsoft               44.5%       88.3%
Johnson & Johnson       22.2%       18.1%
Intel                   43.8%      131.2%
Pfizer                  21.7%       34.0%
Procter & Gamble        21.9%       32.1%
Berkshire Hathaway      28.3%        6.2%
S&P 500                 11.8%       20.3%

9.48 An automotive company is considering two types of batteries for its automobile. Sample information on battery life is collected for 20 batteries of type A and 20 batteries of type B. The summary statistics are x̄A = 32.91, x̄B = 30.47, sA = 1.57, and sB = 1.74. Assume the data on each battery are normally distributed and assume σA = σB.
(a) Find a 95% confidence interval on μA − μB.
(b) Draw a conclusion from (a) that provides insight into whether A or B should be adopted.

9.49 Two different brands of latex paint are being considered for use. Fifteen specimens of each type of
paint were selected, and the drying times, in hours, were as follows:

Paint A: 3.5  2.7  3.9  4.2  3.6  2.7  3.3  5.2  4.2  2.9  4.4  5.2  4.0  4.1  3.4
Paint B: 4.7  3.9  4.5  5.5  4.0  5.3  4.3  6.0  5.2  3.7  5.5  6.2  5.1  5.4  4.8

Assume the drying time is normally distributed with σA = σB. Find a 95% confidence interval on μB − μA, where μA and μB are the mean drying times.

9.50 Two levels (low and high) of insulin doses are given to two groups of diabetic rats to check the insulin-binding capacity, yielding the following data:

Low dose:  n1 = 8,  x̄1 = 1.98, s1 = 0.51
High dose: n2 = 13, x̄2 = 1.30, s2 = 0.35

Assume that the variances are equal. Give a 95% confidence interval for the difference in the true average insulin-binding capacity between the two samples.

9.10 Single Sample: Estimating a Proportion

A point estimator of the proportion p in a binomial experiment is given by the statistic P̂ = X/n, where X represents the number of successes in n trials. Therefore, the sample proportion p̂ = x/n will be used as the point estimate of the parameter p.

If the unknown proportion p is not expected to be too close to 0 or 1, we can establish a confidence interval for p by considering the sampling distribution of P̂. Designating a failure in each binomial trial by the value 0 and a success by the value 1, the number of successes, x, can be interpreted as the sum of n values consisting only of 0s and 1s, and p̂ is just the sample mean of these n values. Hence, by the Central Limit Theorem, for n sufficiently large, P̂ is approximately normally distributed with mean

μP̂ = E(P̂) = E(X/n) = np/n = p

and variance

σ²P̂ = Var(X/n) = σ²X/n² = npq/n² = pq/n.

Therefore, we can assert that

P(−zα/2 < Z < zα/2) = 1 − α, with Z = (P̂ − p)/√(pq/n),

and zα/2 is the value above which we find an area of α/2 under the standard normal curve. Substituting for Z, we write

P(−zα/2 < (P̂ − p)/√(pq/n) < zα/2) = 1 − α.
When n is large, very little error is introduced by substituting the point estimate p̂ = x/n for the p under the radical sign. Then we can write

P(P̂ − zα/2 √(p̂q̂/n) < p < P̂ + zα/2 √(p̂q̂/n)) ≈ 1 − α.
On the other hand, by solving for p in the quadratic inequality above,

−zα/2 < (P̂ − p)/√(pq/n) < zα/2,

we obtain another form of the confidence interval for p, with limits

[p̂ + z²α/2/(2n)] / [1 + z²α/2/n]  ±  [zα/2/(1 + z²α/2/n)] √(p̂q̂/n + z²α/2/(4n²)).

For a random sample of size n, the sample proportion p̂ = x/n is computed, and the following approximate 100(1 − α)% confidence intervals for p can be obtained.

Large-Sample Confidence Intervals for p
If p̂ is the proportion of successes in a random sample of size n and q̂ = 1 − p̂, an approximate 100(1 − α)% confidence interval for the binomial parameter p is given by

(method 1)   p̂ − zα/2 √(p̂q̂/n) < p < p̂ + zα/2 √(p̂q̂/n)

or by

(method 2)   [p̂ + z²α/2/(2n)] / [1 + z²α/2/n] − [zα/2/(1 + z²α/2/n)] √(p̂q̂/n + z²α/2/(4n²))
             < p <
             [p̂ + z²α/2/(2n)] / [1 + z²α/2/n] + [zα/2/(1 + z²α/2/n)] √(p̂q̂/n + z²α/2/(4n²)),

where zα/2 is the z-value leaving an area of α/2 to the right.

When n is small and the unknown proportion p is believed to be close to 0 or to 1, the confidence-interval procedure established here is unreliable and, therefore, should not be used. To be on the safe side, one should require both np̂ and nq̂ to be greater than or equal to 5. The methods for finding a confidence interval for the binomial parameter p are also applicable when the binomial distribution is being used to approximate the hypergeometric distribution, that is, when n is small relative to N, as illustrated by Example 9.14.

Note that although method 2 yields more accurate results, it is more complicated to calculate, and the gain in accuracy that it provides diminishes when the sample size is large enough. Hence, method 1 is commonly used in practice.

Example 9.14: In a random sample of n = 500 families owning television sets in the city of Hamilton, Canada, it is found that x = 340 subscribe to HBO. Find a 95% confidence interval for the actual proportion of families with television sets in this city that subscribe to HBO.
Solution: The point estimate of p is p̂ = 340/500 = 0.68. Using Table A.3, we find that z0.025 = 1.96. Therefore, using method 1, the 95% confidence interval for p is

0.68 − 1.96 √((0.68)(0.32)/500) < p < 0.68 + 1.96 √((0.68)(0.32)/500),

which simplifies to 0.6391 < p < 0.7209.
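Both boxed intervals are easy to script. A minimal sketch for the data of Example 9.14, using the rounded table value z0.025 = 1.96 as in the text (so the last digit may differ slightly from a full-precision z):

```python
import math

n, x = 500, 340
phat = x / n
qhat = 1 - phat
z = 1.96  # z_{0.025} from Table A.3

# Method 1: simple large-sample interval p-hat +/- z * sqrt(p-hat*q-hat/n)
half1 = z * math.sqrt(phat * qhat / n)
m1 = (phat - half1, phat + half1)

# Method 2: limits from solving the quadratic inequality in p
center = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
half2 = (z / (1 + z**2 / n)) * math.sqrt(phat * qhat / n + z**2 / (4 * n**2))
m2 = (center - half2, center + half2)

print(tuple(round(v, 4) for v in m1))  # (0.6391, 0.7209)
print(tuple(round(v, 4) for v in m2))  # ≈ (0.6379, 0.7194); intermediate rounding shifts a digit
```

As the text notes, with n this large the two intervals are nearly identical.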
If we use method 2, we can obtain

[0.68 + 1.96²/((2)(500))] / [1 + 1.96²/500] ± [1.96/(1 + 1.96²/500)] √((0.68)(0.32)/500 + 1.96²/((4)(500²))) = 0.6786 ± 0.0408,

which simplifies to 0.6378 < p < 0.7194. Apparently, when n is large (500 here), both methods yield very similar results.

If p is the center value of a 100(1 − α)% confidence interval, then p̂ estimates p without error. Most of the time, however, p̂ will not be exactly equal to p and the point estimate will be in error. The size of this error will be the positive difference that separates p and p̂, and we can be 100(1 − α)% confident that this difference will not exceed zα/2 √(p̂q̂/n). We can readily see this if we draw a diagram of a typical confidence interval, as in Figure 9.6. Here we use method 1 to estimate the error.

[Figure 9.6: Error in estimating p by p̂ — a typical confidence interval p̂ − zα/2 √(p̂q̂/n) < p < p̂ + zα/2 √(p̂q̂/n), with the error shown as the distance between p and p̂.]

Theorem 9.3: If p̂ is used as an estimate of p, we can be 100(1 − α)% confident that the error will not exceed zα/2 √(p̂q̂/n).

In Example 9.14, we are 95% confident that the sample proportion p̂ = 0.68 differs from the true proportion p by an amount not exceeding 0.04.

Choice of Sample Size

Let us now determine how large a sample is necessary to ensure that the error in estimating p will be less than a specified amount e. By Theorem 9.3, we must choose n such that zα/2 √(p̂q̂/n) = e.

Theorem 9.4: If p̂ is used as an estimate of p, we can be 100(1 − α)% confident that the error will be less than a specified amount e when the sample size is approximately

n = z²α/2 p̂q̂ / e².

Theorem 9.4 is somewhat misleading in that we must use p̂ to determine the sample size n, but p̂ is computed from the sample. If a crude estimate of p can be made without taking a sample, this value can be used to determine n. Lacking such an estimate, we could take a preliminary sample of size n ≥ 30 to provide an estimate of p.
Using Theorem 9.4, we could determine approximately how many observations are needed to provide the desired degree of accuracy. Note that fractional values of n are rounded up to the next whole number.
Example 9.15: How large a sample is required if we want to be 95% confident that our estimate of p in Example 9.14 is within 0.02 of the true value?

Solution: Let us treat the 500 families as a preliminary sample, providing an estimate p̂ = 0.68. Then, by Theorem 9.4,

n = (1.96)²(0.68)(0.32)/(0.02)² = 2089.8 ≈ 2090.

Therefore, if we base our estimate of p on a random sample of size 2090, we can be 95% confident that our sample proportion will not differ from the true proportion by more than 0.02.

Occasionally, it will be impractical to obtain an estimate of p to be used for determining the sample size for a specified degree of confidence. If this happens, an upper bound for n is established by noting that p̂q̂ = p̂(1 − p̂), which must be at most 1/4, since p̂ must lie between 0 and 1. This fact may be verified by completing the square. Hence

p̂(1 − p̂) = −(p̂² − p̂) = 1/4 − (p̂² − p̂ + 1/4) = 1/4 − (p̂ − 1/2)²,

which is always less than 1/4 except when p̂ = 1/2, and then p̂q̂ = 1/4. Therefore, if we substitute p̂ = 1/2 into the formula for n in Theorem 9.4 when, in fact, p actually differs from 1/2, n will turn out to be larger than necessary for the specified degree of confidence; as a result, our degree of confidence will increase.

Theorem 9.5: If p̂ is used as an estimate of p, we can be at least 100(1 − α)% confident that the error will not exceed a specified amount e when the sample size is

n = z²α/2 / (4e²).

Example 9.16: How large a sample is required if we want to be at least 95% confident that our estimate of p in Example 9.14 is within 0.02 of the true value?

Solution: Unlike in Example 9.15, we shall now assume that no preliminary sample has been taken to provide an estimate of p. Consequently, we can be at least 95% confident that our sample proportion will not differ from the true proportion by more than 0.02 if we choose a sample of size

n = (1.96)²/((4)(0.02)²) = 2401.
Comparing the results of Examples 9.15 and 9.16, we see that information concerning p, provided by a preliminary sample or from experience, enables us to choose a smaller sample while maintaining our required degree of accuracy.
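Theorems 9.4 and 9.5 translate directly into a small helper. This sketch uses a hypothetical function name of our choosing; the epsilon guard is only a floating-point precaution, not part of the theorems:

```python
import math

def sample_size(e, z, phat=None):
    """Sample size so the error in estimating p is within e, with confidence set by z.

    With a preliminary estimate phat, apply Theorem 9.4: n = z^2 * phat*qhat / e^2.
    Without one, apply Theorem 9.5's conservative bound phat*qhat <= 1/4.
    Fractional values of n are rounded up to the next whole number.
    """
    pq = phat * (1 - phat) if phat is not None else 0.25
    return math.ceil(z**2 * pq / e**2 - 1e-9)  # epsilon guards float noise at integer boundaries

print(sample_size(0.02, 1.96, phat=0.68))  # Example 9.15: 2090
print(sample_size(0.02, 1.96))             # Example 9.16: 2401
```

The no-estimate call reproduces the conservative answer, which is always at least as large as the answer that uses a preliminary p̂.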
9.11 Two Samples: Estimating the Difference between Two Proportions

Consider the problem where we wish to estimate the difference between two binomial parameters p1 and p2. For example, p1 might be the proportion of smokers with lung cancer and p2 the proportion of nonsmokers with lung cancer, and the problem is to estimate the difference between these two proportions. First, we select independent random samples of sizes n1 and n2 from the two binomial populations with means n1p1 and n2p2 and variances n1p1q1 and n2p2q2, respectively; then we determine the numbers x1 and x2 of people in each sample with lung cancer and form the proportions p̂1 = x1/n1 and p̂2 = x2/n2. A point estimator of the difference between the two proportions, p1 − p2, is given by the statistic P̂1 − P̂2. Therefore, the difference of the sample proportions, p̂1 − p̂2, will be used as the point estimate of p1 − p2.

A confidence interval for p1 − p2 can be established by considering the sampling distribution of P̂1 − P̂2. From Section 9.10 we know that P̂1 and P̂2 are each approximately normally distributed, with means p1 and p2 and variances p1q1/n1 and p2q2/n2, respectively. Choosing independent samples from the two populations ensures that the variables P̂1 and P̂2 will be independent, and then by the reproductive property of the normal distribution established in Theorem 7.11, we conclude that P̂1 − P̂2 is approximately normally distributed with mean

μP̂1−P̂2 = p1 − p2

and variance

σ²P̂1−P̂2 = p1q1/n1 + p2q2/n2.

Therefore, we can assert that

P(−zα/2 < Z < zα/2) = 1 − α, where Z = [(P̂1 − P̂2) − (p1 − p2)] / √(p1q1/n1 + p2q2/n2),

and zα/2 is the value above which we find an area of α/2 under the standard normal curve. Substituting for Z, we write

P(−zα/2 < [(P̂1 − P̂2) − (p1 − p2)] / √(p1q1/n1 + p2q2/n2) < zα/2) = 1 − α.
After performing the usual mathematical manipulations, we replace p1, p2, q1, and q2 under the radical sign by their estimates p̂1 = x1/n1, p̂2 = x2/n2, q̂1 = 1 − p̂1, and q̂2 = 1 − p̂2, provided that n1p̂1, n1q̂1, n2p̂2, and n2q̂2 are all greater than or equal to 5, and the following approximate 100(1 − α)% confidence interval for p1 − p2 is obtained.
Large-Sample Confidence Interval for p1 − p2
If p̂1 and p̂2 are the proportions of successes in random samples of sizes n1 and n2, respectively, q̂1 = 1 − p̂1, and q̂2 = 1 − p̂2, an approximate 100(1 − α)% confidence interval for the difference of two binomial parameters, p1 − p2, is given by

(p̂1 − p̂2) − zα/2 √(p̂1q̂1/n1 + p̂2q̂2/n2) < p1 − p2 < (p̂1 − p̂2) + zα/2 √(p̂1q̂1/n1 + p̂2q̂2/n2),

where zα/2 is the z-value leaving an area of α/2 to the right.

Example 9.17: A certain change in a process for manufacturing component parts is being considered. Samples are taken under both the existing and the new process so as to determine if the new process results in an improvement. If 75 of 1500 items from the existing process are found to be defective and 80 of 2000 items from the new process are found to be defective, find a 90% confidence interval for the true difference in the proportion of defectives between the existing and the new process.

Solution: Let p1 and p2 be the true proportions of defectives for the existing and new processes, respectively. Hence, p̂1 = 75/1500 = 0.05 and p̂2 = 80/2000 = 0.04, and the point estimate of p1 − p2 is p̂1 − p̂2 = 0.05 − 0.04 = 0.01. Using Table A.3, we find z0.05 = 1.645. Therefore, substituting into the formula, with

1.645 √((0.05)(0.95)/1500 + (0.04)(0.96)/2000) = 0.0117,

we find the 90% confidence interval to be −0.0017 < p1 − p2 < 0.0217. Since the interval contains the value 0, there is no reason to believe that the new process produces a significant decrease in the proportion of defectives over the existing method.

Up to this point, all confidence intervals presented were of the form

point estimate ± K s.e.(point estimate),

where K is a constant (either t or normal percent point).
This form is valid when the parameter is a mean, a difference between means, a proportion, or a difference between proportions, due to the symmetry of the t- and Z-distributions. However, it does not extend to variances and ratios of variances, which will be discussed in Sections 9.12 and 9.13.
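The boxed interval for p1 − p2, applied to Example 9.17, can be scripted as a short helper (a sketch; the function name is ours, and z = 1.645 corresponds to 90% confidence):

```python
import math

def two_proportion_ci(x1, n1, x2, n2, z):
    """Large-sample confidence interval for p1 - p2 (boxed formula of Section 9.11)."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

# Example 9.17: 75 defectives of 1500 (existing) vs. 80 of 2000 (new), 90% confidence
lo, hi = two_proportion_ci(75, 1500, 80, 2000, z=1.645)
print(round(lo, 4), round(hi, 4))  # -0.0017 0.0217
```

Because the interval straddles 0, the code reaches the same conclusion as the worked solution: no significant decrease in the defective rate.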
Exercises

In this set of exercises, for estimation concerning one proportion, use only method 1 to obtain the confidence intervals, unless instructed otherwise.

9.51 In a random sample of 1000 homes in a certain city, it is found that 228 are heated by oil. Find 99% confidence intervals for the proportion of homes in this city that are heated by oil using both methods presented on page 297.

9.52 Compute 95% confidence intervals, using both methods on page 297, for the proportion of defective items in a process when it is found that a sample of size 100 yields 8 defectives.

9.53 (a) A random sample of 200 voters in a town is selected, and 114 are found to support an annexation suit. Find the 96% confidence interval for the fraction of the voting population favoring the suit.
(b) What can we assert with 96% confidence about the possible size of our error if we estimate the fraction of voters favoring the annexation suit to be 0.57?

9.54 A manufacturer of MP3 players conducts a set of comprehensive tests on the electrical functions of its product. All MP3 players must pass all tests prior to being sold. Of a random sample of 500 MP3 players, 15 failed one or more tests. Find a 90% confidence interval for the proportion of MP3 players from the population that pass all tests.

9.55 A new rocket-launching system is being considered for deployment of small, short-range rockets. The existing system has p = 0.8 as the probability of a successful launch. A sample of 40 experimental launches is made with the new system, and 34 are successful.
(a) Construct a 95% confidence interval for p.
(b) Would you conclude that the new system is better?

9.56 A geneticist is interested in the proportion of African males who have a certain minor blood disorder. In a random sample of 100 African males, 24 are found to be afflicted.
(a) Compute a 99% confidence interval for the proportion of African males who have this blood disorder.
(b) What can we assert with 99% confidence about the possible size of our error if we estimate the proportion of African males with this blood disorder to be 0.24?

9.57 (a) According to a report in the Roanoke Times & World-News, approximately 2/3 of 1600 adults polled by telephone said they think the space shuttle program is a good investment for the country. Find a 95% confidence interval for the proportion of American adults who think the space shuttle program is a good investment for the country.
(b) What can we assert with 95% confidence about the possible size of our error if we estimate the proportion of American adults who think the space shuttle program is a good investment to be 2/3?

9.58 In the newspaper article referred to in Exercise 9.57, 32% of the 1600 adults polled said the U.S. space program should emphasize scientific exploration. How large a sample of adults is needed for the poll if one wishes to be 95% confident that the estimated percentage will be within 2% of the true percentage?

9.59 How large a sample is needed if we wish to be 96% confident that our sample proportion in Exercise 9.53 will be within 0.02 of the true fraction of the voting population?

9.60 How large a sample is needed if we wish to be 99% confident that our sample proportion in Exercise 9.51 will be within 0.05 of the true proportion of homes in the city that are heated by oil?

9.61 How large a sample is needed in Exercise 9.52 if we wish to be 98% confident that our sample proportion will be within 0.05 of the true proportion defective?

9.62 A conjecture by a faculty member in the microbiology department at Washington University School of Dental Medicine in St. Louis, Missouri, states that a couple of cups of either green or oolong tea each day will provide sufficient fluoride to protect your teeth from decay.
How large a sample is needed to estimate the percentage of citizens in a certain town who favor having their water fluoridated if one wishes to be at least 99% confident that the estimate is within 1% of the true percentage?

9.63 A study is to be made to estimate the percentage of citizens in a town who favor having their water fluoridated. How large a sample is needed if one wishes to be at least 95% confident that the estimate is within 1% of the true percentage?

9.64 A study is to be made to estimate the proportion of residents of a certain city and its suburbs who favor the construction of a nuclear power plant near the city. How large a sample is needed if one wishes to be at least 95% confident that the estimate is within 0.04 of the true proportion of residents who favor the construction of the nuclear power plant?
9.65 A certain geneticist is interested in the proportion of males and females in the population who have a minor blood disorder. In a random sample of 1000 males, 250 are found to be afflicted, whereas 275 of 1000 females tested appear to have the disorder. Compute a 95% confidence interval for the difference between the proportions of males and females who have the blood disorder.

9.66 Ten engineering schools in the United States were surveyed. The sample contained 250 electrical engineers, 80 being women; 175 chemical engineers, 40 being women. Compute a 90% confidence interval for the difference between the proportions of women in these two fields of engineering. Is there a significant difference between the two proportions?

9.67 A clinical trial was conducted to determine if a certain type of inoculation has an effect on the incidence of a certain disease. A sample of 1000 rats was kept in a controlled environment for a period of 1 year, and 500 of the rats were given the inoculation. In the group not inoculated, there were 120 incidences of the disease, while 98 of the rats in the inoculated group contracted it. If p1 is the probability of incidence of the disease in uninoculated rats and p2 the probability of incidence in inoculated rats, compute a 90% confidence interval for p1 − p2.

9.68 In the study Germination and Emergence of Broccoli, conducted by the Department of Horticulture at Virginia Tech, a researcher found that at 5°C, 10 broccoli seeds out of 20 germinated, while at 15°C, 15 out of 20 germinated. Compute a 95% confidence interval for the difference between the proportions of germination at the two different temperatures and decide if there is a significant difference.

9.69 A survey of 1000 students found that 274 chose professional baseball team A as their favorite team. In a similar survey involving 760 students, 240 of them chose team A as their favorite.
Compute a 95% confidence interval for the difference between the proportions of students favoring team A in the two surveys. Is there a significant difference?

9.70 According to USA Today (March 17, 1997), women made up 33.7% of the editorial staff at local TV stations in the United States in 1990 and 36.2% in 1994. Assume 20 new employees were hired as editorial staff.
(a) Estimate the number that would have been women in 1990 and 1994, respectively.
(b) Compute a 95% confidence interval to see if there is evidence that the proportion of women hired as editorial staff was higher in 1994 than in 1990.

9.12 Single Sample: Estimating the Variance

If a sample of size n is drawn from a normal population with variance σ² and the sample variance s² is computed, we obtain a value of the statistic S². This computed sample variance is used as a point estimate of σ². Hence, the statistic S² is called an estimator of σ².

An interval estimate of σ² can be established by using the statistic

X² = (n − 1)S²/σ².

According to Theorem 8.4, the statistic X² has a chi-squared distribution with n − 1 degrees of freedom when samples are chosen from a normal population. We may write (see Figure 9.7)

P(χ²_{1−α/2} < X² < χ²_{α/2}) = 1 − α,

where χ²_{1−α/2} and χ²_{α/2} are values of the chi-squared distribution with n − 1 degrees of freedom, leaving areas of 1 − α/2 and α/2, respectively, to the right. Substituting for X², we write

P(χ²_{1−α/2} < (n − 1)S²/σ² < χ²_{α/2}) = 1 − α.
Figure 9.7: P(χ²_{1−α/2} < X² < χ²_{α/2}) = 1 − α.

Dividing each term in the inequality by (n − 1)S² and then inverting each term (thereby changing the sense of the inequalities), we obtain

P((n − 1)S²/χ²_{α/2} < σ² < (n − 1)S²/χ²_{1−α/2}) = 1 − α.

For a random sample of size n from a normal population, the sample variance s² is computed, and the following 100(1 − α)% confidence interval for σ² is obtained.

Confidence Interval for σ²: If s² is the variance of a random sample of size n from a normal population, a 100(1 − α)% confidence interval for σ² is

(n − 1)s²/χ²_{α/2} < σ² < (n − 1)s²/χ²_{1−α/2},

where χ²_{α/2} and χ²_{1−α/2} are χ²-values with v = n − 1 degrees of freedom, leaving areas of α/2 and 1 − α/2, respectively, to the right.

An approximate 100(1 − α)% confidence interval for σ is obtained by taking the square root of each endpoint of the interval for σ².

Example 9.18: The following are the weights, in decagrams, of 10 packages of grass seed distributed by a certain company: 46.4, 46.1, 45.8, 47.0, 46.1, 45.9, 45.8, 46.9, 45.2, and 46.0. Find a 95% confidence interval for the variance of the weights of all such packages of grass seed distributed by this company, assuming a normal population.

Solution: First we find

s² = [n Σxᵢ² − (Σxᵢ)²] / [n(n − 1)] = [(10)(21,273.12) − (461.2)²] / [(10)(9)] = 0.286.
To obtain a 95% confidence interval, we choose α = 0.05. Then, using Table A.5 with v = 9 degrees of freedom, we find χ²_{0.025} = 19.023 and χ²_{0.975} = 2.700. Therefore, the 95% confidence interval for σ² is

(9)(0.286)/19.023 < σ² < (9)(0.286)/2.700,

or simply 0.135 < σ² < 0.953.

9.13 Two Samples: Estimating the Ratio of Two Variances

A point estimate of the ratio of two population variances σ₁²/σ₂² is given by the ratio s₁²/s₂² of the sample variances. Hence, the statistic S₁²/S₂² is called an estimator of σ₁²/σ₂². If σ₁² and σ₂² are the variances of normal populations, we can establish an interval estimate of σ₁²/σ₂² by using the statistic

F = (σ₂²S₁²)/(σ₁²S₂²).

According to Theorem 8.8, the random variable F has an F-distribution with v₁ = n₁ − 1 and v₂ = n₂ − 1 degrees of freedom. Therefore, we may write (see Figure 9.8)

P[f_{1−α/2}(v₁, v₂) < F < f_{α/2}(v₁, v₂)] = 1 − α,

where f_{1−α/2}(v₁, v₂) and f_{α/2}(v₁, v₂) are the values of the F-distribution with v₁ and v₂ degrees of freedom, leaving areas of 1 − α/2 and α/2, respectively, to the right.

Figure 9.8: P[f_{1−α/2}(v₁, v₂) < F < f_{α/2}(v₁, v₂)] = 1 − α.
Substituting for F, we write

P[f_{1−α/2}(v₁, v₂) < (σ₂²S₁²)/(σ₁²S₂²) < f_{α/2}(v₁, v₂)] = 1 − α.

Multiplying each term in the inequality by S₂²/S₁² and then inverting each term, we obtain

P[(S₁²/S₂²) · 1/f_{α/2}(v₁, v₂) < σ₁²/σ₂² < (S₁²/S₂²) · 1/f_{1−α/2}(v₁, v₂)] = 1 − α.

The results of Theorem 8.7 enable us to replace the quantity f_{1−α/2}(v₁, v₂) by 1/f_{α/2}(v₂, v₁). Therefore,

P[(S₁²/S₂²) · 1/f_{α/2}(v₁, v₂) < σ₁²/σ₂² < (S₁²/S₂²) f_{α/2}(v₂, v₁)] = 1 − α.

For any two independent random samples of sizes n₁ and n₂ selected from two normal populations, the ratio of the sample variances s₁²/s₂² is computed, and the following 100(1 − α)% confidence interval for σ₁²/σ₂² is obtained.

Confidence Interval for σ₁²/σ₂²: If s₁² and s₂² are the variances of independent samples of sizes n₁ and n₂, respectively, from normal populations, then a 100(1 − α)% confidence interval for σ₁²/σ₂² is

(s₁²/s₂²) · 1/f_{α/2}(v₁, v₂) < σ₁²/σ₂² < (s₁²/s₂²) f_{α/2}(v₂, v₁),

where f_{α/2}(v₁, v₂) is an f-value with v₁ = n₁ − 1 and v₂ = n₂ − 1 degrees of freedom, leaving an area of α/2 to the right, and f_{α/2}(v₂, v₁) is a similar f-value with v₂ = n₂ − 1 and v₁ = n₁ − 1 degrees of freedom.

As in Section 9.12, an approximate 100(1 − α)% confidence interval for σ₁/σ₂ is obtained by taking the square root of each endpoint of the interval for σ₁²/σ₂².

Example 9.19: A confidence interval for the difference in the mean orthophosphorus contents, measured in milligrams per liter, at two stations on the James River was constructed in Example 9.12 on page 290 by assuming the normal population variances to be unequal. Justify this assumption by constructing 98% confidence intervals for σ₁²/σ₂² and for σ₁/σ₂, where σ₁² and σ₂² are the variances of the populations of orthophosphorus contents at station 1 and station 2, respectively.

Solution: From Example 9.12, we have n₁ = 15, n₂ = 12, s₁ = 3.07, and s₂ = 0.80. For a 98% confidence interval, α = 0.02.
Interpolating in Table A.6, we find f_{0.01}(14, 11) ≈ 4.30 and f_{0.01}(11, 14) ≈ 3.87. Therefore, the 98% confidence interval for σ₁²/σ₂² is

(3.07²/0.80²)(1/4.30) < σ₁²/σ₂² < (3.07²/0.80²)(3.87),
which simplifies to

3.425 < σ₁²/σ₂² < 56.991.

Taking square roots of the confidence limits, we find that a 98% confidence interval for σ₁/σ₂ is

1.851 < σ₁/σ₂ < 7.549.

Since this interval does not allow for the possibility of σ₁/σ₂ being equal to 1, we were correct in assuming that σ₁ ≠ σ₂, or σ₁² ≠ σ₂², in Example 9.12.

Exercises

9.71 A manufacturer of car batteries claims that the batteries will last, on average, 3 years with a variance of 1 year. If 5 of these batteries have lifetimes of 1.9, 2.4, 3.0, 3.5, and 4.2 years, construct a 95% confidence interval for σ² and decide if the manufacturer's claim that σ² = 1 is valid. Assume the population of battery lives to be approximately normally distributed.

9.72 A random sample of 20 students yielded a mean of x̄ = 72 and a variance of s² = 16 for scores on a college placement test in mathematics. Assuming the scores to be normally distributed, construct a 98% confidence interval for σ².

9.73 Construct a 95% confidence interval for σ² in Exercise 9.9 on page 283.

9.74 Construct a 99% confidence interval for σ² in Exercise 9.11 on page 283.

9.75 Construct a 99% confidence interval for σ in Exercise 9.12 on page 283.

9.76 Construct a 90% confidence interval for σ in Exercise 9.13 on page 283.

9.77 Construct a 98% confidence interval for σ₁/σ₂ in Exercise 9.42 on page 295, where σ₁ and σ₂ are, respectively, the standard deviations for the distances traveled per liter of fuel by the Volkswagen and Toyota mini-trucks.

9.78 Construct a 90% confidence interval for σ₁²/σ₂² in Exercise 9.43 on page 295. Were we justified in assuming that σ₁² = σ₂² when we constructed the confidence interval for μ₁ − μ₂?

9.79 Construct a 90% confidence interval for σ₁²/σ₂² in Exercise 9.46 on page 295. Should we have assumed σ₁² = σ₂² in constructing our confidence interval for μ_I − μ_II?

9.80 Construct a 95% confidence interval for σ_A²/σ_B² in Exercise 9.49 on page 295.
Should the equal-variance assumption be used?

9.14 Maximum Likelihood Estimation (Optional)

Often the estimators of parameters have been those that appeal to intuition. The estimator X̄ certainly seems reasonable as an estimator of a population mean μ. The virtue of S² as an estimator of σ² is underscored through the discussion of unbiasedness in Section 9.3. The estimator for a binomial parameter p is merely a sample proportion, which, of course, is an average and appeals to common sense. But there are many situations in which it is not at all obvious what the proper estimator should be. As a result, there is much to be learned by the student of statistics concerning different philosophies that produce different methods of estimation. In this section, we deal with the method of maximum likelihood.

Maximum likelihood estimation is one of the most important approaches to estimation in all of statistical inference. We will not give a thorough development of the method. Rather, we will attempt to communicate the philosophy of maximum likelihood and illustrate with examples that relate to other estimation problems discussed in this chapter.
The Likelihood Function

As the name implies, the method of maximum likelihood is that for which the likelihood function is maximized. The likelihood function is best illustrated through the use of an example with a discrete distribution and a single parameter. Denote by X₁, X₂, . . . , Xₙ the independent random variables taken from a discrete probability distribution represented by f(x, θ), where θ is a single parameter of the distribution. Now

L(x₁, x₂, . . . , xₙ; θ) = f(x₁, x₂, . . . , xₙ; θ) = f(x₁, θ)f(x₂, θ) · · · f(xₙ, θ)

is the joint distribution of the random variables, often referred to as the likelihood function. Note that the variable of the likelihood function is θ, not x. Denote by x₁, x₂, . . . , xₙ the observed values in a sample. In the case of a discrete random variable, the interpretation is very clear. The quantity L(x₁, x₂, . . . , xₙ; θ), the likelihood of the sample, is the following joint probability:

P(X₁ = x₁, X₂ = x₂, . . . , Xₙ = xₙ | θ),

which is the probability of obtaining the sample values x₁, x₂, . . . , xₙ. For the discrete case, the maximum likelihood estimator is one that results in a maximum value for this joint probability, or maximizes the likelihood of the sample.

Consider a fictitious example where three items from an assembly line are inspected. The items are ruled either defective or nondefective, and thus the Bernoulli process applies. Testing the three items results in two nondefective items followed by a defective item. It is of interest to estimate p, the proportion nondefective in the process. The likelihood of the sample for this illustration is given by

p · p · q = p²q = p² − p³,

where q = 1 − p. Maximum likelihood estimation would give an estimate of p for which the likelihood is maximized. It is clear that if we differentiate the likelihood with respect to p, set the derivative to zero, and solve, we obtain the value

p̂ = 2/3.
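The maximization above can also be checked numerically: evaluate L(p) = p²(1 − p) on a fine grid over [0, 1] and locate the largest value. A small Python sketch of this check (ours, not from the text):

```python
# Likelihood of observing two nondefectives followed by a defective,
# as a function of p, the proportion nondefective: L(p) = p^2 (1 - p).
def likelihood(p: float) -> float:
    return p * p * (1.0 - p)

# Grid search over [0, 1]; calculus gives the exact maximizer p = 2/3.
grid = [i / 10000 for i in range(10001)]
p_hat = max(grid, key=likelihood)
print(p_hat)  # close to 2/3 = 0.6667
```

The grid maximizer agrees with the calculus answer p̂ = 2/3 to the resolution of the grid.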
Now, of course, in this situation p̂ = 2/3 is the sample proportion nondefective and is thus a reasonable estimator of the probability of a nondefective item. The reader should attempt to understand that the philosophy of maximum likelihood estimation evolves from the notion that the reasonable estimator of a parameter based on sample information is that parameter value that produces the largest probability of obtaining the sample. This is, indeed, the interpretation for the discrete case, since the likelihood is the probability of jointly observing the values in the sample. Now, while the interpretation of the likelihood function as a joint probability is confined to the discrete case, the notion of maximum likelihood extends to the estimation of parameters of a continuous distribution. We now present a formal definition of maximum likelihood estimation.
Definition 9.3: Given independent observations x₁, x₂, . . . , xₙ from a probability density function (continuous case) or probability mass function (discrete case) f(x; θ), the maximum likelihood estimator θ̂ is that which maximizes the likelihood function

L(x₁, x₂, . . . , xₙ; θ) = f(x₁; θ)f(x₂; θ) · · · f(xₙ; θ).

Quite often it is convenient to work with the natural log of the likelihood function in finding the maximum of that function. Consider the following example dealing with the parameter μ of a Poisson distribution.

Example 9.20: Consider a Poisson distribution with probability mass function

f(x|μ) = e^(−μ)μˣ/x!, x = 0, 1, 2, . . . .

Suppose that a random sample x₁, x₂, . . . , xₙ is taken from the distribution. What is the maximum likelihood estimate of μ?

Solution: The likelihood function is

L(x₁, x₂, . . . , xₙ; μ) = ∏ᵢ₌₁ⁿ f(xᵢ|μ) = e^(−nμ) μ^(Σᵢ₌₁ⁿ xᵢ) / ∏ᵢ₌₁ⁿ xᵢ!.

Now consider

ln L(x₁, x₂, . . . , xₙ; μ) = −nμ + (Σᵢ₌₁ⁿ xᵢ) ln μ − ln ∏ᵢ₌₁ⁿ xᵢ!,

∂ ln L(x₁, x₂, . . . , xₙ; μ)/∂μ = −n + (Σᵢ₌₁ⁿ xᵢ)/μ.

Solving for μ̂, the maximum likelihood estimator, involves setting the derivative to zero and solving for the parameter. Thus,

μ̂ = (Σᵢ₌₁ⁿ xᵢ)/n = x̄.

The second derivative of the log-likelihood function is negative, which implies that the solution above indeed is a maximum. Since μ is the mean of the Poisson distribution (Chapter 5), the sample average would certainly seem like a reasonable estimator.

The following example shows the use of the method of maximum likelihood for finding estimates of two parameters. We simply find the values of the parameters that maximize (jointly) the likelihood function.

Example 9.21: Consider a random sample x₁, x₂, . . . , xₙ from a normal distribution N(μ, σ). Find the maximum likelihood estimators for μ and σ².
Solution: The likelihood function for the normal distribution is

L(x₁, x₂, . . . , xₙ; μ, σ²) = [1/((2π)^(n/2)(σ²)^(n/2))] exp[−(1/2) Σᵢ₌₁ⁿ ((xᵢ − μ)/σ)²].

Taking logarithms gives us

ln L(x₁, x₂, . . . , xₙ; μ, σ²) = −(n/2) ln(2π) − (n/2) ln σ² − (1/2) Σᵢ₌₁ⁿ ((xᵢ − μ)/σ)².

Hence,

∂ ln L/∂μ = Σᵢ₌₁ⁿ (xᵢ − μ)/σ²

and

∂ ln L/∂σ² = −n/(2σ²) + [1/(2(σ²)²)] Σᵢ₌₁ⁿ (xᵢ − μ)².

Setting both derivatives equal to 0, we obtain

Σᵢ₌₁ⁿ xᵢ − nμ = 0  and  nσ² = Σᵢ₌₁ⁿ (xᵢ − μ)².

Thus, the maximum likelihood estimator of μ is given by

μ̂ = (1/n) Σᵢ₌₁ⁿ xᵢ = x̄,

which is a pleasing result since x̄ has played such an important role in this chapter as a point estimate of μ. On the other hand, the maximum likelihood estimator of σ² is

σ̂² = (1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)².

Checking the second-order partial derivative matrix confirms that the solution results in a maximum of the likelihood function.

It is interesting to note the distinction between the maximum likelihood estimator of σ² and the unbiased estimator S² developed earlier in this chapter. The numerators are identical, of course, and the denominator is the degrees of freedom n − 1 for the unbiased estimator and n for the maximum likelihood estimator. Maximum likelihood estimators do not necessarily enjoy the property of unbiasedness. However, they do have very important asymptotic properties.

Example 9.22: Suppose 10 rats are used in a biomedical study where they are injected with cancer cells and then given a cancer drug that is designed to increase their survival rate. The survival times, in months, are 14, 17, 27, 18, 12, 8, 22, 13, 19, and 12. Assume
that the exponential distribution applies. Give a maximum likelihood estimate of the mean survival time.

Solution: From Chapter 6, we know that the probability density function for the exponential random variable X is

f(x, β) = (1/β)e^(−x/β) for x > 0, and 0 elsewhere.

Thus, the log-likelihood function for the data, given n = 10, is

ln L(x₁, x₂, . . . , x₁₀; β) = −10 ln β − (1/β) Σᵢ₌₁¹⁰ xᵢ.

Setting

∂ ln L/∂β = −10/β + (1/β²) Σᵢ₌₁¹⁰ xᵢ = 0

implies that

β̂ = (1/10) Σᵢ₌₁¹⁰ xᵢ = x̄ = 16.2.

Evaluating the second derivative of the log-likelihood function at the value β̂ above yields a negative value. As a result, the estimator of the parameter β, the population mean, is the sample average x̄.

The following example shows the maximum likelihood estimator for a distribution that does not appear in previous chapters.

Example 9.23: It is known that a sample consisting of the values 12, 11.2, 13.5, 12.3, 13.8, and 11.9 comes from a population with the density function

f(x; θ) = θ/x^(θ+1) for x > 1, and 0 elsewhere,

where θ > 0. Find the maximum likelihood estimate of θ.

Solution: The likelihood function of n observations from this population can be written as

L(x₁, x₂, . . . , xₙ; θ) = ∏ᵢ₌₁ⁿ θ/xᵢ^(θ+1) = θⁿ/(∏ᵢ₌₁ⁿ xᵢ)^(θ+1),

which implies that

ln L(x₁, x₂, . . . , xₙ; θ) = n ln(θ) − (θ + 1) Σᵢ₌₁ⁿ ln(xᵢ).
Setting

0 = ∂ ln L/∂θ = n/θ − Σᵢ₌₁ⁿ ln(xᵢ)

results in

θ̂ = n / Σᵢ₌₁ⁿ ln(xᵢ) = 6 / [ln(12) + ln(11.2) + ln(13.5) + ln(12.3) + ln(13.8) + ln(11.9)] = 0.3970.

Since the second derivative of ln L is −n/θ², which is always negative, the likelihood function does achieve its maximum value at θ̂.

Additional Comments Concerning Maximum Likelihood Estimation

A thorough discussion of the properties of maximum likelihood estimation is beyond the scope of this book and is usually a major topic of a course in the theory of statistical inference. The method of maximum likelihood allows the analyst to make use of knowledge of the distribution in determining an appropriate estimator. The method of maximum likelihood cannot be applied without knowledge of the underlying distribution. We learned in Example 9.21 that the maximum likelihood estimator is not necessarily unbiased. The maximum likelihood estimator is unbiased asymptotically, or in the limit; that is, the amount of bias approaches zero as the sample size becomes large. Earlier in this chapter the notion of efficiency was discussed, efficiency being linked to the variance property of an estimator. Maximum likelihood estimators possess desirable variance properties in the limit. The reader should consult Lehmann and D'Abrera (1998) for details.

Exercises

9.81 Suppose that there are n trials x₁, x₂, . . . , xₙ from a Bernoulli process with parameter p, the probability of a success. That is, the probability of r successes is given by (n choose r) pʳ(1 − p)ⁿ⁻ʳ. Work out the maximum likelihood estimator for the parameter p.

9.82 Consider the lognormal distribution with the density function given in Section 6.9. Suppose we have a random sample x₁, x₂, . . . , xₙ from a lognormal distribution.
(a) Write out the likelihood function.
(b) Develop the maximum likelihood estimators of μ and σ².

9.83 Consider a random sample of x₁, . . .
, xₙ coming from the gamma distribution discussed in Section 6.6. Suppose the parameter α is known, say α = 5, and determine the maximum likelihood estimator for the parameter β.

9.84 Consider a random sample of x₁, x₂, . . . , xₙ observations from a Weibull distribution with parameters α and β and density function

f(x) = αβx^(β−1)e^(−αx^β) for x > 0, and 0 elsewhere,

for α, β > 0.
(a) Write out the likelihood function.
(b) Write out the equations that, when solved, give the maximum likelihood estimators of α and β.

9.85 Consider a random sample of x₁, . . . , xₙ from a uniform distribution U(0, θ) with unknown parameter θ, where θ > 0. Determine the maximum likelihood estimator of θ.

9.86 Consider the independent observations x₁, x₂, . . . , xₙ from the gamma distribution discussed in Section 6.6.
(a) Write out the likelihood function.
(b) Write out a set of equations that, when solved, give the maximum likelihood estimators of α and β.

9.87 Consider a hypothetical experiment where a man with a fungus uses an antifungal drug and is cured. Consider this, then, a sample of one from a Bernoulli distribution with probability function

f(x) = pˣq^(1−x), x = 0, 1,

where p is the probability of a success (cure) and q = 1 − p. Now, of course, the sample information gives x = 1. Write out a development that shows that p̂ = 1.0 is the maximum likelihood estimator of the probability of a cure.

9.88 Consider the observation X from the negative binomial distribution given in Section 5.4. Find the maximum likelihood estimator for p, assuming k is known.

Review Exercises

9.89 Consider two estimators of σ² for a sample x₁, x₂, . . . , xₙ, which is drawn from a normal distribution with mean μ and variance σ². The estimators are the unbiased estimator s² = [1/(n − 1)] Σᵢ₌₁ⁿ (xᵢ − x̄)² and the maximum likelihood estimator σ̂² = (1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)². Discuss the variance properties of these two estimators.

9.90 According to the Roanoke Times, McDonald's sold 42.1% of the market share of hamburgers. A random sample of 75 burgers sold resulted in 28 of them being from McDonald's. Use material in Section 9.10 to determine if this information supports the claim in the Roanoke Times.

9.91 It is claimed that a new diet will reduce a person's weight by 4.5 kilograms on average in a period of 2 weeks. The weights of 7 women who followed this diet were recorded before and after the 2-week period.

Woman  Weight Before  Weight After
1      58.5           60.0
2      60.3           54.9
3      61.7           58.1
4      69.0           62.1
5      64.0           58.5
6      62.6           59.9
7      56.7           54.4

Test the claim about the diet by computing a 95% confidence interval for the mean difference in weights. Assume the differences of weights to be approximately normally distributed.
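For paired data such as those in Review Exercise 9.91, the interval is d̄ ± t_{α/2} s_d/√n, computed from the before-minus-after differences. A stdlib-only Python sketch of that computation (the tabled value t_{0.025} = 2.447 for v = 6 degrees of freedom comes from Table A.4; variable names are ours):

```python
from math import sqrt
from statistics import mean, stdev

before = [58.5, 60.3, 61.7, 69.0, 64.0, 62.6, 56.7]
after  = [60.0, 54.9, 58.1, 62.1, 58.5, 59.9, 54.4]

d = [b - a for b, a in zip(before, after)]  # before-minus-after differences
n = len(d)
t_crit = 2.447                              # t_{0.025} with v = n - 1 = 6 df (Table A.4)
half_width = t_crit * stdev(d) / sqrt(n)
lo, hi = mean(d) - half_width, mean(d) + half_width
print(f"95% CI for the mean weight loss: ({lo:.2f}, {hi:.2f})")  # roughly (0.99, 6.12)
```

The interval is for the mean weight loss in kilograms; comparing it with the claimed 4.5 kg is the point of the exercise.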
9.92 A study was undertaken at Virginia Tech to determine if fire can be used as a viable management tool to increase the amount of forage available to deer during the critical months in late winter and early spring. Calcium is a required element for plants and animals. The amount taken up and stored in plants is closely correlated to the amount present in the soil. It was hypothesized that a fire may change the calcium levels present in the soil and thus affect the amount available to deer. A large tract of land in the Fishburn Forest was selected for a prescribed burn. Soil samples were taken from 12 plots of equal area just prior to the burn and analyzed for calcium. Postburn calcium levels were analyzed from the same plots. These values, in kilograms per plot, are presented in the following table:

Calcium Level (kg/plot)
Plot  Preburn  Postburn
1     50       9
2     50       18
3     82       45
4     64       18
5     82       18
6     73       9
7     77       32
8     54       9
9     23       18
10    45       9
11    36       9
12    54       9

Construct a 95% confidence interval for the mean difference in calcium levels in the soil prior to and after the prescribed burn. Assume the distribution of differences in calcium levels to be approximately normal.

9.93 A health spa claims that a new exercise program will reduce a person's waist size by 2 centimeters on average over a 5-day period. The waist sizes, in centimeters, of 6 men who participated in this exercise program are recorded before and after the 5-day period in the following table:

Man  Waist Size Before  Waist Size After
1    90.4               91.7
2    95.5               93.9
3    98.7               97.4
4    115.9              112.8
5    104.0              101.3
6    85.6               84.0
By computing a 95% confidence interval for the mean reduction in waist size, determine whether the health spa's claim is valid. Assume the distribution of differences in waist sizes before and after the program to be approximately normal.

9.94 The Department of Civil Engineering at Virginia Tech compared a modified (M-5 hr) assay technique for recovering fecal coliforms in stormwater runoff from an urban area to a most probable number (MPN) technique. A total of 12 runoff samples were collected and analyzed by the two techniques. Fecal coliform counts per 100 milliliters are recorded in the following table:

Sample  MPN Count  M-5 hr Count
1       2300       2010
2       1200       930
3       450        400
4       210        436
5       270        4100
6       450        2090
7       154        219
8       179        169
9       192        194
10      230        174
11      340        274
12      194        183

Construct a 90% confidence interval for the difference in the mean fecal coliform counts between the M-5 hr and the MPN techniques. Assume that the count differences are approximately normally distributed.

9.95 An experiment was conducted to determine whether surface finish has an effect on the endurance limit of steel. There is a theory that polishing increases the average endurance limit (for reverse bending). From a practical point of view, polishing should not have any effect on the standard deviation of the endurance limit, which is known from numerous endurance limit experiments to be 4000 psi. An experiment was performed on 0.4% carbon steel using both unpolished and polished smooth-turned specimens. The data are as follows:

Endurance Limit (psi)
Polished 0.4% Carbon  Unpolished 0.4% Carbon
85,500                82,600
91,900                82,400
89,400                81,700
84,000                79,500
89,900                79,400
78,700                69,800
87,500                79,900
83,100                83,400

Find a 95% confidence interval for the difference between the population means for the two methods, assuming that the populations are approximately normally distributed.
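When a common standard deviation is known, as in Exercise 9.95 (σ = 4000 psi), the two-sample interval is (x̄₁ − x̄₂) ± z_{α/2} σ√(1/n₁ + 1/n₂). A stdlib-only Python sketch of this arithmetic (variable names are ours; the column pairing follows the table above):

```python
from math import sqrt
from statistics import NormalDist, mean

polished   = [85500, 91900, 89400, 84000, 89900, 78700, 87500, 83100]
unpolished = [82600, 82400, 81700, 79500, 79400, 69800, 79900, 83400]
sigma = 4000.0                   # std dev known from prior endurance experiments (psi)

diff = mean(polished) - mean(unpolished)
z = NormalDist().inv_cdf(0.975)  # z_{0.025} for a 95% interval
se = sigma * sqrt(1 / len(polished) + 1 / len(unpolished))
print(f"95% CI: ({diff - z * se:.0f}, {diff + z * se:.0f}) psi")
```

Because both sample sizes are 8 and σ is the same for both groups, the standard error reduces to σ√(2/8) = σ/2.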
9.96 An anthropologist is interested in the proportion of individuals in two Indian tribes with double occipital hair whorls. Suppose that independent samples are taken from each of the two tribes, and it is found that 24 of 100 Indians from tribe A and 36 of 120 Indians from tribe B possess this characteristic. Construct a 95% confidence interval for the difference p_B − p_A between the proportions of these two tribes with occipital hair whorls.

9.97 A manufacturer of electric irons produces these items in two plants. Both plants have the same suppliers of small parts. A saving can be made by purchasing thermostats for plant B from a local supplier. A single lot was purchased from the local supplier, and a test was conducted to see whether or not these new thermostats were as accurate as the old. The thermostats were tested on the irons on the 550°F setting, and the actual temperatures were read to the nearest 0.1°F with a thermocouple. The data are as follows:

New Supplier (°F):
530.3 559.3 549.4 544.0 551.7 566.3 549.9 556.9 536.7 558.8 538.8 543.3 559.1 555.0 538.6 551.1 565.4 554.9 550.0 554.9 554.7 536.1 569.1

Old Supplier (°F):
559.7 534.7 554.8 545.0 544.6 538.0 550.7 563.1 551.1 553.8 538.8 564.6 554.5 553.0 538.4 548.3 552.9 535.1 555.0 544.8 558.4 548.7 560.3

Find 95% confidence intervals for σ₁²/σ₂² and for σ₁/σ₂, where σ₁² and σ₂² are the population variances of the thermostat readings for the new and old suppliers, respectively.

9.98 It is argued that the resistance of wire A is greater than the resistance of wire B. An experiment on the wires shows the following results (in ohms):

Wire A: 0.140 0.138 0.143 0.142 0.144 0.137
Wire B: 0.135 0.140 0.136 0.142 0.138 0.140

Assuming equal variances, what conclusions do you draw? Justify your answer.

9.99 An alternative form of estimation is accomplished through the method of moments.
This method involves equating the population mean and variance to the corresponding sample mean x̄ and sample variance
s² and solving for the parameters, the results being the moment estimators. In the case of a single parameter, only the mean is used. Give an argument that in the case of the Poisson distribution the maximum likelihood estimator and the moment estimator are the same.

9.100 Specify the moment estimators for μ and σ² for the normal distribution.

9.101 Specify the moment estimators for μ and σ² for the lognormal distribution.

9.102 Specify the moment estimators for α and β for the gamma distribution.

9.103 A survey was done with the hope of comparing salaries of chemical plant managers employed in two areas of the country, the northern and west central regions. An independent random sample of 300 plant managers was selected from each of the two regions. These managers were asked their annual salaries. The results are as follows:

North: x̄₁ = $102,300, s₁ = $5700
West Central: x̄₂ = $98,500, s₂ = $3800

(a) Construct a 99% confidence interval for μ₁ − μ₂, the difference in the mean salaries.
(b) What assumption did you make in (a) about the distribution of annual salaries for the two regions? Is the assumption of normality necessary? Why or why not?
(c) What assumption did you make about the two variances? Is the assumption of equality of variances reasonable? Explain!

9.104 Consider Review Exercise 9.103. Let us assume that the data have not been collected yet and that previous statistics suggest that σ₁ = σ₂ = $4000. Are the sample sizes in Review Exercise 9.103 sufficient to produce a 95% confidence interval on μ₁ − μ₂ having a width of only $1000? Show all work.

9.105 A labor union is becoming defensive about gross absenteeism by its members. The union leaders had always claimed that, in a typical month, 95% of its members were absent less than 10 hours. The union decided to check this by monitoring a random sample of 300 of its members. The number of hours absent was recorded for each of the 300 members.
The results were x̄ = 6.5 hours and s = 2.5 hours. Use the data to respond to this claim, using a one-sided tolerance limit and choosing the confidence level to be 99%. Be sure to interpret what you learn from the tolerance limit calculation.

9.106 A random sample of 30 firms dealing in wireless products was selected to determine the proportion of such firms that have implemented new software to improve productivity. It turned out that 8 of the 30 had implemented such software. Find a 95% confidence interval on p, the true proportion of such firms that have implemented new software.

9.107 Refer to Review Exercise 9.106. Suppose there is concern about whether the point estimate p̂ = 8/30 is accurate enough because the confidence interval around p is not sufficiently narrow. Using p̂ as the estimate of p, how many companies would need to be sampled in order to have a 95% confidence interval with a width of only 0.05?

9.108 A manufacturer turns out a product item that is labeled either "defective" or "not defective." In order to estimate the proportion defective, a random sample of 100 items is taken from production, and 10 are found to be defective. Following implementation of a quality improvement program, the experiment is conducted again. A new sample of 100 is taken, and this time only 6 are found to be defective.
(a) Give a 95% confidence interval on p₁ − p₂, where p₁ is the population proportion defective before improvement and p₂ is the proportion defective after improvement.
(b) Is there information in the confidence interval found in (a) that would suggest that p₁ > p₂? Explain.

9.109 A machine is used to fill boxes with product in an assembly line operation. Much concern centers around the variability in the number of ounces of product in a box. The standard deviation in weight of product is known to be 0.3 ounce.
An improvement is implemented, after which a random sample of 20 boxes is selected and the sample variance is found to be 0.045 ounce². Find a 95% confidence interval on the variance in the weight of the product. Does it appear from the range of the confidence interval that the improvement of the process enhanced quality as far as variability is concerned? Assume normality on the distribution of weights of product.

9.110 A consumer group is interested in comparing operating costs for two different types of automobile engines. The group is able to find 15 owners whose cars have engine type A and 15 whose cars have engine type B. All 30 owners bought their cars at roughly the same time, and all have kept good records for a certain 12-month period. In addition, these owners drove roughly the same number of miles. The cost statistics are ȳ_A = $87.00/1000 miles, ȳ_B = $75.00/1000 miles, s_A = $5.99, and s_B = $4.85. Compute a 95% confidence interval to estimate μ_A − μ_B, the difference in
the mean operating costs. Assume normality and equal variances.

9.111 Consider the statistic Sp², the pooled estimate of σ² discussed in Section 9.8. It is used when one is willing to assume that σ₁² = σ₂² = σ². Show that the estimator is unbiased for σ² [i.e., show that E(Sp²) = σ²]. You may make use of results from any theorem or example in this chapter.

9.112 A group of human factors researchers is concerned about reaction to a stimulus by airplane pilots in a certain cockpit arrangement. An experiment was conducted in a simulation laboratory, and 15 pilots were used with average reaction time of 3.2 seconds and a sample standard deviation of 0.6 second. It is of interest to characterize the extreme (i.e., the worst-case scenario). To that end, do the following:
(a) Give a particularly important one-sided 99% confidence bound on the mean reaction time. What assumption, if any, must you make on the distribution of reaction times?
(b) Give a 99% one-sided prediction interval and give an interpretation of what it means. Must you make an assumption about the distribution of reaction times to compute this bound?
(c) Compute a one-sided tolerance bound with 99% confidence that involves 95% of reaction times. Again, give an interpretation and assumptions about the distribution, if any. (Note: The one-sided tolerance limit values are also included in Table A.7.)

9.113 A certain supplier manufactures a type of rubber mat that is sold to automotive companies. The material used to produce the mats must have certain hardness characteristics. Defective mats are occasionally discovered and rejected. The supplier claims that the proportion defective is 0.05. A challenge was made by one of the clients who purchased the mats, so an experiment was conducted in which 400 mats were tested and 17 were found defective.
(a) Compute a 95% two-sided confidence interval on the proportion defective.
(b) Compute an appropriate 95% one-sided confidence interval on the proportion defective.
(c) Interpret both intervals from (a) and (b) and comment on the claim made by the supplier.

9.15 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

The concept of a large-sample confidence interval on a population mean is often confusing to the beginning student. It is based on the notion that even when σ is unknown and one is not convinced that the distribution being sampled is normal, a confidence interval on μ can be computed from

x̄ ± zα/2 s/√n.

In practice, this formula is often used when the sample is too small. The genesis of this large-sample interval is, of course, the Central Limit Theorem (CLT), under which normality is not necessary. Here the CLT requires a known σ, of which s is only an estimate. Thus, n must be at least as large as 30 and the underlying distribution must be close to symmetric, in which case the interval is still an approximation.

There are instances in which the appropriateness of the practical application of material in this chapter depends very much on the specific context. One very important illustration is the use of the t-distribution for the confidence interval on μ when σ is unknown. Strictly speaking, the use of the t-distribution requires that the distribution sampled from be normal. However, it is well known that any application of the t-distribution is reasonably insensitive (i.e., robust) to the normality assumption. This represents one of those fortunate situations which
occur often in the field of statistics in which a basic assumption does not hold and yet “everything turns out all right!” However, the population from which the sample is drawn cannot deviate substantially from normal. Thus, the normal probability plots discussed in Chapter 8 and the goodness-of-fit tests introduced in Chapter 10 often need to be called upon to ascertain some sense of “nearness to normality.” This idea of “robustness to normality” will reappear in Chapter 10.

It is our experience that one of the most serious “misuses of statistics” in practice evolves from confusion about distinctions in the interpretation of the types of statistical intervals. Thus, the subsection in this chapter where differences among the three types of intervals are discussed is important. It is very likely that in practice the confidence interval is heavily overused. That is, it is used when there is really no interest in the mean; rather, the question is “Where is the next observation going to fall?” or often, more importantly, “Where is the large bulk of the distribution?” These are crucial questions that are not answered by computing an interval on the mean. The interpretation of a confidence interval is often misunderstood. It is tempting to conclude that the parameter falls inside the interval with probability 0.95. While this is a correct interpretation of a Bayesian posterior interval (readers are referred to Chapter 18 for more information on Bayesian inference), it is not the proper frequency interpretation. A confidence interval merely suggests that if the experiment is conducted and data are observed again and again, about 95% of such intervals will contain the true parameter. Any beginning student of practical statistics should be very clear on the difference among these statistical intervals.
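This frequency interpretation is easy to check by simulation. The sketch below (plain Python; the values μ = 68, σ = 3.6, and n = 36 are illustrative, borrowed from the weight example in Chapter 10 — any choice would do) repeatedly draws normal samples, forms the large-sample 95% interval x̄ ± 1.96σ/√n, and counts how often the interval covers the true mean.

```python
import math
import random

def z_interval_95(sample, sigma):
    """Large-sample 95% confidence interval for the mean, sigma known."""
    n = len(sample)
    xbar = sum(sample) / n
    half = 1.96 * sigma / math.sqrt(n)
    return xbar - half, xbar + half

random.seed(1)  # fixed seed so the run is repeatable
mu, sigma, n, reps = 68.0, 3.6, 36, 10_000  # illustrative values
covered = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    lo, hi = z_interval_95(sample, sigma)
    if lo <= mu <= hi:
        covered += 1
print(covered / reps)  # close to 0.95
```

About 95% of the simulated intervals cover μ; any single interval either does or does not, which is exactly the frequency reading described above.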
Another potentially serious misuse of statistics centers around the use of the χ²-distribution for a confidence interval on a single variance. Again, normality of the distribution from which the sample is drawn is assumed. Unlike the use of the t-distribution, the use of the χ²-distribution for this application is not robust to the normality assumption (i.e., the sampling distribution of (n − 1)S²/σ² deviates far from χ² if the underlying distribution is not normal). Thus, strict use of goodness-of-fit tests (Chapter 10) and/or normal probability plotting can be extremely important in such contexts. More information about this general issue will be given in future chapters.
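As a concrete illustration of the χ²-based interval, the sketch below uses the numbers from Review Exercise 9.109 (n = 20 boxes, s² = 0.045 ounce²) and computes the 95% interval ((n−1)s²/χ²₀.₀₂₅, (n−1)s²/χ²₀.₉₇₅). The two chi-square critical values for 19 degrees of freedom are taken from a standard chi-square table, not computed here.

```python
# 95% CI for a normal variance: ((n-1)s^2 / chi2_upper, (n-1)s^2 / chi2_lower)
n = 20
s2 = 0.045           # sample variance (ounce^2), Review Exercise 9.109
chi2_upper = 32.852  # chi-square value with area 0.025 to its right, 19 df (table)
chi2_lower = 8.907   # chi-square value with area 0.975 to its right, 19 df (table)
lo = (n - 1) * s2 / chi2_upper
hi = (n - 1) * s2 / chi2_lower
print(round(lo, 4), round(hi, 4))  # -> 0.026 0.096
```

Remember that, per the warning above, this interval is trustworthy only if the weights really are close to normal; unlike the t-interval, it is not robust.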
Chapter 10

One- and Two-Sample Tests of Hypotheses

10.1 Statistical Hypotheses: General Concepts

Often, the problem confronting the scientist or engineer is not so much the estimation of a population parameter, as discussed in Chapter 9, but rather the formation of a data-based decision procedure that can produce a conclusion about some scientific system. For example, a medical researcher may decide on the basis of experimental evidence whether coffee drinking increases the risk of cancer in humans; an engineer might have to decide on the basis of sample data whether there is a difference between the accuracy of two kinds of gauges; or a sociologist might wish to collect appropriate data to enable him or her to decide whether a person’s blood type and eye color are independent variables. In each of these cases, the scientist or engineer postulates or conjectures something about a system. In addition, each must make use of experimental data and make a decision based on the data. In each case, the conjecture can be put in the form of a statistical hypothesis. Procedures that lead to the acceptance or rejection of statistical hypotheses such as these comprise a major area of statistical inference. First, let us define precisely what we mean by a statistical hypothesis.

Definition 10.1: A statistical hypothesis is an assertion or conjecture concerning one or more populations.

The truth or falsity of a statistical hypothesis is never known with absolute certainty unless we examine the entire population. This, of course, would be impractical in most situations. Instead, we take a random sample from the population of interest and use the data contained in this sample to provide evidence that either supports or does not support the hypothesis. Evidence from the sample that is inconsistent with the stated hypothesis leads to a rejection of the hypothesis.
The Role of Probability in Hypothesis Testing

It should be made clear to the reader that the decision procedure must include an awareness of the probability of a wrong conclusion. For example, suppose that the hypothesis postulated by the engineer is that the fraction defective p in a certain process is 0.10. The experiment is to observe a random sample of the product in question. Suppose that 100 items are tested and 12 items are found defective. It is reasonable to conclude that this evidence does not refute the condition that the binomial parameter p = 0.10, and thus it may lead one not to reject the hypothesis. However, it also does not refute p = 0.12 or perhaps even p = 0.15. As a result, the reader must be accustomed to understanding that rejection of a hypothesis implies that the sample evidence refutes it. Put another way, rejection means that there is a small probability of obtaining the sample information observed when, in fact, the hypothesis is true. For example, for our proportion-defective hypothesis, a sample of 100 revealing 20 defective items is certainly evidence for rejection. Why? If, indeed, p = 0.10, the probability of obtaining 20 or more defectives is approximately 0.002. With the resulting small risk of a wrong conclusion, it would seem safe to reject the hypothesis that p = 0.10. In other words, rejection of a hypothesis tends to all but “rule out” the hypothesis. On the other hand, it is very important to emphasize that acceptance or, rather, failure to reject does not rule out other possibilities. As a result, the firm conclusion is established by the data analyst when a hypothesis is rejected.

The formal statement of a hypothesis is often influenced by the structure of the probability of a wrong conclusion. If the scientist is interested in strongly supporting a contention, he or she hopes to arrive at the contention in the form of rejection of a hypothesis.
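The “approximately 0.002” cited above is just an upper binomial tail, which can be checked directly; a minimal sketch in Python (standard library only):

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))

# Probability of 20 or more defectives in 100 items when p = 0.10
tail = binom_tail(20, 100, 0.10)
print(round(tail, 4))  # approximately 0.002
```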
If the medical researcher wishes to show strong evidence in favor of the contention that coffee drinking increases the risk of cancer, the hypothesis tested should be of the form “there is no increase in cancer risk produced by drinking coffee.” As a result, the contention is reached via a rejection. Similarly, to support the claim that one kind of gauge is more accurate than another, the engineer tests the hypothesis that there is no difference in the accuracy of the two kinds of gauges. The foregoing implies that when the data analyst formalizes experimental evidence on the basis of hypothesis testing, the formal statement of the hypothesis is very important.

The Null and Alternative Hypotheses

The structure of hypothesis testing will be formulated with the use of the term null hypothesis, which refers to any hypothesis we wish to test and is denoted by H0. The rejection of H0 leads to the acceptance of an alternative hypothesis, denoted by H1. An understanding of the different roles played by the null hypothesis (H0) and the alternative hypothesis (H1) is crucial to one’s understanding of the rudiments of hypothesis testing. The alternative hypothesis H1 usually represents the question to be answered or the theory to be tested, and thus its specification is crucial. The null hypothesis H0 nullifies or opposes H1 and is often the logical complement to H1. As the reader gains more understanding of hypothesis testing, he or she should note that the analyst arrives at one of the two following
conclusions: reject H0 in favor of H1 because of sufficient evidence in the data or fail to reject H0 because of insufficient evidence in the data. Note that the conclusions do not involve a formal and literal “accept H0.” The statement of H0 often represents the “status quo” in opposition to the new idea, conjecture, and so on, stated in H1, while failure to reject H0 represents the proper conclusion. In our binomial example, the practical issue may be a concern that the historical defective probability of 0.10 no longer is true. Indeed, the conjecture may be that p exceeds 0.10. We may then state

H0: p = 0.10,
H1: p > 0.10.

Now 12 defective items out of 100 does not refute p = 0.10, so the conclusion is “fail to reject H0.” However, if the data produce 20 out of 100 defective items, then the conclusion is “reject H0” in favor of H1: p > 0.10. Though the applications of hypothesis testing are quite abundant in scientific and engineering work, perhaps the best illustration for a novice lies in the predicament encountered in a jury trial. The null and alternative hypotheses are

H0: defendant is innocent,
H1: defendant is guilty.

The indictment comes because of suspicion of guilt. The hypothesis H0 (the status quo) stands in opposition to H1 and is maintained unless H1 is supported by evidence “beyond a reasonable doubt.” However, “failure to reject H0” in this case does not imply innocence, but merely that the evidence was insufficient to convict. So the jury does not necessarily accept H0 but fails to reject H0.

10.2 Testing a Statistical Hypothesis

To illustrate the concepts used in testing a statistical hypothesis about a population, we present the following example. A certain type of cold vaccine is known to be only 25% effective after a period of 2 years.
To determine if a new and somewhat more expensive vaccine is superior in providing protection against the same virus for a longer period of time, suppose that 20 people are chosen at random and inoculated. (In an actual study of this type, the participants receiving the new vaccine might number several thousand. The number 20 is being used here only to demonstrate the basic steps in carrying out a statistical test.) If more than 8 of those receiving the new vaccine surpass the 2-year period without contracting the virus, the new vaccine will be considered superior to the one presently in use. The requirement that the number exceed 8 is somewhat arbitrary but appears reasonable in that it represents a modest gain over the 5 people who could be expected to receive protection if the 20 people had been inoculated with the vaccine already in use. We are essentially testing the null hypothesis that the new vaccine is equally effective after a period of 2 years as the one now commonly used. The alternative
hypothesis is that the new vaccine is in fact superior. This is equivalent to testing the hypothesis that the binomial parameter for the probability of a success on a given trial is p = 1/4 against the alternative that p > 1/4. This is usually written as follows:

H0: p = 0.25,
H1: p > 0.25.

The Test Statistic

The test statistic on which we base our decision is X, the number of individuals in our test group who receive protection from the new vaccine for a period of at least 2 years. The possible values of X, from 0 to 20, are divided into two groups: those numbers less than or equal to 8 and those greater than 8. All possible scores greater than 8 constitute the critical region. The last number that we observe in passing into the critical region is called the critical value. In our illustration, the critical value is the number 8. Therefore, if x > 8, we reject H0 in favor of the alternative hypothesis H1. If x ≤ 8, we fail to reject H0. This decision criterion is illustrated in Figure 10.1.

Figure 10.1: Decision criterion for testing p = 0.25 versus p > 0.25.

The Probability of a Type I Error

The decision procedure just described could lead to either of two wrong conclusions. For instance, the new vaccine may be no better than the one now in use (H0 true) and yet, in this particular randomly selected group of individuals, more than 8 surpass the 2-year period without contracting the virus. We would be committing an error by rejecting H0 in favor of H1 when, in fact, H0 is true. Such an error is called a type I error.

Definition 10.2: Rejection of the null hypothesis when it is true is called a type I error.

A second kind of error is committed if 8 or fewer of the group surpass the 2-year period successfully and we are unable to conclude that the vaccine is better when it actually is better (H1 true).
Thus, in this case, we fail to reject H0 when in fact H0 is false. This is called a type II error.

Definition 10.3: Nonrejection of the null hypothesis when it is false is called a type II error.

In testing any statistical hypothesis, there are four possible situations that determine whether our decision is correct or in error. These four situations are
summarized in Table 10.1.

Table 10.1: Possible Situations for Testing a Statistical Hypothesis

                      H0 is true        H0 is false
    Do not reject H0  Correct decision  Type II error
    Reject H0         Type I error      Correct decision

The probability of committing a type I error, also called the level of significance, is denoted by the Greek letter α. In our illustration, a type I error will occur when more than 8 individuals inoculated with the new vaccine surpass the 2-year period without contracting the virus and researchers conclude that the new vaccine is better when it is actually equivalent to the one in use. Hence, if X is the number of individuals who remain free of the virus for at least 2 years,

α = P(type I error) = P(X > 8 when p = 1/4) = Σ_{x=9}^{20} b(x; 20, 1/4)
  = 1 − Σ_{x=0}^{8} b(x; 20, 1/4) = 1 − 0.9591 = 0.0409.

We say that the null hypothesis, p = 1/4, is being tested at the α = 0.0409 level of significance. Sometimes the level of significance is called the size of the test. A critical region of size 0.0409 is very small, and therefore it is unlikely that a type I error will be committed. Consequently, it would be most unusual for more than 8 individuals to remain immune to a virus for a 2-year period using a new vaccine that is essentially equivalent to the one now on the market.

The Probability of a Type II Error

The probability of committing a type II error, denoted by β, is impossible to compute unless we have a specific alternative hypothesis. If we test the null hypothesis that p = 1/4 against the alternative hypothesis that p = 1/2, then we are able to compute the probability of not rejecting H0 when it is false. We simply find the probability of obtaining 8 or fewer in the group that surpass the 2-year period when p = 1/2. In this case,

β = P(type II error) = P(X ≤ 8 when p = 1/2) = Σ_{x=0}^{8} b(x; 20, 1/2) = 0.2517.
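Both error probabilities can be reproduced with a short computation; the sketch below sums the binomial probabilities b(x; 20, p) directly (standard library only):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k + 1))

n, critical = 20, 8
alpha = 1 - binom_cdf(critical, n, 0.25)  # reject H0 (X > 8) when p = 1/4 is true
beta = binom_cdf(critical, n, 0.50)       # fail to reject (X <= 8) when p = 1/2
print(round(alpha, 4), round(beta, 4))    # -> 0.0409 0.2517
```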
This is a rather high probability, indicating a test procedure in which it is quite likely that we shall reject the new vaccine when, in fact, it is superior to what is now in use. Ideally, we like to use a test procedure for which the type I and type II error probabilities are both small. It is possible that the director of the testing program is willing to make a type II error if the more expensive vaccine is not significantly superior. In fact, the only
time he wishes to guard against the type II error is when the true value of p is at least 0.7. If p = 0.7, this test procedure gives

β = P(type II error) = P(X ≤ 8 when p = 0.7) = Σ_{x=0}^{8} b(x; 20, 0.7) = 0.0051.

With such a small probability of committing a type II error, it is extremely unlikely that the new vaccine would be rejected when it was 70% effective after a period of 2 years. As the alternative hypothesis approaches unity, the value of β diminishes to zero.

The Role of α, β, and Sample Size

Let us assume that the director of the testing program is unwilling to commit a type II error when the alternative hypothesis p = 1/2 is true, even though we have found the probability of such an error to be β = 0.2517. It is always possible to reduce β by increasing the size of the critical region. For example, consider what happens to the values of α and β when we change our critical value to 7 so that all scores greater than 7 fall in the critical region and those less than or equal to 7 fall in the nonrejection region. Now, in testing p = 1/4 against the alternative hypothesis that p = 1/2, we find that

α = Σ_{x=8}^{20} b(x; 20, 1/4) = 1 − Σ_{x=0}^{7} b(x; 20, 1/4) = 1 − 0.8982 = 0.1018

and

β = Σ_{x=0}^{7} b(x; 20, 1/2) = 0.1316.

By adopting a new decision procedure, we have reduced the probability of committing a type II error at the expense of increasing the probability of committing a type I error. For a fixed sample size, a decrease in the probability of one error will usually result in an increase in the probability of the other error. Fortunately, the probability of committing both types of error can be reduced by increasing the sample size. Consider the same problem using a random sample of 100 individuals. If more than 36 of the group surpass the 2-year period, we reject the null hypothesis that p = 1/4 and accept the alternative hypothesis that p > 1/4. The critical value is now 36.
All possible scores above 36 constitute the critical region, and all possible scores less than or equal to 36 fall in the nonrejection region. To determine the probability of committing a type I error, we shall use the normal curve approximation with

μ = np = (100)(1/4) = 25 and σ = √(npq) = √((100)(1/4)(3/4)) = 4.33.

Referring to Figure 10.2, we need the area under the normal curve to the right of x = 36.5. The corresponding z-value is

z = (36.5 − 25)/4.33 = 2.66.
Figure 10.2: Probability of a type I error.

From Table A.3 we find that

α = P(type I error) = P(X > 36 when p = 1/4) ≈ P(Z > 2.66) = 1 − P(Z < 2.66) = 1 − 0.9961 = 0.0039.

If H0 is false and the true value of p under H1 is p = 1/2, we can determine the probability of a type II error using the normal curve approximation with

μ = np = (100)(1/2) = 50 and σ = √(npq) = √((100)(1/2)(1/2)) = 5.

The probability of a value falling in the nonrejection region when H1 is true is given by the area of the shaded region to the left of x = 36.5 in Figure 10.3. The z-value corresponding to x = 36.5 is

z = (36.5 − 50)/5 = −2.7.

Figure 10.3: Probability of a type II error.

Therefore,

β = P(type II error) = P(X ≤ 36 when p = 1/2) ≈ P(Z < −2.7) = 0.0035.
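The same two normal-approximation calculations can be sketched in Python, with Φ built from math.erf and the 36.5 continuity correction used above. (The text's 0.0039 comes from rounding z to 2.66 before consulting Table A.3, so the unrounded values here may differ slightly in the last digit.)

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, critical = 100, 36                            # reject H0 when X > 36
mu0, sd0 = n * 0.25, math.sqrt(n * 0.25 * 0.75)  # under H0: p = 1/4
alpha = 1.0 - phi((critical + 0.5 - mu0) / sd0)  # area to the right of 36.5
mu1, sd1 = n * 0.50, math.sqrt(n * 0.50 * 0.50)  # under H1: p = 1/2
beta = phi((critical + 0.5 - mu1) / sd1)         # area to the left of 36.5
print(round(alpha, 4), round(beta, 4))           # near 0.0039 and 0.0035
```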
Obviously, the type I and type II errors will rarely occur if the experiment consists of 100 individuals. The illustration above underscores the strategy of the scientist in hypothesis testing. After the null and alternative hypotheses are stated, it is important to consider the sensitivity of the test procedure. By this we mean that there should be a determination, for a fixed α, of a reasonable value for the probability of wrongly accepting H0 (i.e., the value of β) when the true situation represents some important deviation from H0. A value for the sample size can usually be determined for which there is a reasonable balance between the values of α and β computed in this fashion. The vaccine problem provides an illustration.

Illustration with a Continuous Random Variable

The concepts discussed here for a discrete population can be applied equally well to continuous random variables. Consider the null hypothesis that the average weight of male students in a certain college is 68 kilograms against the alternative hypothesis that it is unequal to 68. That is, we wish to test

H0: μ = 68,
H1: μ ≠ 68.

The alternative hypothesis allows for the possibility that μ < 68 or μ > 68. A sample mean that falls close to the hypothesized value of 68 would be considered evidence in favor of H0. On the other hand, a sample mean that is considerably less than or more than 68 would be evidence inconsistent with H0 and therefore favoring H1. The sample mean is the test statistic in this case. A critical region for the test statistic might arbitrarily be chosen to be the two intervals x̄ < 67 and x̄ > 69. The nonrejection region will then be the interval 67 ≤ x̄ ≤ 69. This decision criterion is illustrated in Figure 10.4.

Figure 10.4: Critical region (in blue).
Let us now use the decision criterion of Figure 10.4 to calculate the probabilities of committing type I and type II errors when testing the null hypothesis that μ = 68 kilograms against the alternative that μ ≠ 68 kilograms. Assume the standard deviation of the population of weights to be σ = 3.6. For large samples, we may substitute s for σ if no other estimate of σ is available. Our decision statistic, based on a random sample of size n = 36, will be X̄, the most efficient estimator of μ. From the Central Limit Theorem, we know that the sampling distribution of X̄ is approximately normal with standard deviation σX̄ = σ/√n = 3.6/6 = 0.6.
The probability of committing a type I error, or the level of significance of our test, is equal to the sum of the areas that have been shaded in each tail of the distribution in Figure 10.5. Therefore,

α = P(X̄ < 67 when μ = 68) + P(X̄ > 69 when μ = 68).

Figure 10.5: Critical region for testing μ = 68 versus μ ≠ 68.

The z-values corresponding to x̄1 = 67 and x̄2 = 69 when H0 is true are

z1 = (67 − 68)/0.6 = −1.67 and z2 = (69 − 68)/0.6 = 1.67.

Therefore,

α = P(Z < −1.67) + P(Z > 1.67) = 2P(Z < −1.67) = 0.0950.

Thus, 9.5% of all samples of size 36 would lead us to reject μ = 68 kilograms when, in fact, it is true. To reduce α, we have a choice of increasing the sample size or widening the fail-to-reject region. Suppose that we increase the sample size to n = 64. Then σX̄ = 3.6/8 = 0.45. Now

z1 = (67 − 68)/0.45 = −2.22 and z2 = (69 − 68)/0.45 = 2.22.

Hence,

α = P(Z < −2.22) + P(Z > 2.22) = 2P(Z < −2.22) = 0.0264.

The reduction in α is not sufficient by itself to guarantee a good testing procedure. We must also evaluate β for various alternative hypotheses. If it is important to reject H0 when the true mean is some value μ ≥ 70 or μ ≤ 66, then the probability of committing a type II error should be computed and examined for the alternatives μ = 66 and μ = 70. Because of symmetry, it is only necessary to consider the probability of not rejecting the null hypothesis that μ = 68 when the alternative μ = 70 is true. A type II error will result when the sample mean x̄ falls between 67 and 69 when H1 is true. Therefore, referring to Figure 10.6, we find that

β = P(67 ≤ X̄ ≤ 69 when μ = 70).
Figure 10.6: Probability of type II error for testing μ = 68 versus μ = 70.

The z-values corresponding to x̄1 = 67 and x̄2 = 69 when H1 is true are

z1 = (67 − 70)/0.45 = −6.67 and z2 = (69 − 70)/0.45 = −2.22.

Therefore,

β = P(−6.67 < Z < −2.22) = P(Z < −2.22) − P(Z < −6.67) = 0.0132 − 0.0000 = 0.0132.

If the true value of μ is the alternative μ = 66, the value of β will again be 0.0132. For all possible values of μ < 66 or μ > 70, the value of β will be even smaller when n = 64, and consequently there would be little chance of not rejecting H0 when it is false.

The probability of committing a type II error increases rapidly when the true value of μ approaches, but is not equal to, the hypothesized value. Of course, this is usually the situation where we do not mind making a type II error. For example, if the alternative hypothesis μ = 68.5 is true, we do not mind committing a type II error by concluding that the true answer is μ = 68. The probability of making such an error will be high when n = 64. Referring to Figure 10.7, we have

β = P(67 ≤ X̄ ≤ 69 when μ = 68.5).

The z-values corresponding to x̄1 = 67 and x̄2 = 69 when μ = 68.5 are

z1 = (67 − 68.5)/0.45 = −3.33 and z2 = (69 − 68.5)/0.45 = 1.11.

Therefore,

β = P(−3.33 < Z < 1.11) = P(Z < 1.11) − P(Z < −3.33) = 0.8665 − 0.0004 = 0.8661.

The preceding examples illustrate the following important properties:
Figure 10.7: Type II error for testing μ = 68 versus μ = 68.5.

Important Properties of a Test of Hypothesis

1. The type I error and type II error are related. A decrease in the probability of one generally results in an increase in the probability of the other.
2. The size of the critical region, and therefore the probability of committing a type I error, can always be reduced by adjusting the critical value(s).
3. An increase in the sample size n will reduce α and β simultaneously.
4. If the null hypothesis is false, β is a maximum when the true value of a parameter approaches the hypothesized value. The greater the distance between the true value and the hypothesized value, the smaller β will be.

One very important concept that relates to error probabilities is the notion of the power of a test.

Definition 10.4: The power of a test is the probability of rejecting H0 given that a specific alternative is true.

The power of a test can be computed as 1 − β. Often different types of tests are compared by contrasting power properties. Consider the previous illustration, in which we were testing H0: μ = 68 and H1: μ ≠ 68. As before, suppose we are interested in assessing the sensitivity of the test. The test is governed by the rule that we do not reject H0 if 67 ≤ x̄ ≤ 69. We seek the capability of the test to properly reject H0 when indeed μ = 68.5. We have seen that the probability of a type II error is given by β = 0.8661. Thus, the power of the test is 1 − 0.8661 = 0.1339. In a sense, the power is a more succinct measure of how sensitive the test is for detecting differences between a mean of 68 and a mean of 68.5. In this case, if μ is truly 68.5, the test as described will properly reject H0 only 13.39% of the time.
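The α, β, and power figures for this test can be collected in one short calculation (normal CDF built from math.erf; tiny last-digit differences from the text arise from its rounding of z to 2.22, −3.33, and 1.11 before using the tables):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, n = 3.6, 64
se = sigma / math.sqrt(n)  # sigma of xbar = 0.45
lo, hi, mu0, mu1 = 67.0, 69.0, 68.0, 68.5
# Type I error: xbar falls outside [67, 69] when mu = 68
alpha = phi((lo - mu0) / se) + (1.0 - phi((hi - mu0) / se))
# Type II error: xbar falls inside [67, 69] when mu = 68.5
beta = phi((hi - mu1) / se) - phi((lo - mu1) / se)
power = 1.0 - beta
print(round(alpha, 4), round(beta, 4), round(power, 4))
```

With these inputs the run gives α ≈ 0.026, β ≈ 0.866, and power ≈ 0.134, matching the 13.39% sensitivity discussed above.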
As a result, the test would not be a good one if it were important that the analyst have a reasonable chance of truly distinguishing between a mean of 68.0 (specified by H0) and a mean of 68.5. From the foregoing, it is clear that to produce a desirable power (say, greater than 0.8), one must either increase α or increase the sample size. So far in this chapter, much of the discussion of hypothesis testing has focused on foundations and definitions. In the sections that follow, we get more specific
and put hypotheses in categories as well as discuss tests of hypotheses on various parameters of interest. We begin by drawing the distinction between a one-sided and a two-sided hypothesis.

One- and Two-Tailed Tests

A test of any statistical hypothesis where the alternative is one sided, such as

H0: θ = θ0,
H1: θ > θ0

or perhaps

H0: θ = θ0,
H1: θ < θ0,

is called a one-tailed test. Earlier in this section, we referred to the test statistic for a hypothesis. Generally, the critical region for the alternative hypothesis θ > θ0 lies in the right tail of the distribution of the test statistic, while the critical region for the alternative hypothesis θ < θ0 lies entirely in the left tail. (In a sense, the inequality symbol points in the direction of the critical region.) A one-tailed test was used in the vaccine experiment to test the hypothesis p = 1/4 against the one-sided alternative p > 1/4 for the binomial distribution. The one-tailed critical region is usually obvious; the reader should visualize the behavior of the test statistic and notice the obvious signal that would produce evidence supporting the alternative hypothesis. A test of any statistical hypothesis where the alternative is two sided, such as

H0: θ = θ0,
H1: θ ≠ θ0,

is called a two-tailed test, since the critical region is split into two parts, often having equal probabilities, in each tail of the distribution of the test statistic. The alternative hypothesis θ ≠ θ0 states that either θ < θ0 or θ > θ0. A two-tailed test was used to test the null hypothesis that μ = 68 kilograms against the two-sided alternative μ ≠ 68 kilograms in the example of the continuous population of student weights.

How Are the Null and Alternative Hypotheses Chosen?

The null hypothesis H0 will often be stated using the equality sign. With this approach, it is clear how the probability of type I error is controlled.
However, there are situations in which "do not reject H0" implies that the parameter θ might be any value defined by the natural complement to the alternative hypothesis. For example, in the vaccine example, where the alternative hypothesis is H1: p > 1/4, it is quite possible that nonrejection of H0 cannot rule out a value of p less than 1/4. Clearly though, in the case of one-tailed tests, the statement of the alternative is the most important consideration.
Whether one sets up a one-tailed or a two-tailed test will depend on the conclusion to be drawn if H0 is rejected. The location of the critical region can be determined only after H1 has been stated. For example, in testing a new drug, one sets up the hypothesis that it is no better than similar drugs now on the market and tests this against the alternative hypothesis that the new drug is superior. Such an alternative hypothesis will result in a one-tailed test with the critical region in the right tail. However, if we wish to compare a new teaching technique with the conventional classroom procedure, the alternative hypothesis should allow for the new approach to be either inferior or superior to the conventional procedure. Hence, the test is two-tailed with the critical region divided equally so as to fall in the extreme left and right tails of the distribution of our statistic.

Example 10.1: A manufacturer of a certain brand of rice cereal claims that the average saturated fat content does not exceed 1.5 grams per serving. State the null and alternative hypotheses to be used in testing this claim and determine where the critical region is located.

Solution: The manufacturer's claim should be rejected only if μ is greater than 1.5 grams and should not be rejected if μ is less than or equal to 1.5 grams. We test

H0: μ = 1.5, H1: μ > 1.5.

Nonrejection of H0 does not rule out values less than 1.5 grams. Since we have a one-tailed test, the greater than symbol indicates that the critical region lies entirely in the right tail of the distribution of our test statistic X̄.

Example 10.2: A real estate agent claims that 60% of all private residences being built today are 3-bedroom homes. To test this claim, a large sample of new residences is inspected; the proportion of these homes with 3 bedrooms is recorded and used as the test statistic.
State the null and alternative hypotheses to be used in this test and determine the location of the critical region.

Solution: If the test statistic were substantially higher or lower than p = 0.6, we would reject the agent's claim. Hence, we should make the hypothesis

H0: p = 0.6, H1: p ≠ 0.6.

The alternative hypothesis implies a two-tailed test with the critical region divided equally in both tails of the distribution of P̂, our test statistic.

10.3 The Use of P-Values for Decision Making in Testing Hypotheses

In testing hypotheses in which the test statistic is discrete, the critical region may be chosen arbitrarily and its size determined. If α is too large, it can be reduced by making an adjustment in the critical value. It may be necessary to increase the
sample size to offset the decrease that occurs automatically in the power of the test. Over a number of generations of statistical analysis, it had become customary to choose an α of 0.05 or 0.01 and select the critical region accordingly. Then, of course, strict rejection or nonrejection of H0 would depend on that critical region. For example, if the test is two tailed, α is set at the 0.05 level of significance, and the test statistic involves, say, the standard normal distribution, then a z-value is observed from the data and the critical region is

z > 1.96 or z < −1.96,

where the value 1.96 is found as z0.025 in Table A.3. A value of z in the critical region prompts the statement "The value of the test statistic is significant," which we can then translate into the user's language. For example, if the hypothesis is given by

H0: μ = 10, H1: μ ≠ 10,

one might say, "The mean differs significantly from the value 10."

Preselection of a Significance Level

This preselection of a significance level α has its roots in the philosophy that the maximum risk of making a type I error should be controlled. However, this approach does not account for values of test statistics that are "close" to the critical region. Suppose, for example, in the illustration with H0: μ = 10 versus H1: μ ≠ 10, a value of z = 1.87 is observed; strictly speaking, with α = 0.05, the value is not significant. But the risk of committing a type I error if one rejects H0 in this case could hardly be considered severe. In fact, in a two-tailed scenario, one can quantify this risk as

P = 2P(Z > 1.87 when μ = 10) = 2(0.0307) = 0.0614.

As a result, 0.0614 is the probability of obtaining a value of z as large as or larger (in magnitude) than 1.87 when in fact μ = 10. Although this evidence against H0 is not as strong as that which would result from rejection at an α = 0.05 level, it is important information to the user.
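The two-tailed P-value quoted above is easy to reproduce. The sketch below (Python standard library; the function name is ours, not the text's) replaces Table A.3 with the exact normal CDF:

```python
from statistics import NormalDist

def two_sided_p_value(z):
    """Two-tailed P-value for an observed standard normal statistic z."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

# z = 1.87 from the illustration above; the table value is 2(0.0307) = 0.0614
print(two_sided_p_value(1.87))
```

The tiny difference from 0.0614 reflects the four-decimal rounding in Table A.3.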
Indeed, continued use of α = 0.05 or 0.01 merely reflects standards that have been passed down through the generations. The P-value approach has been adopted extensively by users of applied statistics. The approach is designed to give the user an alternative (in terms of a probability) to a mere "reject" or "do not reject" conclusion. The P-value computation also gives the user important information when the z-value falls well into the ordinary critical region. For example, if z is 2.73, it is informative for the user to observe that P = 2(0.0032) = 0.0064, and thus the z-value is significant at a level considerably less than 0.05. It is important to know that under the condition of H0, a value of z = 2.73 is an extremely rare event. That is, a value at least that large in magnitude would occur only about 64 times in 10,000 experiments.
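The "64 times in 10,000" claim can also be checked by simulation. The sketch below (Python, with an arbitrary seed for reproducibility) draws standard normal values under H0 and counts how often |Z| ≥ 2.73:

```python
import random

random.seed(1)  # arbitrary seed so the run is reproducible
n_trials = 200_000
extreme = sum(abs(random.gauss(0.0, 1.0)) >= 2.73 for _ in range(n_trials))
print(extreme / n_trials)  # should be near the theoretical 0.0064
```

With this many trials, the observed relative frequency falls close to the exact two-tailed probability 0.0064.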
A Graphical Demonstration of a P-Value

One very simple way of explaining a P-value graphically is to consider two distinct samples. Suppose that two materials are being considered for coating a particular type of metal in order to inhibit corrosion. Specimens are obtained, and one collection is coated with material 1 and one collection is coated with material 2. The sample sizes are n1 = n2 = 10, and corrosion is measured in percent of surface area affected. The hypothesis is that the samples came from common distributions with mean μ = 10. Let us assume that the population variance is 1.0. Then we are testing

H0: μ1 = μ2 = 10.

Let Figure 10.8 represent a point plot of the data; the data are placed on the distribution stated by the null hypothesis. Let us assume that the "×" data refer to material 1 and the "◦" data refer to material 2. Now it seems clear that the data do refute the null hypothesis. But how can this be summarized in one number? The P-value can be viewed as simply the probability of obtaining these data given that both samples come from the same distribution. Clearly, this probability is quite small, say 0.00000001! Thus, the small P-value clearly refutes H0, and the conclusion is that the population means are significantly different.

Figure 10.8: Data that are likely generated from populations having two different means.

Use of the P-value approach as an aid in decision-making is quite natural, and nearly all computer packages that provide hypothesis-testing computation print out P-values along with values of the appropriate test statistic. The following is a formal definition of a P-value.

Definition 10.5: A P-value is the lowest level (of significance) at which the observed value of the test statistic is significant.

How Does the Use of P-Values Differ from Classic Hypothesis Testing?
It is tempting at this point to summarize the procedures associated with testing, say, H0: θ = θ0. However, the student who is a novice in this area should understand that there are differences in approach and philosophy between the classic
fixed α approach that is climaxed with either a "reject H0" or a "do not reject H0" conclusion and the P-value approach. In the latter, no fixed α is determined and conclusions are drawn on the basis of the size of the P-value in harmony with the subjective judgment of the engineer or scientist. While modern computer software will output P-values, it is nevertheless important that readers understand both approaches in order to appreciate the totality of the concepts. Thus, we offer a brief list of procedural steps for both the classical and the P-value approach.

Approach to Hypothesis Testing with Fixed Probability of Type I Error
1. State the null and alternative hypotheses.
2. Choose a fixed significance level α.
3. Choose an appropriate test statistic and establish the critical region based on α.
4. Reject H0 if the computed test statistic is in the critical region. Otherwise, do not reject.
5. Draw scientific or engineering conclusions.

Significance Testing (P-Value Approach)
1. State null and alternative hypotheses.
2. Choose an appropriate test statistic.
3. Compute the P-value based on the computed value of the test statistic.
4. Use judgment based on the P-value and knowledge of the scientific system.

In later sections of this chapter and chapters that follow, many examples and exercises emphasize the P-value approach to drawing scientific conclusions.

Exercises

10.1 Suppose that an allergist wishes to test the hypothesis that at least 30% of the public is allergic to some cheese products. Explain how the allergist could commit
(a) a type I error;
(b) a type II error.

10.2 A sociologist is concerned about the effectiveness of a training course designed to get more drivers to use seat belts in automobiles.
(a) What hypothesis is she testing if she commits a type I error by erroneously concluding that the training course is ineffective?
(b) What hypothesis is she testing if she commits a type II error by erroneously concluding that the training course is effective?

10.3 A large manufacturing firm is being charged with discrimination in its hiring practices.
(a) What hypothesis is being tested if a jury commits a type I error by finding the firm guilty?
(b) What hypothesis is being tested if a jury commits a type II error by finding the firm not guilty?

10.4 A fabric manufacturer believes that the proportion of orders for raw material arriving late is p = 0.6. If a random sample of 10 orders shows that 3 or fewer arrived late, the hypothesis that p = 0.6 should be rejected in favor of the alternative p < 0.6. Use the binomial distribution.
(a) Find the probability of committing a type I error if the true proportion is p = 0.6.
(b) Find the probability of committing a type II error for the alternatives p = 0.3, p = 0.4, and p = 0.5.

10.5 Repeat Exercise 10.4 but assume that 50 orders are selected and the critical region is defined to be x ≤ 24, where x is the number of orders in the sample that arrived late. Use the normal approximation.

10.6 The proportion of adults living in a small town who are college graduates is estimated to be p = 0.6. To test this hypothesis, a random sample of 15 adults is selected. If the number of college graduates in the sample is anywhere from 6 to 12, we shall not reject the null hypothesis that p = 0.6; otherwise, we shall conclude that p ≠ 0.6.
(a) Evaluate α assuming that p = 0.6. Use the binomial distribution.
(b) Evaluate β for the alternatives p = 0.5 and p = 0.7.
(c) Is this a good test procedure?

10.7 Repeat Exercise 10.6 but assume that 200 adults are selected and the fail-to-reject region is defined to be 110 ≤ x ≤ 130, where x is the number of college graduates in our sample. Use the normal approximation.

10.8 In Relief from Arthritis published by Thorsons Publishers, Ltd., John E. Croft claims that over 40% of those who suffer from osteoarthritis receive measurable relief from an ingredient produced by a particular species of mussel found off the coast of New Zealand. To test this claim, the mussel extract is to be given to a group of 7 osteoarthritic patients. If 3 or more of the patients receive relief, we shall not reject the null hypothesis that p = 0.4; otherwise, we conclude that p < 0.4.
(a) Evaluate α, assuming that p = 0.4.
(b) Evaluate β for the alternative p = 0.3.

10.9 A dry cleaning establishment claims that a new spot remover will remove more than 70% of the spots to which it is applied. To check this claim, the spot remover will be used on 12 spots chosen at random. If fewer than 11 of the spots are removed, we shall not reject the null hypothesis that p = 0.7; otherwise, we conclude that p > 0.7.
(a) Evaluate α, assuming that p = 0.7.
(b) Evaluate β for the alternative p = 0.9.

10.10 Repeat Exercise 10.9 but assume that 100 spots are treated and the critical region is defined to be x > 82, where x is the number of spots removed.

10.11 Repeat Exercise 10.8 but assume that 70 patients are given the mussel extract and the critical region is defined to be x < 24, where x is the number of osteoarthritic patients who receive relief.

10.12 A random sample of 400 voters in a certain city are asked if they favor an additional 4% gasoline sales tax to provide badly needed revenues for street repairs. If more than 220 but fewer than 260 favor the sales tax, we shall conclude that 60% of the voters are for it.
(a) Find the probability of committing a type I error if 60% of the voters favor the increased tax.
(b) What is the probability of committing a type II error using this test procedure if actually only 48% of the voters are in favor of the additional gasoline tax?

10.13 Suppose, in Exercise 10.12, we conclude that 60% of the voters favor the gasoline sales tax if more than 214 but fewer than 266 voters in our sample favor it. Show that this new critical region results in a smaller value for α at the expense of increasing β.

10.14 A manufacturer has developed a new fishing line, which the company claims has a mean breaking strength of 15 kilograms with a standard deviation of 0.5 kilogram. To test the hypothesis that μ = 15 kilograms against the alternative that μ < 15 kilograms, a random sample of 50 lines will be tested. The critical region is defined to be x̄ < 14.9.
(a) Find the probability of committing a type I error when H0 is true.
(b) Evaluate β for the alternatives μ = 14.8 and μ = 14.9 kilograms.

10.15 A soft-drink machine at a steak house is regulated so that the amount of drink dispensed is approximately normally distributed with a mean of 200 milliliters and a standard deviation of 15 milliliters. The machine is checked periodically by taking a sample of 9 drinks and computing the average content. If x̄ falls in the interval 191 < x̄ < 209, the machine is thought to be operating satisfactorily; otherwise, we conclude that μ ≠ 200 milliliters.
(a) Find the probability of committing a type I error when μ = 200 milliliters.
(b) Find the probability of committing a type II error when μ = 215 milliliters.

10.16 Repeat Exercise 10.15 for samples of size n = 25. Use the same critical region.

10.17 A new curing process developed for a certain type of cement results in a mean compressive strength of 5000 kilograms per square centimeter with a standard deviation of 120 kilograms.
To test the hypothesis that μ = 5000 against the alternative that μ < 5000, a random sample of 50 pieces of cement is tested. The critical region is defined to be x̄ < 4970.
(a) Find the probability of committing a type I error when H0 is true.
(b) Evaluate β for the alternatives μ = 4970 and μ = 4960.

10.18 If we plot the probabilities of failing to reject H0 corresponding to various alternatives for μ (including the value specified by H0) and connect all the points by a smooth curve, we obtain the operating characteristic curve of the test criterion, or simply the OC curve. Note that the probability of failing to reject H0 when it is true is simply 1 − α. Operating characteristic curves are widely used in industrial applications to provide a visual display of the merits of the test criterion. With reference to Exercise 10.15, find the probabilities of failing to reject H0 for the following 9 values of μ and plot the OC curve: 184, 188, 192, 196, 200, 204, 208, 212, and 216.
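Several of the binomial exercises above reduce to tail sums of the binomial distribution. As a sketch (Python standard library; the helper name is ours), the α and β values of Exercise 10.4 can be computed directly:

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

# Exercise 10.4: reject H0: p = 0.6 in favor of p < 0.6 when X <= 3, n = 10
alpha = binom_cdf(3, 10, 0.6)                                 # type I error
beta = {p: 1 - binom_cdf(3, 10, p) for p in (0.3, 0.4, 0.5)}  # type II errors
print(alpha, beta)
```

The same helper, combined with a normal approximation, handles the larger-sample variants such as Exercises 10.5 and 10.7.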
10.4 Single Sample: Tests Concerning a Single Mean

In this section, we formally consider tests of hypotheses on a single population mean. Many of the illustrations from previous sections involved tests on the mean, so the reader should already have insight into some of the details that are outlined here.

Tests on a Single Mean (Variance Known)

We should first describe the assumptions on which the experiment is based. The model for the underlying situation centers around an experiment with X1, X2, . . . , Xn representing a random sample from a distribution with mean μ and variance σ² > 0. Consider first the hypothesis

H0: μ = μ0, H1: μ ≠ μ0.

The appropriate test statistic should be based on the random variable X̄. In Chapter 8, the Central Limit Theorem was introduced, which essentially states that regardless of the distribution of X, the random variable X̄ has approximately a normal distribution with mean μ and variance σ²/n for reasonably large sample sizes. So, μX̄ = μ and σ²X̄ = σ²/n. We can then determine a critical region based on the computed sample average, x̄. It should be clear to the reader by now that there will be a two-tailed critical region for the test.

Standardization of X̄

It is convenient to standardize X̄ and formally involve the standard normal random variable Z, where

Z = (X̄ − μ)/(σ/√n).

We know that under H0, that is, if μ = μ0, √n(X̄ − μ0)/σ follows an n(x; 0, 1) distribution, and hence the expression

P(−zα/2 < (X̄ − μ0)/(σ/√n) < zα/2) = 1 − α

can be used to write an appropriate nonrejection region. The reader should keep in mind that, formally, the critical region is designed to control α, the probability of type I error. It should be obvious that a two-tailed signal of evidence is needed to support H1. Thus, given a computed value x̄, the formal test involves rejecting H0 if the computed test statistic z falls in the critical region described next.
Test Procedure for a Single Mean (Variance Known): Reject H0 if

z = (x̄ − μ0)/(σ/√n) > zα/2 or z = (x̄ − μ0)/(σ/√n) < −zα/2.

If −zα/2 < z < zα/2, do not reject H0. Rejection of H0, of course, implies acceptance of the alternative hypothesis μ ≠ μ0. With this definition of the critical region, it should be clear that there will be probability α of rejecting H0 (falling into the critical region) when, indeed, μ = μ0.

Although it is easier to understand the critical region written in terms of z, we can write the same critical region in terms of the computed average x̄. The following can be written as an identical decision procedure: reject H0 if x̄ < a or x̄ > b, where

a = μ0 − zα/2 σ/√n, b = μ0 + zα/2 σ/√n.

Hence, for a significance level α, the critical values of the random variable z and x̄ are both depicted in Figure 10.9.

Figure 10.9: Critical region for the alternative hypothesis μ ≠ μ0.

Tests of one-sided hypotheses on the mean involve the same statistic described in the two-sided case. The difference, of course, is that the critical region is only in one tail of the standard normal distribution. For example, suppose that we seek to test

H0: μ = μ0, H1: μ > μ0.

The signal that favors H1 comes from large values of z. Thus, rejection of H0 results when the computed z > zα. Obviously, if the alternative is H1: μ < μ0, the critical region is entirely in the lower tail and thus rejection results from z < −zα. Although in a one-sided testing case the null hypothesis can be written as H0: μ ≤ μ0 or H0: μ ≥ μ0, it is usually written as H0: μ = μ0. The following two examples illustrate tests on means for the case in which σ is known.
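The critical values a and b are straightforward to compute. The sketch below (Python; the values μ0 = 68, σ = 3.6, n = 36 are hypothetical, chosen only for illustration) finds the x̄-scale rejection boundaries for a two-tailed test:

```python
from statistics import NormalDist

def xbar_critical_values(mu0, sigma, n, alpha):
    """Critical values a and b on the x-bar scale for a two-tailed z-test."""
    half_width = NormalDist().inv_cdf(1 - alpha / 2) * sigma / n ** 0.5
    return mu0 - half_width, mu0 + half_width

# Hypothetical setting: H0: mu = 68 vs H1: mu != 68, sigma = 3.6, n = 36
a, b = xbar_critical_values(68, 3.6, 36, 0.05)
print(a, b)  # reject H0 when x-bar < a or x-bar > b
```

Working on the x̄ scale and working on the z scale give identical decisions; the former is often easier to communicate to a nonstatistician.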
Example 10.3: A random sample of 100 recorded deaths in the United States during the past year showed an average life span of 71.8 years. Assuming a population standard deviation of 8.9 years, does this seem to indicate that the mean life span today is greater than 70 years? Use a 0.05 level of significance.

Solution:
1. H0: μ = 70 years.
2. H1: μ > 70 years.
3. α = 0.05.
4. Critical region: z > 1.645, where z = (x̄ − μ0)/(σ/√n).
5. Computations: x̄ = 71.8 years, σ = 8.9 years, and hence z = (71.8 − 70)/(8.9/√100) = 2.02.
6. Decision: Reject H0 and conclude that the mean life span today is greater than 70 years.

The P-value corresponding to z = 2.02 is given by the area of the shaded region in Figure 10.10. Using Table A.3, we have P = P(Z > 2.02) = 0.0217. As a result, the evidence in favor of H1 is even stronger than that suggested by a 0.05 level of significance.

Example 10.4: A manufacturer of sports equipment has developed a new synthetic fishing line that the company claims has a mean breaking strength of 8 kilograms with a standard deviation of 0.5 kilogram. Test the hypothesis that μ = 8 kilograms against the alternative that μ ≠ 8 kilograms if a random sample of 50 lines is tested and found to have a mean breaking strength of 7.8 kilograms. Use a 0.01 level of significance.

Solution:
1. H0: μ = 8 kilograms.
2. H1: μ ≠ 8 kilograms.
3. α = 0.01.
4. Critical region: z < −2.575 or z > 2.575, where z = (x̄ − μ0)/(σ/√n).
5. Computations: x̄ = 7.8 kilograms, n = 50, and hence z = (7.8 − 8)/(0.5/√50) = −2.83.
6. Decision: Reject H0 and conclude that the average breaking strength is not equal to 8 but is, in fact, less than 8 kilograms.

Since the test in this example is two tailed, the desired P-value is twice the area of the shaded region in Figure 10.11 to the left of z = −2.83.
Therefore, using Table A.3, we have P = P(|Z| > 2.83) = 2P(Z < −2.83) = 0.0046, which allows us to reject the null hypothesis that μ = 8 kilograms at a level of significance smaller than 0.01.
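Both example computations can be verified in a few lines (a sketch in Python; the exact normal CDF stands in for Table A.3):

```python
from statistics import NormalDist

def z_statistic(xbar, mu0, sigma, n):
    """z test statistic for a single mean with sigma known."""
    return (xbar - mu0) / (sigma / n ** 0.5)

# Example 10.3 (one tailed, H1: mu > 70)
z1 = z_statistic(71.8, 70, 8.9, 100)
p1 = 1 - NormalDist().cdf(z1)

# Example 10.4 (two tailed, H1: mu != 8); z2 is negative, so P = 2P(Z < z2)
z2 = z_statistic(7.8, 8, 0.5, 50)
p2 = 2 * NormalDist().cdf(z2)
print(z1, p1, z2, p2)
```

The computed values agree with the worked examples (z ≈ 2.02 with P ≈ 0.0217, and z ≈ −2.83 with P ≈ 0.0046) up to table rounding.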
Figure 10.10: P-value for Example 10.3. Figure 10.11: P-value for Example 10.4.

Relationship to Confidence Interval Estimation

The reader should realize by now that the hypothesis-testing approach to statistical inference in this chapter is very closely related to the confidence interval approach in Chapter 9. Confidence interval estimation involves computation of bounds within which it is "reasonable" for the parameter in question to lie. For the case of a single population mean μ with σ² known, the structure of both hypothesis testing and confidence interval estimation is based on the random variable

Z = (X̄ − μ)/(σ/√n).

It turns out that the testing of H0: μ = μ0 against H1: μ ≠ μ0 at a significance level α is equivalent to computing a 100(1 − α)% confidence interval on μ and rejecting H0 if μ0 is outside the confidence interval. If μ0 is inside the confidence interval, the hypothesis is not rejected. The equivalence is very intuitive and quite simple to illustrate. Recall that with an observed value x̄, failure to reject H0 at significance level α implies that

−zα/2 ≤ (x̄ − μ0)/(σ/√n) ≤ zα/2,

which is equivalent to

x̄ − zα/2 σ/√n ≤ μ0 ≤ x̄ + zα/2 σ/√n.

The equivalence of confidence interval estimation to hypothesis testing extends to differences between two means, variances, ratios of variances, and so on. As a result, the student of statistics should not consider confidence interval estimation and hypothesis testing as separate forms of statistical inference. For example, consider Example 9.2 on page 271. The 95% confidence interval on the mean is given by the bounds (2.50, 2.70). Thus, with the same sample information, a two-sided hypothesis on μ involving any hypothesized value between 2.50 and 2.70 will not be rejected. As we turn to different areas of hypothesis testing, the equivalence to the confidence interval estimation will continue to be exploited.
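The equivalence can also be verified numerically. The sketch below (Python) uses hypothetical summary values x̄ = 2.6, σ = 0.3, n = 36, chosen so that the resulting 95% interval is approximately the (2.50, 2.70) of Example 9.2; none of these inputs come from that example itself:

```python
from statistics import NormalDist

def decisions(xbar, sigma, n, mu0, alpha=0.05):
    """Return (CI rejects H0, z-test rejects H0); the two always agree."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    half = z_crit * sigma / n ** 0.5
    ci_rejects = not (xbar - half <= mu0 <= xbar + half)
    test_rejects = abs((xbar - mu0) / (sigma / n ** 0.5)) > z_crit
    return ci_rejects, test_rejects

# Hypothesized values inside the interval are retained, outside are rejected
for mu0 in (2.45, 2.55, 2.65, 2.75):
    print(mu0, decisions(2.6, 0.3, 36, mu0))
```

Algebraically the two conditions are the same inequality rearranged, so the agreement is exact, not approximate.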
Tests on a Single Sample (Variance Unknown)

One would certainly suspect that tests on a population mean μ with σ² unknown, like confidence interval estimation, should involve the use of the Student t-distribution. Strictly speaking, the application of the Student t-distribution for both confidence intervals and hypothesis testing is developed under the following assumptions. The random variables X1, X2, . . . , Xn represent a random sample from a normal distribution with unknown μ and σ². Then the random variable √n(X̄ − μ)/S has a Student t-distribution with n − 1 degrees of freedom. The structure of the test is identical to that for the case of σ known, with the exception that the value σ in the test statistic is replaced by the computed estimate S and the standard normal distribution is replaced by a t-distribution.

The t-Statistic for a Test on a Single Mean (Variance Unknown): For the two-sided hypothesis

H0: μ = μ0, H1: μ ≠ μ0,

we reject H0 at significance level α when the computed t-statistic

t = (x̄ − μ0)/(s/√n)

exceeds tα/2,n−1 or is less than −tα/2,n−1.

The reader should recall from Chapters 8 and 9 that the t-distribution is symmetric around the value zero. Thus, this two-tailed critical region applies in a fashion similar to that for the case of known σ. For the one-sided alternative H1: μ > μ0, rejection results when t > tα,n−1; for H1: μ < μ0, the critical region is given by t < −tα,n−1.

Example 10.5: The Edison Electric Institute has published figures on the number of kilowatt hours used annually by various home appliances. It is claimed that a vacuum cleaner uses an average of 46 kilowatt hours per year.
If a random sample of 12 homes included in a planned study indicates that vacuum cleaners use an average of 42 kilowatt hours per year with a standard deviation of 11.9 kilowatt hours, does this suggest at the 0.05 level of significance that vacuum cleaners use, on average, less than 46 kilowatt hours annually? Assume the population of kilowatt hours to be normal.

Solution:
1. H0: μ = 46 kilowatt hours.
2. H1: μ < 46 kilowatt hours.
3. α = 0.05.
4. Critical region: t < −1.796, where t = (x̄ − μ0)/(s/√n) with 11 degrees of freedom.
5. Computations: x̄ = 42 kilowatt hours, s = 11.9 kilowatt hours, and n = 12. Hence,

t = (42 − 46)/(11.9/√12) = −1.16, P = P(T < −1.16) ≈ 0.135.
6. Decision: Do not reject H0 and conclude that the average number of kilowatt hours used annually by home vacuum cleaners is not significantly less than 46.

Comment on the Single-Sample t-Test

The reader has probably noticed that the equivalence of the two-tailed t-test for a single mean and the computation of a confidence interval on μ with σ replaced by s is maintained. For example, consider Example 9.5 on page 275. Essentially, we can view that computation as one in which we have found all values of μ0, the hypothesized mean volume of containers of sulfuric acid, for which the hypothesis H0: μ = μ0 will not be rejected at α = 0.05. Again, this is consistent with the statement "Based on the sample information, values of the population mean volume between 9.74 and 10.26 liters are not unreasonable."

Comments regarding the normality assumption are worth emphasizing at this point. We have indicated that when σ is known, the Central Limit Theorem allows for the use of a test statistic or a confidence interval which is based on Z, the standard normal random variable. Strictly speaking, of course, the Central Limit Theorem, and thus the use of the standard normal distribution, does not apply unless σ is known. In Chapter 8, the development of the t-distribution was given. There we pointed out that normality of X1, X2, . . . , Xn was an underlying assumption. Thus, strictly speaking, the Student's t-tables of percentage points for tests or confidence intervals should not be used unless it is known that the sample comes from a normal population. In practice, σ can rarely be assumed to be known. However, a very good estimate may be available from previous experiments. Many statistics textbooks suggest that one can safely replace σ by s in the test statistic

z = (x̄ − μ0)/(σ/√n)

when n ≥ 30 with a bell-shaped population and still use the Z-tables for the appropriate critical region.
The implication here is that the Central Limit Theorem is indeed being invoked and one is relying on the fact that s ≈ σ. Obviously, when this is done, the results must be viewed as approximate. Thus, a computed P-value (from the Z-distribution) of 0.15 may really be 0.12 or perhaps 0.17, or a computed confidence interval may be a 93% confidence interval rather than the 95% interval desired. Now what about situations where n < 30? The user cannot rely on s being close to σ, and in order to take into account the inaccuracy of the estimate, the confidence interval should be wider or the critical value larger in magnitude. The t-distribution percentage points accomplish this but are correct only when the sample is from a normal distribution. Of course, normal probability plots can be used to ascertain some sense of the deviation from normality in a data set.

For small samples, it is often difficult to detect deviations from a normal distribution. (Goodness-of-fit tests are discussed in a later section of this chapter.) For bell-shaped distributions of the random variables X1, X2, . . . , Xn, the use of the t-distribution for tests or confidence intervals is likely to produce quite good results. When in doubt, the user should resort to nonparametric procedures, which are presented in Chapter 16.
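The P-value of Example 10.5 can be reproduced without t-tables. Since the Python standard library has no t-distribution CDF, the sketch below approximates P(T ≤ t) by integrating the t density (11 degrees of freedom) with Simpson's rule; the helper names are ours:

```python
from math import gamma, pi, sqrt

def t_pdf(u, df):
    """Density of the Student t-distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + u * u / df) ** (-(df + 1) / 2)

def t_cdf_left(t, df, steps=2000):
    """P(T <= t) for t <= 0: Simpson's rule on [t, 0] plus symmetry."""
    h = -t / steps                      # positive step width
    s = t_pdf(t, df) + t_pdf(0.0, df)   # endpoint weights
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(t + i * h, df)
    return 0.5 - s * h / 3              # P(T <= 0) minus the integral on [t, 0]

# Example 10.5: t = (42 - 46)/(11.9/sqrt(12)) with 11 degrees of freedom
t = (42 - 46) / (11.9 / sqrt(12))
print(t, t_cdf_left(t, 11))  # near -1.16 and 0.135
```

In practice one would use a library routine (e.g., scipy.stats.t.cdf), but the numerical sketch makes clear that nothing beyond the density formula is involved.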
Annotated Computer Printout for Single-Sample t-Test

It should be of interest for the reader to see an annotated computer printout showing the result of a single-sample t-test. Suppose that an engineer is interested in testing the bias in a pH meter. Data are collected on a neutral substance (pH = 7.0). A sample of the measurements was taken, with the data as follows:

7.07 7.00 7.10 6.97 7.00 7.03 7.01 7.01 6.98 7.08

It is, then, of interest to test

H0: μ = 7.0, H1: μ ≠ 7.0.

In this illustration, we use the computer package MINITAB to illustrate the analysis of the data set above. Notice the key components of the printout shown in Figure 10.12. Of course, the mean ȳ is 7.0250, StDev is simply the sample standard deviation s = 0.044, and SE Mean is the estimated standard error of the mean and is computed as s/√n = 0.0139. The t-value is the ratio (7.0250 − 7)/0.0139 = 1.80.

One-Sample T: pH-meter
Test of mu = 7 vs not = 7
Variable N Mean StDev SE Mean 95% CI T P
pH-meter 10 7.02500 0.04403 0.01392 (6.99350, 7.05650) 1.80 0.106

Figure 10.12: MINITAB printout for one-sample t-test for pH meter.

The P-value of 0.106 suggests results that are inconclusive. There is no evidence suggesting a strong rejection of H0 (based on an α of 0.05 or 0.10), yet one certainly cannot truly conclude that the pH meter is unbiased. Notice that the sample size of 10 is rather small. An increase in sample size (perhaps another experiment) may sort things out. A discussion regarding appropriate sample size appears in Section 10.6.

10.5 Two Samples: Tests on Two Means

The reader should now understand the relationship between tests and confidence intervals and can rely heavily on details supplied by the confidence interval material in Chapter 9.
Tests concerning two means represent a set of very important analytical tools for the scientist or engineer. The experimental setting is very much like that described in Section 9.8. Two independent random samples of sizes
  • 364. 10.5 Two Samples: Tests on Two Means 343 n1 and n2, respectively, are drawn from two populations with means μ1 and μ2 and variances σ2 1 and σ2 2. We know that the random variable Z = (X̄1 − X̄2) − (μ1 − μ2) σ2 1/n1 + σ2 2/n2 has a standard normal distribution. Here we are assuming that n1 and n2 are sufficiently large that the Central Limit Theorem applies. Of course, if the two populations are normal, the statistic above has a standard normal distribution even for small n1 and n2. Obviously, if we can assume that σ1 = σ2 = σ, the statistic above reduces to Z = (X̄1 − X̄2) − (μ1 − μ2) σ 1/n1 + 1/n2 . The two statistics above serve as a basis for the development of the test procedures involving two means. The equivalence between tests and confidence intervals, along with the technical detail involving tests on one mean, allow a simple transition to tests on two means. The two-sided hypothesis on two means can be written generally as H0: μ1 − μ2 = d0. Obviously, the alternative can be two sided or one sided. Again, the distribu- tion used is the distribution of the test statistic under H0. Values x̄1 and x̄2 are computed and, for σ1 and σ2 known, the test statistic is given by z = (x̄1 − x̄2) − d0 σ2 1/n1 + σ2 2/n2 , with a two-tailed critical region in the case of a two-sided alternative. That is, reject H0 in favor of H1: μ1 − μ2 = d0 if z zα/2 or z −zα/2. One-tailed critical regions are used in the case of the one-sided alternatives. The reader should, as before, study the test statistic and be satisfied that for, say, H1: μ1 − μ2 d0, the signal favoring H1 comes from large values of z. Thus, the upper-tailed critical region applies. Unknown But Equal Variances The more prevalent situations involving tests on two means are those in which variances are unknown. If the scientist involved is willing to assume that both distributions are normal and that σ1 = σ2 = σ, the pooled t-test (often called the two-sample t-test) may be used. 
The test statistic (see Section 9.8) is given by the following test procedure.
Two-Sample Pooled t-Test

For the two-sided hypothesis

H0: μ1 = μ2,
H1: μ1 ≠ μ2,

we reject H0 at significance level α when the computed t-statistic

t = [(x̄1 − x̄2) − d0] / [sp√(1/n1 + 1/n2)],  where  sp² = [s1²(n1 − 1) + s2²(n2 − 1)] / (n1 + n2 − 2),

exceeds tα/2,n1+n2−2 or is less than −tα/2,n1+n2−2.

Recall from Chapter 9 that the degrees of freedom for the t-distribution are a result of pooling of information from the two samples to estimate σ². One-sided alternatives suggest one-sided critical regions, as one might expect. For example, for H1: μ1 − μ2 > d0, reject H0: μ1 − μ2 = d0 when t > tα,n1+n2−2.

Example 10.6: An experiment was performed to compare the abrasive wear of two different laminated materials. Twelve pieces of material 1 were tested by exposing each piece to a machine measuring wear. Ten pieces of material 2 were similarly tested. In each case, the depth of wear was observed. The samples of material 1 gave an average (coded) wear of 85 units with a sample standard deviation of 4, while the samples of material 2 gave an average of 81 with a sample standard deviation of 5. Can we conclude at the 0.05 level of significance that the abrasive wear of material 1 exceeds that of material 2 by more than 2 units? Assume the populations to be approximately normal with equal variances.

Solution: Let μ1 and μ2 represent the population means of the abrasive wear for material 1 and material 2, respectively.

1. H0: μ1 − μ2 = 2.
2. H1: μ1 − μ2 > 2.
3. α = 0.05.
4. Critical region: t > 1.725, where t = [(x̄1 − x̄2) − d0] / [sp√(1/n1 + 1/n2)] with v = 20 degrees of freedom.
5. Computations: x̄1 = 85, s1 = 4, n1 = 12, x̄2 = 81, s2 = 5, n2 = 10.
Hence

sp = √[((11)(16) + (9)(25)) / (12 + 10 − 2)] = 4.478,
t = [(85 − 81) − 2] / [4.478√(1/12 + 1/10)] = 1.04,
P = P(T > 1.04) ≈ 0.16.  (See Table A.4.)

6. Decision: Do not reject H0. We are unable to conclude that the abrasive wear of material 1 exceeds that of material 2 by more than 2 units.

Unknown But Unequal Variances

There are situations where the analyst is not able to assume that σ1 = σ2. Recall from Section 9.8 that, if the populations are normal, the statistic

T′ = [(X̄1 − X̄2) − d0] / √(s1²/n1 + s2²/n2)

has an approximate t-distribution with approximate degrees of freedom

v = (s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1)].

As a result, the test procedure is to not reject H0 when

−tα/2,v < t′ < tα/2,v,

with v given as above. Again, as in the case of the pooled t-test, one-sided alternatives suggest one-sided critical regions.

Paired Observations

A study of the two-sample t-test or confidence interval on the difference between means should suggest the need for experimental design. Recall the discussion of experimental units in Chapter 9, where it was suggested that the conditions of the two populations (often referred to as the two treatments) should be assigned randomly to the experimental units. This is done to avoid biased results due to systematic differences between experimental units. In other words, in hypothesis-testing jargon, it is important that any significant difference found between means be due to the different conditions of the populations and not due to the experimental units in the study. For example, consider Exercise 9.40 in Section 9.9. The 20 seedlings play the role of the experimental units. Ten of them are to be treated with nitrogen and 10 with no nitrogen. It may be very important that this assignment to the "nitrogen" and "no-nitrogen" treatments be random to ensure that systematic differences between the seedlings do not interfere with a valid comparison between the means.
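The arithmetic of the pooled t-test in Example 10.6 can be mirrored directly in code. Below is a minimal Python sketch (standard library only; the variable names are ours) that computes the pooled standard deviation and the t-statistic from the summary statistics:

```python
import math

# Summary statistics from Example 10.6
n1, xbar1, s1 = 12, 85.0, 4.0   # material 1
n2, xbar2, s2 = 10, 81.0, 5.0   # material 2
d0 = 2.0                         # hypothesized difference under H0

# Pooled variance: weighted average of the two sample variances
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
sp = math.sqrt(sp2)

# Pooled t-statistic with v = n1 + n2 - 2 = 20 degrees of freedom
t = ((xbar1 - xbar2) - d0) / (sp * math.sqrt(1 / n1 + 1 / n2))

print(f"sp = {sp:.3f}, t = {t:.2f}")   # matches sp = 4.478, t = 1.04
```

Since t = 1.04 does not exceed the critical value t0.05,20 = 1.725, H0 is not rejected, in agreement with the decision in the example.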
In Example 10.6, time of measurement is the most likely choice for the experimental unit. The 22 pieces of material should be measured in random order. We need to guard against the possibility that wear measurements made close together in time might tend to give similar results. Systematic (nonrandom) differences in experimental units are not expected. However, random assignments guard against the problem.

References to planning of experiments, randomization, choice of sample size, and so on, will continue to influence much of the development in Chapters 13, 14, and 15. Any scientist or engineer whose interest lies in analysis of real data should study this material. The pooled t-test is extended in Chapter 13 to cover more than two means.

Testing of two means can be accomplished when data are in the form of paired observations, as discussed in Chapter 9. In this pairing structure, the conditions of the two populations (treatments) are assigned randomly within homogeneous units. Computation of the confidence interval for μ1 − μ2 in the situation with paired observations is based on the random variable

T = (D̄ − μD) / (Sd/√n),

where D̄ and Sd are random variables representing the sample mean and standard deviation of the differences of the observations in the experimental units. As in the case of the pooled t-test, the assumption is that the observations from each population are normal. This two-sample problem is essentially reduced to a one-sample problem by using the computed differences d1, d2, . . . , dn. Thus, the hypothesis reduces to

H0: μD = d0.

The computed test statistic is then given by

t = (d̄ − d0) / (sd/√n).

Critical regions are constructed using the t-distribution with n − 1 degrees of freedom.

Problem of Interaction in a Paired t-Test

Not only will the case study that follows illustrate the use of the paired t-test but the discussion will shed considerable light on the difficulties that arise when there is an interaction between the treatments and the experimental units in the paired t structure.
Recall that interaction between factors was introduced in Section 1.7 in a discussion of general types of statistical studies. The concept of interaction will be an important issue from Chapter 13 through Chapter 15. There are some types of statistical tests in which the existence of interaction results in difficulty. The paired t-test is one such example. In Section 9.9, the paired structure was used in the computation of a confidence interval on the difference between two means, and the advantage in pairing was revealed for situations in which the experimental units are homogeneous. The pairing results in a reduction in σD, the standard deviation of a difference Di = X1i − X2i, as discussed in Section 9.9. If interaction exists between treatments and experimental units, the advantage gained in pairing may be substantially reduced. Thus, in Example 9.13 on page 293, the no-interaction assumption allowed the difference in mean TCDD levels (plasma vs. fat tissue) to be the same across veterans. A quick glance at the data would suggest that there is no significant violation of the assumption of no interaction.

In order to demonstrate how interaction influences Var(D) and hence the quality of the paired t-test, it is instructive to revisit the ith difference given by

Di = X1i − X2i = (μ1 − μ2) + (ε1 − ε2),

where X1i and X2i are taken on the ith experimental unit. If the pairing unit is homogeneous, the errors in X1i and in X2i should be similar and not independent. We noted in Chapter 9 that the positive covariance between the errors results in a reduced Var(D). Thus, the size of the difference in the treatments and the relationship between the errors in X1i and X2i contributed by the experimental unit will tend to allow a significant difference to be detected.

What Conditions Result in Interaction?

Let us consider a situation in which the experimental units are not homogeneous. Rather, consider the ith experimental unit with random variables X1i and X2i that are not similar. Let ε1i and ε2i be random variables representing the errors in the values X1i and X2i, respectively, at the ith unit. Thus, we may write

X1i = μ1 + ε1i  and  X2i = μ2 + ε2i.

The errors with expectation zero may tend to cause the response values X1i and X2i to move in opposite directions, resulting in a negative value for Cov(ε1i, ε2i) and hence negative Cov(X1i, X2i). In fact, the model may be complicated even more by the fact that σ1² = Var(ε1i) ≠ σ2² = Var(ε2i). The variance and covariance parameters may vary among the n experimental units.
Thus, unlike in the homogeneous case, Di will tend to be quite different across experimental units due to the heterogeneous nature of the difference in ε1 − ε2 among the units. This produces the interaction between treatments and units. In addition, for a specific experimental unit (see Theorem 4.9),

σD² = Var(D) = Var(ε1) + Var(ε2) − 2 Cov(ε1, ε2)

is inflated by the negative covariance term, and thus the advantage gained in pairing in the homogeneous unit case is lost in the case described here. While the inflation in Var(D) will vary from case to case, there is a danger in some cases that the increase in variance may neutralize any difference that exists between μ1 and μ2. Of course, a large value of d̄ in the t-statistic may reflect a treatment difference that overcomes the inflated variance estimate, sd².

Case Study 10.1: Blood Sample Data: In a study conducted in the Forestry and Wildlife Department at Virginia Tech, J. A. Wesson examined the influence of the drug succinylcholine on the circulation levels of androgens in the blood. Blood samples were taken from wild, free-ranging deer immediately after they had received an intramuscular injection of succinylcholine administered using darts and a capture gun. A second blood sample was obtained from each deer 30 minutes after the first sample, after which the deer was released. The levels of androgens at time of capture and 30 minutes later, measured in nanograms per milliliter (ng/mL), for 15 deer are given in Table 10.2.

Assuming that the populations of androgen levels at time of injection and 30 minutes later are normally distributed, test at the 0.05 level of significance whether the androgen concentrations are altered after 30 minutes.

Table 10.2: Data for Case Study 10.1

                 Androgen (ng/mL)
Deer  At Time of Injection  30 Minutes after Injection      di
  1           2.76                    7.02                  4.26
  2           5.18                    3.10                 −2.08
  3           2.68                    5.44                  2.76
  4           3.05                    3.99                  0.94
  5           4.10                    5.21                  1.11
  6           7.05                   10.26                  3.21
  7           6.60                   13.91                  7.31
  8           4.79                   18.53                 13.74
  9           7.39                    7.91                  0.52
 10           7.30                    4.85                 −2.45
 11          11.78                   11.10                 −0.68
 12           3.90                    3.74                 −0.16
 13          26.00                   94.03                 68.03
 14          67.48                   94.03                 26.55
 15          17.04                   41.70                 24.66

Solution: Let μ1 and μ2 be the average androgen concentration at the time of injection and 30 minutes later, respectively. We proceed as follows:

1. H0: μ1 = μ2 or μD = μ1 − μ2 = 0.
2. H1: μ1 ≠ μ2 or μD = μ1 − μ2 ≠ 0.
3. α = 0.05.
4. Critical region: t < −2.145 and t > 2.145, where t = (d̄ − d0)/(sd/√n) with v = 14 degrees of freedom.
5. Computations: The sample mean and standard deviation for the di are d̄ = 9.848 and sd = 18.474. Therefore, t = (9.848 − 0)/(18.474/√15) = 2.06.
6. Though the t-statistic is not significant at the 0.05 level, from Table A.4, P = P(|T| > 2.06) ≈ 0.06. As a result, there is some evidence that there is a difference in mean circulating levels of androgen.
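As the text notes, the paired t-test reduces to a one-sample t-test on the computed differences. A minimal Python sketch (standard library only) of the Case Study 10.1 computation:

```python
import math
import statistics

# Differences d_i = (30 minutes after injection) - (at time of injection), Table 10.2
d = [4.26, -2.08, 2.76, 0.94, 1.11, 3.21, 7.31, 13.74,
     0.52, -2.45, -0.68, -0.16, 68.03, 26.55, 24.66]

n = len(d)
dbar = statistics.mean(d)                # sample mean of the differences
sd = statistics.stdev(d)                 # sample standard deviation of the differences
t = (dbar - 0.0) / (sd / math.sqrt(n))   # test statistic for H0: mu_D = 0

print(f"dbar = {dbar:.3f}, sd = {sd:.3f}, t = {t:.2f}")
```

With v = 14 degrees of freedom, the computed t ≈ 2.06 falls just short of the critical value 2.145, which corresponds to the P ≈ 0.06 reported in the solution.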
The assumption of no interaction would imply that the effect of the drug on androgen levels of the deer is roughly the same in the data for both treatments, i.e., at the time of injection of succinylcholine and 30 minutes following injection. This can be expressed with the two factors switching roles; for example, the difference in treatments is roughly the same across the units (i.e., the deer). There certainly are some deer/treatment combinations for which the no-interaction assumption seems to hold, but there is hardly any strong evidence that the experimental units are homogeneous. However, the nature of the interaction and the resulting increase in Var(D̄) appear to be dominated by a substantial difference in the treatments. This is further demonstrated by the fact that 11 of the 15 deer exhibited positive signs for the computed di, and the negative di (for deer 2, 10, 11, and 12) are small in magnitude compared to the 11 positive ones. Thus, it appears that the mean level of androgen is significantly higher 30 minutes following injection than at injection, and the conclusions may be stronger than p = 0.06 would suggest.

Annotated Computer Printout for Paired t-Test

Figure 10.13 displays a SAS computer printout for a paired t-test using the data of Case Study 10.1. Notice that the printout looks like that for a single-sample t-test and, of course, that is exactly what is accomplished, since the test seeks to determine if d̄ is significantly different from zero.

Analysis Variable : Diff
 N       Mean    Std Error   t Value   Pr > |t|
---------------------------------------------------------
15  9.8480000   4.7698699      2.06     0.0580
---------------------------------------------------------

Figure 10.13: SAS printout of paired t-test for data of Case Study 10.1.
Summary of Test Procedures

As we complete the formal development of tests on population means, we offer Table 10.3, which summarizes the test procedure for the cases of a single mean and two means. Notice the approximate procedure when distributions are normal and variances are unknown but not assumed to be equal. This statistic was introduced in Chapter 9.

Table 10.3: Tests Concerning Means

H0: μ = μ0 (σ known)
  Test statistic: z = (x̄ − μ0)/(σ/√n)
  H1: μ < μ0, critical region z < −zα
  H1: μ > μ0, critical region z > zα
  H1: μ ≠ μ0, critical region z < −zα/2 or z > zα/2

H0: μ = μ0 (σ unknown; v = n − 1)
  Test statistic: t = (x̄ − μ0)/(s/√n)
  H1: μ < μ0, critical region t < −tα
  H1: μ > μ0, critical region t > tα
  H1: μ ≠ μ0, critical region t < −tα/2 or t > tα/2

H0: μ1 − μ2 = d0 (σ1 and σ2 known)
  Test statistic: z = [(x̄1 − x̄2) − d0]/√(σ1²/n1 + σ2²/n2)
  H1: μ1 − μ2 < d0, critical region z < −zα
  H1: μ1 − μ2 > d0, critical region z > zα
  H1: μ1 − μ2 ≠ d0, critical region z < −zα/2 or z > zα/2

H0: μ1 − μ2 = d0 (σ1 = σ2 but unknown; v = n1 + n2 − 2, sp² = [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2))
  Test statistic: t = [(x̄1 − x̄2) − d0]/[sp√(1/n1 + 1/n2)]
  H1: μ1 − μ2 < d0, critical region t < −tα
  H1: μ1 − μ2 > d0, critical region t > tα
  H1: μ1 − μ2 ≠ d0, critical region t < −tα/2 or t > tα/2

H0: μ1 − μ2 = d0 (σ1 ≠ σ2 and unknown; v = (s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1)])
  Test statistic: t′ = [(x̄1 − x̄2) − d0]/√(s1²/n1 + s2²/n2)
  H1: μ1 − μ2 < d0, critical region t′ < −tα
  H1: μ1 − μ2 > d0, critical region t′ > tα
  H1: μ1 − μ2 ≠ d0, critical region t′ < −tα/2 or t′ > tα/2

H0: μD = d0 (paired observations; v = n − 1)
  Test statistic: t = (d̄ − d0)/(sd/√n)
  H1: μD < d0, critical region t < −tα
  H1: μD > d0, critical region t > tα
  H1: μD ≠ d0, critical region t < −tα/2 or t > tα/2

10.6 Choice of Sample Size for Testing Means

In Section 10.2, we demonstrated how the analyst can exploit relationships among the sample size, the significance level α, and the power of the test to achieve a certain standard of quality. In most practical circumstances, the experiment should be planned, with a choice of sample size made prior to the data-taking process if possible. The sample size is usually determined to achieve good power for a fixed α and a fixed specific alternative. This fixed alternative may be in the form of μ − μ0 in the case of a hypothesis involving a single mean or μ1 − μ2 in the case of a problem involving two means. Specific cases will provide illustrations.

Suppose that we wish to test the hypothesis

H0: μ = μ0,
H1: μ > μ0,

with a significance level α, when the variance σ² is known. For a specific alternative, say μ = μ0 + δ, the power of our test is shown in Figure 10.14 to be

1 − β = P(X̄ > a when μ = μ0 + δ).

Therefore,

β = P(X̄ < a when μ = μ0 + δ)
  = P[ (X̄ − (μ0 + δ))/(σ/√n) < (a − (μ0 + δ))/(σ/√n) when μ = μ0 + δ ].
[Figure 10.14: Testing μ = μ0 versus μ = μ0 + δ.]

Under the alternative hypothesis μ = μ0 + δ, the statistic

(X̄ − (μ0 + δ))/(σ/√n)

is the standard normal variable Z. So

β = P[Z < (a − μ0)/(σ/√n) − δ/(σ/√n)] = P[Z < zα − δ/(σ/√n)],

from which we conclude that

−zβ = zα − δ√n/σ,

and hence

Choice of sample size:  n = (zα + zβ)² σ² / δ²,

a result that is also true when the alternative hypothesis is μ < μ0. In the case of a two-tailed test, we obtain the power 1 − β for a specified alternative when

n ≈ (zα/2 + zβ)² σ² / δ².

Example 10.7: Suppose that we wish to test the hypothesis

H0: μ = 68 kilograms,
H1: μ > 68 kilograms

for the weights of male students at a certain college, using an α = 0.05 level of significance, when it is known that σ = 5. Find the sample size required if the power of our test is to be 0.95 when the true mean is 69 kilograms.
Solution: Since α = β = 0.05, we have zα = zβ = 1.645. For the alternative μ = 69, we take δ = 1 and then

n = (1.645 + 1.645)²(25)/1 = 270.6.

Therefore, 271 observations are required if the test is to reject the null hypothesis 95% of the time when, in fact, μ is as large as 69 kilograms.

Two-Sample Case

A similar procedure can be used to determine the sample size n = n1 = n2 required for a specific power of the test in which two population means are being compared. For example, suppose that we wish to test the hypothesis

H0: μ1 − μ2 = d0,
H1: μ1 − μ2 ≠ d0,

when σ1 and σ2 are known. For a specific alternative, say μ1 − μ2 = d0 + δ, the power of our test is shown in Figure 10.15 to be

1 − β = P(|X̄1 − X̄2| > a when μ1 − μ2 = d0 + δ).

[Figure 10.15: Testing μ1 − μ2 = d0 versus μ1 − μ2 = d0 + δ.]

Therefore,

β = P(−a < X̄1 − X̄2 < a when μ1 − μ2 = d0 + δ)
  = P[ (−a − (d0 + δ))/√((σ1² + σ2²)/n) < ((X̄1 − X̄2) − (d0 + δ))/√((σ1² + σ2²)/n) < (a − (d0 + δ))/√((σ1² + σ2²)/n) when μ1 − μ2 = d0 + δ ].

Under the alternative hypothesis μ1 − μ2 = d0 + δ, the statistic

[(X̄1 − X̄2) − (d0 + δ)]/√((σ1² + σ2²)/n)

is the standard normal variable Z. Now, writing

−zα/2 = (−a − d0)/√((σ1² + σ2²)/n)  and  zα/2 = (a − d0)/√((σ1² + σ2²)/n),

we have

β = P[ −zα/2 − δ/√((σ1² + σ2²)/n) < Z < zα/2 − δ/√((σ1² + σ2²)/n) ],

from which we conclude that

−zβ ≈ zα/2 − δ/√((σ1² + σ2²)/n),

and hence

n ≈ (zα/2 + zβ)²(σ1² + σ2²)/δ².

For the one-tailed test, the expression for the required sample size when n = n1 = n2 is

Choice of sample size:  n = (zα + zβ)²(σ1² + σ2²)/δ².

When the population variance (or variances, in the two-sample situation) is unknown, the choice of sample size is not straightforward. In testing the hypothesis μ = μ0 when the true value is μ = μ0 + δ, the statistic

(X̄ − (μ0 + δ))/(S/√n)

does not follow the t-distribution, as one might expect, but instead follows the noncentral t-distribution. However, tables or charts based on the noncentral t-distribution do exist for determining the appropriate sample size if some estimate of σ is available or if δ is a multiple of σ. Table A.8 gives the sample sizes needed to control the values of α and β for various values of

Δ = |δ|/σ = |μ − μ0|/σ

for both one- and two-tailed tests. In the case of the two-sample t-test in which the variances are unknown but assumed equal, we obtain the sample sizes n = n1 = n2 needed to control the values of α and β for various values of

Δ = |δ|/σ = |μ1 − μ2 − d0|/σ

from Table A.9.

Example 10.8: In comparing the performance of two catalysts on the effect of a reaction yield, a two-sample t-test is to be conducted with α = 0.05. The variances in the yields
are considered to be the same for the two catalysts. How large a sample for each catalyst is needed to test the hypothesis

H0: μ1 = μ2,
H1: μ1 ≠ μ2

if it is essential to detect a difference of 0.8σ between the catalysts with probability 0.9?

Solution: From Table A.9, with α = 0.05 for a two-tailed test, β = 0.1, and

Δ = |0.8σ|/σ = 0.8,

we find the required sample size to be n = 34.

In practical situations, it might be difficult to force a scientist or engineer to make a commitment on information from which a value of Δ can be found. The reader is reminded that the Δ-value quantifies the kind of difference between the means that the scientist considers important, that is, a difference considered significant from a scientific, not a statistical, point of view. Example 10.8 illustrates how this choice is often made, namely, by selecting a fraction of σ. Obviously, if the sample size is based on a choice of |δ| that is a small fraction of σ, the resulting sample size may be quite large compared to what the study allows.

10.7 Graphical Methods for Comparing Means

In Chapter 1, considerable attention was directed to displaying data in graphical form, such as stem-and-leaf plots and box-and-whisker plots. In Section 8.8, quantile plots and quantile-quantile normal plots were used to provide a "picture" to summarize a set of experimental data. Many computer software packages produce graphical displays. As we proceed to other forms of data analysis (e.g., regression analysis and analysis of variance), graphical methods become even more informative.

Graphical aids cannot be used as a replacement for the test procedure itself. Certainly, the value of the test statistic indicates the proper type of evidence in support of H0 or H1. However, a pictorial display provides a good illustration and is often a better communicator of evidence to the beneficiary of the analysis.
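The normal-theory sample-size formulas of this section are straightforward to compute. The sketch below (standard library only; variable names are ours) reproduces Example 10.7 and then applies the known-variance two-sample formula to the setting of Example 10.8. Note that it uses exact normal quantiles rather than the rounded table value 1.645, so the intermediate numbers differ slightly from the text's:

```python
import math
from statistics import NormalDist

z = NormalDist().inv_cdf   # standard normal quantile function

# One-sample case (Example 10.7): alpha = beta = 0.05, sigma = 5, delta = 1
n_one = (z(0.95) + z(0.95)) ** 2 * 5.0**2 / 1.0**2
# ~270.6 in the text (rounded quantiles); round up to 271 either way

# Two-sample setting of Example 10.8, but via the known-variance normal
# approximation: two-tailed, alpha = 0.05, power 0.9, delta = 0.8*sigma,
# sigma1 = sigma2 = sigma, so sigma^2 cancels in
# (z_{alpha/2} + z_beta)^2 (sigma1^2 + sigma2^2) / delta^2
n_two = (z(0.975) + z(0.90)) ** 2 * 2 / 0.8**2

print(math.ceil(n_one), math.ceil(n_two))
```

The normal approximation gives n = 33 per catalyst, slightly below the n = 34 obtained from the noncentral-t-based Table A.9, which correctly accounts for the fact that σ must be estimated.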
Also, a picture will often clarify why a significant difference was found. Failure of an important assumption may be exposed by a summary type of graphical tool.

For the comparison of means, side-by-side box-and-whisker plots provide a telling display. The reader should recall that these plots display the 25th percentile, 75th percentile, and the median in a data set. In addition, the whiskers display the extremes in a data set. Consider Exercise 10.40 at the end of this section. Plasma ascorbic acid levels were measured in two groups of pregnant women, smokers and nonsmokers. Figure 10.16 shows the box-and-whisker plots for both groups of women. Two things are very apparent. Taking into account variability, there appears to be a negligible difference in the sample means. In addition, the variability in the two groups appears to be somewhat different. Of course, the analyst must keep in mind the rather sizable differences between the sample sizes in this case.
[Figure 10.16: Two box-and-whisker plots of plasma ascorbic acid in smokers and nonsmokers.]

[Figure 10.17: Two box-and-whisker plots of seedling data.]

Consider Exercise 9.40 in Section 9.9. Figure 10.17 shows the multiple box-and-whisker plot for the data on the 20 seedlings, half given nitrogen and half given no nitrogen. The display reveals a smaller variability for the group given no nitrogen. In addition, the lack of overlap of the box plots suggests a significant difference between the mean stem weights for the two groups. It would appear that the presence of nitrogen increases the stem weights and perhaps increases the variability in the weights.

There are no certain rules of thumb regarding when two box-and-whisker plots give evidence of significant difference between the means. However, a rough guideline is that if the 25th percentile line for one sample exceeds the median line for the other sample, there is strong evidence of a difference between means.

More emphasis is placed on graphical methods in a real-life case study presented later in this chapter.

Annotated Computer Printout for Two-Sample t-Test

Consider once again Exercise 9.40 on page 294, where seedling data under conditions of nitrogen and no nitrogen were collected. Test

H0: μNIT = μNON,
H1: μNIT > μNON,

where the population means indicate mean weights. Figure 10.18 is an annotated computer printout generated using the SAS package. Notice that the sample standard deviation and standard error are shown for both samples. The t-statistics under the assumption of equal variance and unequal variance are both given. From the box-and-whisker plot of Figure 10.17 it would certainly appear that the equal variance assumption is violated. A P-value of 0.0229 suggests a conclusion of unequal means.
This concurs with the diagnostic information given in Figure 10.18. Incidentally, notice that t and t′ are equal in this case, since n1 = n2.
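The unequal-variance line of the SAS output in Figure 10.18 can be verified directly from the summary statistics. A minimal Python sketch (standard library only) of the unequal-variance t′ statistic and the approximate (Satterthwaite) degrees of freedom:

```python
import math

# Summary statistics for the seedling data, as shown in the SAS output
n1, xbar1, s1 = 10, 0.3990, 0.0728   # no nitrogen
n2, xbar2, s2 = 10, 0.5650, 0.1867   # nitrogen

v1, v2 = s1**2 / n1, s2**2 / n2      # per-sample variance of the mean

# Unequal-variance t' statistic (d0 = 0)
t_prime = (xbar2 - xbar1) / math.sqrt(v1 + v2)

# Approximate degrees of freedom
v = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

print(f"t' = {t_prime:.2f}, v = {v:.1f}")   # matches t Value 2.62, DF 11.7
```

The noninteger degrees of freedom 11.7 explain why the unequal-variance P-value (0.0229) differs from the pooled one (0.0174) even though the two t-values coincide here.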
TTEST Procedure
Variable: Weight

Mineral      N    Mean   Std Dev  Std Err
No nitrogen 10  0.3990   0.0728   0.0230
Nitrogen    10  0.5650   0.1867   0.0591

Variances    DF  t Value  Pr > |t|
Equal        18     2.62    0.0174
Unequal    11.7     2.62    0.0229

Test the Equality of Variances
Variable  Num DF  Den DF  F Value  Pr > F
Weight         9       9     6.58   0.0098

Figure 10.18: SAS printout for two-sample t-test.

Exercises

10.19 In a research report, Richard H. Weindruch of the UCLA Medical School claims that mice with an average life span of 32 months will live to be about 40 months old when 40% of the calories in their diet are replaced by vitamins and protein. Is there any reason to believe that μ < 40 if 64 mice that are placed on this diet have an average life of 38 months with a standard deviation of 5.8 months? Use a P-value in your conclusion.

10.20 A random sample of 64 bags of white cheddar popcorn weighed, on average, 5.23 ounces with a standard deviation of 0.24 ounce. Test the hypothesis that μ = 5.5 ounces against the alternative hypothesis, μ < 5.5 ounces, at the 0.05 level of significance.

10.21 An electrical firm manufactures light bulbs that have a lifetime that is approximately normally distributed with a mean of 800 hours and a standard deviation of 40 hours. Test the hypothesis that μ = 800 hours against the alternative, μ ≠ 800 hours, if a random sample of 30 bulbs has an average life of 788 hours. Use a P-value in your answer.

10.22 In the American Heart Association journal Hypertension, researchers report that individuals who practice Transcendental Meditation (TM) lower their blood pressure significantly. If a random sample of 225 male TM practitioners meditate for 8.5 hours per week with a standard deviation of 2.25 hours, does that suggest that, on average, men who use TM meditate more than 8 hours per week? Quote a P-value in your conclusion.
10.23 Test the hypothesis that the average content of containers of a particular lubricant is 10 liters if the contents of a random sample of 10 containers are 10.2, 9.7, 10.1, 10.3, 10.1, 9.8, 9.9, 10.4, 10.3, and 9.8 liters. Use a 0.01 level of significance and assume that the distribution of contents is normal.

10.24 The average height of females in the freshman class of a certain college has historically been 162.5 centimeters with a standard deviation of 6.9 centimeters. Is there reason to believe that there has been a change in the average height if a random sample of 50 females in the present freshman class has an average height of 165.2 centimeters? Use a P-value in your conclusion. Assume the standard deviation remains the same.

10.25 It is claimed that automobiles are driven on average more than 20,000 kilometers per year. To test this claim, 100 randomly selected automobile owners are asked to keep a record of the kilometers they travel. Would you agree with this claim if the random sample showed an average of 23,500 kilometers and a standard deviation of 3900 kilometers? Use a P-value in your conclusion.

10.26 According to a dietary study, high sodium intake may be related to ulcers, stomach cancer, and migraine headaches. The human requirement for salt is only 220 milligrams per day, which is surpassed in most single servings of ready-to-eat cereals. If a random sample of 20 similar servings of a certain cereal has a mean sodium content of 244 milligrams and a standard deviation of 24.5 milligrams, does this suggest at the 0.05 level of significance that the average sodium content for a single serving of such cereal is greater than 220 milligrams? Assume the distribution of sodium contents to be normal.
10.27 A study at the University of Colorado at Boulder shows that running increases the percent resting metabolic rate (RMR) in older women. The average RMR of 30 elderly women runners was 34.0% higher than the average RMR of 30 sedentary elderly women, and the standard deviations were reported to be 10.5 and 10.2%, respectively. Was there a significant increase in RMR of the women runners over the sedentary women? Assume the populations to be approximately normally distributed with equal variances. Use a P-value in your conclusions.

10.28 According to Chemical Engineering, an important property of fiber is its water absorbency. The average percent absorbency of 25 randomly selected pieces of cotton fiber was found to be 20 with a standard deviation of 1.5. A random sample of 25 pieces of acetate yielded an average percent of 12 with a standard deviation of 1.25. Is there strong evidence that the population mean percent absorbency is significantly higher for cotton fiber than for acetate? Assume that the percent absorbency is approximately normally distributed and that the population variances in percent absorbency for the two fibers are the same. Use a significance level of 0.05.

10.29 Past experience indicates that the time required for high school seniors to complete a standardized test is a normal random variable with a mean of 35 minutes. If a random sample of 20 high school seniors took an average of 33.1 minutes to complete this test with a standard deviation of 4.3 minutes, test the hypothesis, at the 0.05 level of significance, that μ = 35 minutes against the alternative that μ < 35 minutes.

10.30 A random sample of size n1 = 25, taken from a normal population with a standard deviation σ1 = 5.2, has a mean x̄1 = 81. A second random sample of size n2 = 36, taken from a different normal population with a standard deviation σ2 = 3.4, has a mean x̄2 = 76.
Test the hypothesis that μ1 = μ2 against the alternative, μ1 ≠ μ2. Quote a P-value in your conclusion.

10.31 A manufacturer claims that the average tensile strength of thread A exceeds the average tensile strength of thread B by at least 12 kilograms. To test this claim, 50 pieces of each type of thread were tested under similar conditions. Type A thread had an average tensile strength of 86.7 kilograms with a standard deviation of 6.28 kilograms, while type B thread had an average tensile strength of 77.8 kilograms with a standard deviation of 5.61 kilograms. Test the manufacturer's claim using a 0.05 level of significance.

10.32 Amstat News (December 2004) lists median salaries for associate professors of statistics at research institutions and at liberal arts and other institutions in the United States. Assume that a sample of 200 associate professors from research institutions has an average salary of $70,750 per year with a standard deviation of $6000. Assume also that a sample of 200 associate professors from other types of institutions has an average salary of $65,200 with a standard deviation of $5000. Test the hypothesis that the mean salary for associate professors in research institutions is $2000 higher than for those in other institutions. Use a 0.01 level of significance.

10.33 A study was conducted to see if increasing the substrate concentration has an appreciable effect on the velocity of a chemical reaction. With a substrate concentration of 1.5 moles per liter, the reaction was run 15 times, with an average velocity of 7.5 micromoles per 30 minutes and a standard deviation of 1.5. With a substrate concentration of 2.0 moles per liter, 12 runs were made, yielding an average velocity of 8.8 micromoles per 30 minutes and a sample standard deviation of 1.2. Is there any reason to believe that this increase in substrate concentration causes an increase in the mean velocity of the reaction of more than 0.5 micromole per 30 minutes?
Use a 0.01 level of significance and assume the populations to be approximately normally distributed with equal variances.

10.34 A study was made to determine if the subject matter in a physics course is better understood when a lab constitutes part of the course. Students were randomly selected to participate in either a 3-semester-hour course without labs or a 4-semester-hour course with labs. In the section with labs, 11 students made an average grade of 85 with a standard deviation of 4.7, and in the section without labs, 17 students made an average grade of 79 with a standard deviation of 6.1. Would you say that the laboratory course increases the average grade by as much as 8 points? Use a P-value in your conclusion and assume the populations to be approximately normally distributed with equal variances.

10.35 To find out whether a new serum will arrest leukemia, 9 mice, all with an advanced stage of the disease, are selected. Five mice receive the treatment and 4 do not. Survival times, in years, from the time the experiment commenced are as follows:

Treatment     2.1  5.3  1.4  4.6  0.9
No Treatment  1.9  0.5  2.8  3.1

At the 0.05 level of significance, can the serum be said to be effective? Assume the two populations to be normally distributed with equal variances.

10.36 Engineers at a large automobile manufacturing company are trying to decide whether to purchase brand A or brand B tires for the company's new models. To help them arrive at a decision, an experiment is conducted using 12 of each brand. The tires are run
358 Chapter 10 One- and Two-Sample Tests of Hypotheses

until they wear out. The results are as follows:

Brand A: x̄1 = 37,900 kilometers, s1 = 5100 kilometers.
Brand B: x̄2 = 39,800 kilometers, s2 = 5900 kilometers.

Test the hypothesis that there is no difference in the average wear of the two brands of tires. Assume the populations to be approximately normally distributed with equal variances. Use a P-value.

10.37 In Exercise 9.42 on page 295, test the hypothesis that the fuel economy of Volkswagen mini-trucks, on average, exceeds that of similarly equipped Toyota mini-trucks by 4 kilometers per liter. Use a 0.10 level of significance.

10.38 A UCLA researcher claims that the average life span of mice can be extended by as much as 8 months when the calories in their diet are reduced by approximately 40% from the time they are weaned. The restricted diets are enriched to normal levels by vitamins and protein. Suppose that a random sample of 10 mice is fed a normal diet and has an average life span of 32.1 months with a standard deviation of 3.2 months, while a random sample of 15 mice is fed the restricted diet and has an average life span of 37.6 months with a standard deviation of 2.8 months. Test the hypothesis, at the 0.05 level of significance, that the average life span of mice on this restricted diet is increased by 8 months against the alternative that the increase is less than 8 months. Assume the distributions of life spans for the regular and restricted diets are approximately normal with equal variances.

10.39 The following data represent the running times of films produced by two motion-picture companies:

Company  Time (minutes)
1        102   86   98  109   92
2         81  165   97  134   92   87  114

Test the hypothesis that the average running time of films produced by company 2 exceeds the average running time of films produced by company 1 by 10 minutes against the one-sided alternative that the difference is less than 10 minutes.
Use a 0.1 level of significance and assume the distributions of times to be approximately normal with unequal variances.

10.40 In a study conducted at Virginia Tech, the plasma ascorbic acid levels of pregnant women were compared for smokers versus nonsmokers. Thirty-two women in the last three months of pregnancy, free of major health disorders and ranging in age from 15 to 32 years, were selected for the study. Prior to the collection of 20 ml of blood, the participants were told to avoid breakfast, forgo their vitamin supplements, and avoid foods high in ascorbic acid content. From the blood samples, the following plasma ascorbic acid values were determined, in milligrams per 100 milliliters:

Plasma Ascorbic Acid Values
Nonsmokers: 0.97 1.16 0.48 0.72 0.86 0.71 1.00 0.85 0.98 0.81 0.58 0.68 0.62 0.57 1.18 1.32 0.64 1.36 1.24 0.98 0.78 0.99 1.09 1.64 0.90 0.92 0.74 0.78 0.88 1.24 0.94 1.18 (values for nonsmokers and smokers combined; the column assignment was lost in extraction)

Is there sufficient evidence to conclude that there is a difference between plasma ascorbic acid levels of smokers and nonsmokers? Assume that the two sets of data came from normal populations with unequal variances. Use a P-value.

10.41 A study was conducted by the Department of Zoology at Virginia Tech to determine if there is a significant difference in the density of organisms at two different stations located on Cedar Run, a secondary stream in the Roanoke River drainage basin. Sewage from a sewage treatment plant and overflow from the Federal Mogul Corporation settling pond enter the stream near its headwaters. The following data give the density measurements, in number of organisms per square meter, at the two collecting stations:

Number of Organisms per Square Meter
Station 1 and Station 2 (column assignment lost in extraction): 5030 4980 2800 2810 13,700 11,910 4670 1330 10,730 8130 6890 3320 11,400 26,850 7720 1230 860 17,660 7030 2130 2200 22,800 7330 2190 4250 1130 15,040 1690

Can we conclude, at the 0.05 level of significance, that the average densities at the two stations are equal?
Assume that the observations come from normal populations with different variances.

10.42 Five samples of a ferrous-type substance were used to determine if there is a difference between a laboratory chemical analysis and an X-ray fluorescence analysis of the iron content. Each sample was split into two subsamples and the two types of analysis were applied. Following are the coded data showing the iron content analysis:
Sample
Analysis    1    2    3    4    5
X-ray      2.0  2.0  2.3  2.1  2.4
Chemical   2.2  1.9  2.5  2.3  2.4

Assuming that the populations are normal, test at the 0.05 level of significance whether the two methods of analysis give, on the average, the same result.

10.43 According to published reports, practice under fatigued conditions distorts mechanisms that govern performance. An experiment was conducted using 15 college males, who were trained to make a continuous horizontal right-to-left arm movement from a microswitch to a barrier, knocking over the barrier coincident with the arrival of a clock sweephand to the 6 o'clock position. The absolute value of the difference between the time, in milliseconds, that it took to knock over the barrier and the time for the sweephand to reach the 6 o'clock position (500 msec) was recorded. Each participant performed the task five times under prefatigue and postfatigue conditions, and the sums of the absolute differences for the five performances were recorded.

Absolute Time Differences
Subject  Prefatigue  Postfatigue
1           158           91
2            92           59
3            65          215
4            98          226
5            33          223
6            89           91
7           148           92
8            58          177
9           142          134
10          117          116
11           74          153
12           66          219
13          109          143
14           57          164
15           85          100

An increase in the mean absolute time difference when the task is performed under postfatigue conditions would support the claim that practice under fatigued conditions distorts mechanisms that govern performance. Assuming the populations to be normally distributed, test this claim.
10.44 In a study conducted by the Department of Human Nutrition and Foods at Virginia Tech, the following data were recorded on sorbic acid residuals, in parts per million, in ham immediately after dipping in a sorbate solution and after 60 days of storage:

Sorbic Acid Residuals in Ham
Slice  Before Storage  After Storage
1           224             116
2           270              96
3           400             239
4           444             329
5           590             437
6           660             597
7          1400             689
8           680             576

Assuming the populations to be normally distributed, is there sufficient evidence, at the 0.05 level of significance, to say that the length of storage influences sorbic acid residual concentrations?

10.45 A taxi company manager is trying to decide whether the use of radial tires instead of regular belted tires improves fuel economy. Twelve cars were equipped with radial tires and driven over a prescribed test course. Without changing drivers, the same cars were then equipped with regular belted tires and driven once again over the test course. The gasoline consumption, in kilometers per liter, was recorded as follows:

Kilometers per Liter
Car  Radial Tires  Belted Tires
1        4.2           4.1
2        4.7           4.9
3        6.6           6.2
4        7.0           6.9
5        6.7           6.8
6        4.5           4.4
7        5.7           5.7
8        6.0           5.8
9        7.4           6.9
10       4.9           4.7
11       6.1           6.0
12       5.2           4.9

Can we conclude that cars equipped with radial tires give better fuel economy than those equipped with belted tires? Assume the populations to be normally distributed. Use a P-value in your conclusion.

10.46 In Review Exercise 9.91 on page 313, use the t-distribution to test the hypothesis that the diet reduces a woman's weight by 4.5 kilograms on average against the alternative hypothesis that the mean difference in weight is less than 4.5 kilograms. Use a P-value.

10.47 How large a sample is required in Exercise 10.20 if the power of the test is to be 0.90 when the true mean is 5.20? Assume that σ = 0.24.
10.48 If the distribution of life spans in Exercise 10.19 is approximately normal, how large a sample is required in order that the probability of committing a type II error be 0.1 when the true mean is 35.9 months? Assume that σ = 5.8 months.
10.49 How large a sample is required in Exercise 10.24 if the power of the test is to be 0.95 when the true average height differs from 162.5 by 3.1 centimeters? Use α = 0.02.

10.50 How large should the samples be in Exercise 10.31 if the power of the test is to be 0.95 when the true difference between thread types A and B is 8 kilograms?

10.51 How large a sample is required in Exercise 10.22 if the power of the test is to be 0.8 when the true mean meditation time exceeds the hypothesized value by 1.2σ? Use α = 0.05.

10.52 For testing H0: μ = 14, H1: μ ≠ 14, an α = 0.05 level t-test is being considered. What sample size is necessary in order for the probability to be 0.1 of falsely failing to reject H0 when the true population mean differs from 14 by 0.5? From a preliminary sample we estimate σ to be 1.25.

10.53 A study was conducted at the Department of Veterinary Medicine at Virginia Tech to determine if the "strength" of a wound from surgical incision is affected by the temperature of the knife. Eight dogs were used in the experiment. "Hot" and "cold" incisions were made on the abdomen of each dog, and the strength was measured. The resulting data appear below.

Dog     Hot      Cold
1       5120     8200
2     10,000     8600
3     10,000     9200
4     10,000     6200
5     10,000   10,000
6       7900     5200
7        510      885
8       1020      460

(a) Write an appropriate hypothesis to determine if there is a significant difference in strength between the hot and cold incisions.
(b) Test the hypothesis using a paired t-test. Use a P-value in your conclusion.

10.54 Nine subjects were used in an experiment to determine if exposure to carbon monoxide has an impact on breathing capability.
The data were collected by personnel in the Health and Physical Education Department at Virginia Tech and were analyzed in the Statistics Consulting Center at Hokie Land. The subjects were exposed to breathing chambers, one of which contained a high concentration of CO. Breathing frequency measures were made for each subject for each chamber. The subjects were exposed to the breathing chambers in random sequence. The data give the breathing frequency, in number of breaths taken per minute. Make a one-sided test of the hypothesis that mean breathing frequency is the same for the two environments. Use α = 0.05. Assume that breathing frequency is approximately normal.

Subject  With CO  Without CO
1          30        30
2          45        40
3          26        25
4          25        23
5          34        30
6          51        49
7          46        41
8          32        35
9          30        28

10.8 One Sample: Test on a Single Proportion

Tests of hypotheses concerning proportions are required in many areas. Politicians are certainly interested in knowing what fraction of the voters will favor them in the next election. All manufacturing firms are concerned about the proportion of defective items when a shipment is made. Gamblers depend on a knowledge of the proportion of outcomes that they consider favorable. We shall consider the problem of testing the hypothesis that the proportion of successes in a binomial experiment equals some specified value. That is, we are testing the null hypothesis H0 that p = p0, where p is the parameter of the binomial distribution. The alternative hypothesis may be one of the usual one-sided
or two-sided alternatives: p < p0, p > p0, or p ≠ p0. The appropriate random variable on which we base our decision criterion is the binomial random variable X, although we could just as well use the statistic p̂ = X/n. Values of X that are far from the mean μ = np0 will lead to the rejection of the null hypothesis. Because X is a discrete binomial variable, it is unlikely that a critical region can be established whose size is exactly equal to a prespecified value of α. For this reason it is preferable, in dealing with small samples, to base our decisions on P-values. To test the hypothesis

H0: p = p0,
H1: p < p0,

we use the binomial distribution to compute the P-value

P = P(X ≤ x when p = p0).

The value x is the number of successes in our sample of size n. If this P-value is less than or equal to α, our test is significant at the α level and we reject H0 in favor of H1. Similarly, to test the hypothesis

H0: p = p0,
H1: p > p0,

at the α-level of significance, we compute

P = P(X ≥ x when p = p0)

and reject H0 in favor of H1 if this P-value is less than or equal to α. Finally, to test the hypothesis

H0: p = p0,
H1: p ≠ p0,

at the α-level of significance, we compute

P = 2P(X ≤ x when p = p0) if x < np0

or

P = 2P(X ≥ x when p = p0) if x > np0

and reject H0 in favor of H1 if the computed P-value is less than or equal to α.

The steps for testing a null hypothesis about a proportion against various alternatives using the binomial probabilities of Table A.1 are as follows:

Testing a Proportion (Small Samples)
1. H0: p = p0.
2. One of the alternatives H1: p < p0, p > p0, or p ≠ p0.
3. Choose a level of significance equal to α.
4. Test statistic: Binomial variable X with p = p0.
5. Computations: Find x, the number of successes, and compute the appropriate P-value.
6. Decision: Draw appropriate conclusions based on the P-value.
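The steps above can be sketched in Python. This is a minimal illustration, not the book's code: it assumes SciPy is available, the function name is mine, and the two-sided P-value uses the doubled-tail rule given in the text (SciPy's own scipy.stats.binomtest uses a slightly different two-sided convention).

```python
from scipy.stats import binom

def binomial_test_pvalue(x, n, p0, alternative="two-sided"):
    """Exact binomial P-value for H0: p = p0 (small-sample procedure).

    alternative: "less" (H1: p < p0), "greater" (H1: p > p0),
    or "two-sided" (H1: p != p0), using the doubled-tail rule.
    """
    if alternative == "less":
        return binom.cdf(x, n, p0)            # P(X <= x when p = p0)
    if alternative == "greater":
        return binom.sf(x - 1, n, p0)         # P(X >= x when p = p0)
    # two-sided: double the tail indicated by comparing x with n*p0
    if x < n * p0:
        return min(1.0, 2 * binom.cdf(x, n, p0))
    return min(1.0, 2 * binom.sf(x - 1, n, p0))
```

With x = 8 successes in n = 15 trials and p0 = 0.7 (the numbers of Example 10.9), the two-sided call returns 0.2622, matching the Table A.1 computation.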
Example 10.9: A builder claims that heat pumps are installed in 70% of all homes being constructed today in the city of Richmond, Virginia. Would you agree with this claim if a random survey of new homes in this city showed that 8 out of 15 had heat pumps installed? Use a 0.10 level of significance.

Solution:
1. H0: p = 0.7.
2. H1: p ≠ 0.7.
3. α = 0.10.
4. Test statistic: Binomial variable X with p = 0.7 and n = 15.
5. Computations: x = 8 and np0 = (15)(0.7) = 10.5. Therefore, from Table A.1, the computed P-value is

P = 2P(X ≤ 8 when p = 0.7) = 2 ∑_{x=0}^{8} b(x; 15, 0.7) = 0.2622 > 0.10.

6. Decision: Do not reject H0. Conclude that there is insufficient reason to doubt the builder's claim.

In Section 5.2, we saw that binomial probabilities can be obtained from the actual binomial formula or from Table A.1 when n is small. For large n, approximation procedures are required. When the hypothesized value p0 is very close to 0 or 1, the Poisson distribution, with parameter μ = np0, may be used. However, the normal curve approximation, with parameters μ = np0 and σ² = np0q0, is usually preferred for large n and is very accurate as long as p0 is not extremely close to 0 or to 1. If we use the normal approximation, the z-value for testing p = p0 is given by

z = (x − np0)/√(np0q0) = (p̂ − p0)/√(p0q0/n),

which is a value of the standard normal variable Z. Hence, for a two-tailed test at the α-level of significance, the critical region is z < −zα/2 or z > zα/2. For the one-sided alternative p < p0, the critical region is z < −zα, and for the alternative p > p0, the critical region is z > zα.

Example 10.10: A commonly prescribed drug for relieving nervous tension is believed to be only 60% effective. Experimental results with a new drug administered to a random sample of 100 adults who were suffering from nervous tension show that 70 received relief.
Is this sufficient evidence to conclude that the new drug is superior to the one commonly prescribed? Use a 0.05 level of significance.

Solution:
1. H0: p = 0.6.
2. H1: p > 0.6.
3. α = 0.05.
4. Critical region: z > 1.645.
5. Computations: x = 70, n = 100, p̂ = 70/100 = 0.7, and

z = (0.7 − 0.6)/√((0.6)(0.4)/100) = 2.04,  P = P(Z > 2.04) ≈ 0.0207.

6. Decision: Reject H0 and conclude that the new drug is superior.

10.9 Two Samples: Tests on Two Proportions

Situations often arise where we wish to test the hypothesis that two proportions are equal. For example, we might want to show evidence that the proportion of doctors who are pediatricians in one state is equal to the proportion in another state. A person may decide to give up smoking only if he or she is convinced that the proportion of smokers with lung cancer exceeds the proportion of nonsmokers with lung cancer. In general, we wish to test the null hypothesis that two proportions, or binomial parameters, are equal. That is, we are testing p1 = p2 against one of the alternatives p1 < p2, p1 > p2, or p1 ≠ p2. Of course, this is equivalent to testing the null hypothesis that p1 − p2 = 0 against one of the alternatives p1 − p2 < 0, p1 − p2 > 0, or p1 − p2 ≠ 0. The statistic on which we base our decision is the random variable P̂1 − P̂2. Independent samples of sizes n1 and n2 are selected at random from two binomial populations and the proportions of successes P̂1 and P̂2 for the two samples are computed. In our construction of confidence intervals for p1 and p2 we noted, for n1 and n2 sufficiently large, that the point estimator P̂1 − P̂2 was approximately normally distributed with mean

μ_{P̂1−P̂2} = p1 − p2

and variance

σ²_{P̂1−P̂2} = p1q1/n1 + p2q2/n2.

Therefore, our critical region(s) can be established by using the standard normal variable

Z = [(P̂1 − P̂2) − (p1 − p2)] / √(p1q1/n1 + p2q2/n2).

When H0 is true, we can substitute p1 = p2 = p and q1 = q2 = q (where p and q are the common values) in the preceding formula for Z to give the form

Z = (P̂1 − P̂2) / √(pq(1/n1 + 1/n2)).

To compute a value of Z, however, we must estimate the parameters p and q that appear in the radical.
Upon pooling the data from both samples, the pooled estimate of the proportion p is

p̂ = (x1 + x2)/(n1 + n2),
where x1 and x2 are the numbers of successes in each of the two samples. Substituting p̂ for p and q̂ = 1 − p̂ for q, the z-value for testing p1 = p2 is determined from the formula

z = (p̂1 − p̂2) / √(p̂q̂(1/n1 + 1/n2)).

The critical regions for the appropriate alternative hypotheses are set up as before, using critical points of the standard normal curve. Hence, for the alternative p1 ≠ p2 at the α-level of significance, the critical region is z < −zα/2 or z > zα/2. For a test where the alternative is p1 < p2, the critical region is z < −zα, and when the alternative is p1 > p2, the critical region is z > zα.

Example 10.11: A vote is to be taken among the residents of a town and the surrounding county to determine whether a proposed chemical plant should be constructed. The construction site is within the town limits, and for this reason many voters in the county believe that the proposal will pass because of the large proportion of town voters who favor the construction. To determine if there is a significant difference in the proportions of town voters and county voters favoring the proposal, a poll is taken. If 120 of 200 town voters favor the proposal and 240 of 500 county residents favor it, would you agree that the proportion of town voters favoring the proposal is higher than the proportion of county voters? Use an α = 0.05 level of significance.

Solution: Let p1 and p2 be the true proportions of voters in the town and county, respectively, favoring the proposal.
1. H0: p1 = p2.
2. H1: p1 > p2.
3. α = 0.05.
4. Critical region: z > 1.645.
5. Computations:

p̂1 = x1/n1 = 120/200 = 0.60,  p̂2 = x2/n2 = 240/500 = 0.48,

and

p̂ = (x1 + x2)/(n1 + n2) = (120 + 240)/(200 + 500) = 0.51.

Therefore,

z = (0.60 − 0.48)/√((0.51)(0.49)(1/200 + 1/500)) = 2.9,  P = P(Z > 2.9) = 0.0019.

6. Decision: Reject H0 and agree that the proportion of town voters favoring the proposal is higher than the proportion of county voters.
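The pooled two-proportion computation can be sketched as follows. This is a minimal illustration assuming SciPy; the function name is mine, not from the text.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Pooled z statistic for H0: p1 = p2 under the normal approximation."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_hat = (x1 + x2) / (n1 + n2)     # pooled estimate p-hat
    q_hat = 1 - p_hat
    return (p1_hat - p2_hat) / sqrt(p_hat * q_hat * (1 / n1 + 1 / n2))

# Example 10.11: 120 of 200 town voters vs. 240 of 500 county voters
z = two_proportion_z(120, 200, 240, 500)   # about 2.87 before rounding
p_value = norm.sf(z)                       # one-sided P(Z > z) for H1: p1 > p2
```

The text rounds z to 2.9, giving P = 0.0019; carrying more digits gives z ≈ 2.87 and a P-value near 0.002, so the conclusion is unchanged.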
Exercises

10.55 A marketing expert for a pasta-making company believes that 40% of pasta lovers prefer lasagna. If 9 out of 20 pasta lovers choose lasagna over other pastas, what can be concluded about the expert's claim? Use a 0.05 level of significance.

10.56 Suppose that, in the past, 40% of all adults favored capital punishment. Do we have reason to believe that the proportion of adults favoring capital punishment has increased if, in a random sample of 15 adults, 8 favor capital punishment? Use a 0.05 level of significance.

10.57 A new radar device is being considered for a certain missile defense system. The system is checked by experimenting with aircraft in which a kill or a no kill is simulated. If, in 300 trials, 250 kills occur, accept or reject, at the 0.04 level of significance, the claim that the probability of a kill with the new system does not exceed the 0.8 probability of the existing device.

10.58 It is believed that at least 60% of the residents in a certain area favor an annexation suit by a neighboring city. What conclusion would you draw if only 110 in a sample of 200 voters favored the suit? Use a 0.05 level of significance.

10.59 A fuel oil company claims that one-fifth of the homes in a certain city are heated by oil. Do we have reason to believe that fewer than one-fifth are heated by oil if, in a random sample of 1000 homes in this city, 136 are heated by oil? Use a P-value in your conclusion.

10.60 At a certain college, it is estimated that at most 25% of the students ride bicycles to class. Does this seem to be a valid estimate if, in a random sample of 90 college students, 28 are found to ride bicycles to class? Use a 0.05 level of significance.

10.61 In a winter of an epidemic flu, the parents of 2000 babies were surveyed by researchers at a well-known pharmaceutical company to determine if the company's new medicine was effective after two days.
Among 120 babies who had the flu and were given the medicine, 29 were cured within two days. Among 280 babies who had the flu but were not given the medicine, 56 recovered within two days. Is there any significant indication that supports the company's claim of the effectiveness of the medicine?

10.62 In a controlled laboratory experiment, scientists at the University of Minnesota discovered that 25% of a certain strain of rats subjected to a 20% coffee bean diet and then force-fed a powerful cancer-causing chemical later developed cancerous tumors. Would we have reason to believe that the proportion of rats developing tumors when subjected to this diet has increased if the experiment were repeated and 16 of 48 rats developed tumors? Use a 0.05 level of significance.

10.63 In a study to estimate the proportion of residents in a certain city and its suburbs who favor the construction of a nuclear power plant, it is found that 63 of 100 urban residents favor the construction while only 59 of 125 suburban residents are in favor. Is there a significant difference between the proportions of urban and suburban residents who favor construction of the nuclear plant? Make use of a P-value.

10.64 In a study on the fertility of married women conducted by Martin O'Connell and Carolyn C. Rogers for the Census Bureau in 1979, two groups of childless wives aged 25 to 29 were selected at random, and each was asked if she eventually planned to have a child. One group was selected from among wives married less than two years and the other from among wives married five years. Suppose that 240 of the 300 wives married less than two years planned to have children some day compared to 288 of the 400 wives married five years. Can we conclude that the proportion of wives married less than two years who planned to have children is significantly higher than the proportion of wives married five years? Make use of a P-value.
10.65 An urban community would like to show that the incidence of breast cancer is higher in their area than in a nearby rural area. (PCB levels were found to be higher in the soil of the urban community.) If it is found that 20 of 200 adult women in the urban community have breast cancer and 10 of 150 adult women in the rural community have breast cancer, can we conclude at the 0.05 level of significance that breast cancer is more prevalent in the urban community?

10.66 Group Project: The class should be divided into pairs of students for this project. Suppose it is conjectured that at least 25% of students at your university exercise for more than two hours a week. Collect data from a random sample of 50 students. Ask each student if he or she works out for at least two hours per week. Then do the computations that allow either rejection or nonrejection of the above conjecture. Show all work and quote a P-value in your conclusion.
10.10 One- and Two-Sample Tests Concerning Variances

In this section, we are concerned with testing hypotheses concerning population variances or standard deviations. Applications of one- and two-sample tests on variances are certainly not difficult to motivate. Engineers and scientists are confronted with studies in which they are required to demonstrate that measurements involving products or processes adhere to specifications set by consumers. The specifications are often met if the process variance is sufficiently small. Attention is also focused on comparative experiments between methods or processes, where inherent reproducibility or variability must formally be compared. In addition, to determine if the equal variance assumption is violated, a test comparing two variances is often applied prior to conducting a t-test on two means.

Let us first consider the problem of testing the null hypothesis H0 that the population variance σ² equals a specified value σ₀² against one of the usual alternatives σ² < σ₀², σ² > σ₀², or σ² ≠ σ₀². The appropriate statistic on which to base our decision is the chi-squared statistic of Theorem 8.4, which was used in Chapter 9 to construct a confidence interval for σ². Therefore, if we assume that the distribution of the population being sampled is normal, the chi-squared value for testing σ² = σ₀² is given by

χ² = (n − 1)s²/σ₀²,

where n is the sample size, s² is the sample variance, and σ₀² is the value of σ² given by the null hypothesis. If H0 is true, χ² is a value of the chi-squared distribution with v = n − 1 degrees of freedom. Hence, for a two-tailed test at the α-level of significance, the critical region is χ² < χ²_{1−α/2} or χ² > χ²_{α/2}. For the one-sided alternative σ² < σ₀², the critical region is χ² < χ²_{1−α}, and for the one-sided alternative σ² > σ₀², the critical region is χ² > χ²_{α}.
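This one-sample variance test can be sketched in a few lines of Python (SciPy assumed; the function name is mine, not from the text):

```python
from scipy.stats import chi2

def chi2_variance_test(s2, n, sigma0_sq):
    """Return the chi-squared statistic (n - 1)s^2 / sigma0^2 and its
    upper-tail P-value for H1: sigma^2 > sigma0^2 (normal population)."""
    stat = (n - 1) * s2 / sigma0_sq
    return stat, chi2.sf(stat, df=n - 1)

# Battery data of Example 10.12: n = 10, s = 1.2 (s^2 = 1.44), sigma0^2 = 0.81
stat, p_value = chi2_variance_test(1.44, 10, 0.81)   # stat = 16.0
```

With these numbers the statistic is 16.0 and the P-value is roughly 0.07, in agreement with Example 10.12 below. Given the nonrobustness of this test to nonnormality discussed next, the P-value should be read with caution.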
Robustness of χ²-Test to Assumption of Normality

The reader may have discerned that various tests depend, at least theoretically, on the assumption of normality. In general, many procedures in applied statistics have theoretical underpinnings that depend on the normal distribution. These procedures vary in the degree of their dependency on the assumption of normality. A procedure that is reasonably insensitive to the assumption is called a robust procedure (i.e., robust to normality). The χ²-test on a single variance is very nonrobust to normality (i.e., the practical success of the procedure depends on normality). As a result, the P-value computed may be appreciably different from the actual P-value if the population sampled is not normal. Indeed, it is quite feasible that a statistically significant P-value may not truly signal H1: σ ≠ σ0; rather, a significant value may be a result of the violation of the normality assumptions. Therefore, the analyst should approach the use of this particular χ²-test with caution.

Example 10.12: A manufacturer of car batteries claims that the life of the company's batteries is approximately normally distributed with a standard deviation equal to 0.9 year.
If a random sample of 10 of these batteries has a standard deviation of 1.2 years, do you think that σ > 0.9 year? Use a 0.05 level of significance.

Solution:
1. H0: σ² = 0.81.
2. H1: σ² > 0.81.
3. α = 0.05.
4. Critical region: From Figure 10.19 we see that the null hypothesis is rejected when χ² > 16.919, where χ² = (n − 1)s²/σ₀², with v = 9 degrees of freedom.

Figure 10.19: Critical region for the alternative hypothesis σ > 0.9.

5. Computations: s² = 1.44, n = 10, and

χ² = (9)(1.44)/0.81 = 16.0,  P ≈ 0.07.

6. Decision: The χ²-statistic is not significant at the 0.05 level. However, based on the P-value 0.07, there is evidence that σ > 0.9.

Now let us consider the problem of testing the equality of the variances σ₁² and σ₂² of two populations. That is, we shall test the null hypothesis H0 that σ₁² = σ₂² against one of the usual alternatives σ₁² < σ₂², σ₁² > σ₂², or σ₁² ≠ σ₂². For independent random samples of sizes n1 and n2, respectively, from the two populations, the f-value for testing σ₁² = σ₂² is the ratio

f = s₁²/s₂²,

where s₁² and s₂² are the variances computed from the two samples. If the two populations are approximately normally distributed and the null hypothesis is true, according to Theorem 8.8 the ratio f = s₁²/s₂² is a value of the F-distribution with v1 = n1 − 1 and v2 = n2 − 1 degrees of freedom. Therefore, the critical regions
of size α corresponding to the one-sided alternatives σ₁² < σ₂² and σ₁² > σ₂² are, respectively, f < f_{1−α}(v1, v2) and f > f_{α}(v1, v2). For the two-sided alternative σ₁² ≠ σ₂², the critical region is f < f_{1−α/2}(v1, v2) or f > f_{α/2}(v1, v2).

Example 10.13: In testing for the difference in the abrasive wear of the two materials in Example 10.6, we assumed that the two unknown population variances were equal. Were we justified in making this assumption? Use a 0.10 level of significance.

Solution: Let σ₁² and σ₂² be the population variances for the abrasive wear of material 1 and material 2, respectively.
1. H0: σ₁² = σ₂².
2. H1: σ₁² ≠ σ₂².
3. α = 0.10.
4. Critical region: From Figure 10.20, we see that f0.05(11, 9) = 3.11, and, by using Theorem 8.7, we find

f0.95(11, 9) = 1/f0.05(9, 11) = 0.34.

Therefore, the null hypothesis is rejected when f < 0.34 or f > 3.11, where f = s₁²/s₂² with v1 = 11 and v2 = 9 degrees of freedom.
5. Computations: s₁² = 16, s₂² = 25, and hence f = 16/25 = 0.64.
6. Decision: Do not reject H0. Conclude that there is insufficient evidence that the variances differ.

Figure 10.20: Critical region for the alternative hypothesis σ₁² ≠ σ₂².

F-Test for Testing Variances in SAS

Figure 10.18 on page 356 displays the printout of a two-sample t-test where two means from the seedling data in Exercise 9.40 were compared. Box-and-whisker plots in Figure 10.17 on page 355 suggest that variances are not homogeneous, and thus the t′-statistic and its corresponding P-value are relevant. Note also that
the printout displays the F-statistic for H0: σ1 = σ2 with a P-value of 0.0098, additional evidence that more variability is to be expected when nitrogen is used than under the no-nitrogen condition.

Exercises

10.67 The content of containers of a particular lubricant is known to be normally distributed with a variance of 0.03 liter. Test the hypothesis that σ² = 0.03 against the alternative that σ² ≠ 0.03 for the random sample of 10 containers in Exercise 10.23 on page 356. Use a P-value in your conclusion.
10.68 Past experience indicates that the time required for high school seniors to complete a standardized test is a normal random variable with a standard deviation of 6 minutes. Test the hypothesis that σ = 6 against the alternative that σ < 6 if a random sample of the test times of 20 high school seniors has a standard deviation s = 4.51. Use a 0.05 level of significance.
10.69 Aflatoxins produced by mold on peanut crops in Virginia must be monitored. A sample of 64 batches of peanuts reveals levels of 24.17 ppm, on average, with a variance of 4.25 ppm. Test the hypothesis that σ² = 4.2 ppm against the alternative that σ² ≠ 4.2 ppm. Use a P-value in your conclusion.
10.70 Past data indicate that the amount of money contributed by the working residents of a large city to a volunteer rescue squad is a normal random variable with a standard deviation of $1.40. It has been suggested that the contributions to the rescue squad from just the employees of the sanitation department are much more variable. If the contributions of a random sample of 12 employees from the sanitation department have a standard deviation of $1.75, can we conclude at the 0.01 level of significance that the standard deviation of the contributions of all sanitation workers is greater than that of all workers living in the city?
10.71 A soft-drink dispensing machine is said to be out of control if the variance of the contents exceeds 1.15 deciliters.
If a random sample of 25 drinks from this machine has a variance of 2.03 deciliters, does this indicate at the 0.05 level of significance that the machine is out of control? Assume that the contents are approximately normally distributed.
10.72 Large-Sample Test of σ² = σ0²: When n ≥ 30, we can test the null hypothesis that σ² = σ0², or σ = σ0, by computing
z = (s − σ0)/(σ0/√(2n)),
which is a value of a random variable whose sampling distribution is approximately the standard normal distribution.
(a) With reference to Example 10.4, test at the 0.05 level of significance whether σ = 10.0 years against the alternative that σ ≠ 10.0 years.
(b) It is suspected that the variance of the distribution of distances in kilometers traveled on 5 liters of fuel by a new automobile model equipped with a diesel engine is less than the variance of the distribution of distances traveled by the same model equipped with a six-cylinder gasoline engine, which is known to be σ² = 6.25. If 72 test runs of the diesel model have a variance of 4.41, can we conclude at the 0.05 level of significance that the variance of the distances traveled by the diesel model is less than that of the gasoline model?
10.73 A study is conducted to compare the lengths of time required by men and women to assemble a certain product. Past experience indicates that the distribution of times for both men and women is approximately normal but the variance of the times for women is less than that for men. A random sample of times for 11 men and 14 women produced the following data:
Men:    n1 = 11, s1 = 6.1
Women:  n2 = 14, s2 = 5.3
Test the hypothesis that σ1² = σ2² against the alternative that σ1² > σ2². Use a P-value in your conclusion.
10.74 For Exercise 10.41 on page 358, test the hypothesis at the 0.05 level of significance that σ1² = σ2² against the alternative that σ1² ≠ σ2², where σ1² and σ2² are the variances of the number of organisms per square meter of water at the two different locations on Cedar Run.
10.75 With reference to Exercise 10.39 on page 358, test the hypothesis that σ1² = σ2² against the alternative that σ1² ≠ σ2², where σ1² and σ2² are the variances for the running times of films produced by company 1 and company 2, respectively. Use a P-value.
10.76 Two types of instruments for measuring the amount of sulfur monoxide in the atmosphere are being compared in an air-pollution experiment. Researchers
wish to determine whether the two types of instruments yield measurements having the same variability. The readings in the following table were recorded for the two instruments.
Sulfur Monoxide
Instrument A:  0.86  0.82  0.75  0.61  0.89  0.64  0.81  0.68  0.65
Instrument B:  0.87  0.74  0.63  0.55  0.76  0.70  0.69  0.57  0.53
Assuming the populations of measurements to be approximately normally distributed, test the hypothesis that σA = σB against the alternative that σA ≠ σB. Use a P-value.
10.77 An experiment was conducted to compare the alcohol content of soy sauce on two different production lines. Production was monitored eight times a day. The data are shown here.
Production line 1: 0.48 0.39 0.42 0.52 0.40 0.48 0.52 0.52
Production line 2: 0.38 0.37 0.39 0.41 0.38 0.39 0.40 0.39
Assume both populations are normal. It is suspected that production line 1 is not producing as consistently as production line 2 in terms of alcohol content. Test the hypothesis that σ1 = σ2 against the alternative that σ1 ≠ σ2. Use a P-value.
10.78 Hydrocarbon emissions from cars are known to have decreased dramatically during the 1980s. A study was conducted to compare the hydrocarbon emissions at idling speed, in parts per million (ppm), for automobiles from 1980 and 1990. Twenty cars of each model year were randomly selected, and their hydrocarbon emission levels were recorded. The data are as follows:
1980 models: 141 359 247 940 882 494 306 210 105 880 200 223 188 940 241 190 300 435 241 380
1990 models: 140 160 20 20 223 60 20 95 360 70 220 400 217 58 235 380 200 175 85 65
Test the hypothesis that σ1 = σ2 against the alternative that σ1 ≠ σ2. Assume both populations are normal. Use a P-value.

10.11 Goodness-of-Fit Test

Throughout this chapter, we have been concerned with the testing of statistical hypotheses about single population parameters such as μ, σ², and p.
Now we shall consider a test to determine if a population has a specified theoretical distribution. The test is based on how good a fit we have between the frequency of occurrence of observations in an observed sample and the expected frequencies obtained from the hypothesized distribution.
To illustrate, we consider the tossing of a die. We hypothesize that the die is honest, which is equivalent to testing the hypothesis that the distribution of outcomes is the discrete uniform distribution
f(x) = 1/6,  x = 1, 2, . . . , 6.
Suppose that the die is tossed 120 times and each outcome is recorded. Theoretically, if the die is balanced, we would expect each face to occur 20 times. The results are given in Table 10.4.
Table 10.4: Observed and Expected Frequencies of 120 Tosses of a Die
Face:      1   2   3   4   5   6
Observed   20  22  17  18  19  24
Expected   20  20  20  20  20  20
By comparing the observed frequencies with the corresponding expected frequencies, we must decide whether these discrepancies are likely to occur as a result of sampling fluctuations and the die is balanced or whether the die is not honest and the distribution of outcomes is not uniform.
It is common practice to refer to each possible outcome of an experiment as a cell. In our illustration, we have 6 cells. The appropriate statistic on which we base our decision criterion for an experiment involving k cells is defined by the following.
Goodness-of-Fit Test: A goodness-of-fit test between observed and expected frequencies is based on the quantity
χ² = Σᵢ₌₁ᵏ (oᵢ − eᵢ)²/eᵢ,
where χ² is a value of a random variable whose sampling distribution is approximated very closely by the chi-squared distribution with v = k − 1 degrees of freedom. The symbols oᵢ and eᵢ represent the observed and expected frequencies, respectively, for the ith cell.
The number of degrees of freedom associated with the chi-squared distribution used here is equal to k − 1, since there are only k − 1 freely determined cell frequencies. That is, once k − 1 cell frequencies are determined, so is the frequency for the kth cell.
If the observed frequencies are close to the corresponding expected frequencies, the χ²-value will be small, indicating a good fit. If the observed frequencies differ considerably from the expected frequencies, the χ²-value will be large and the fit is poor. A good fit leads to the acceptance of H0, whereas a poor fit leads to its rejection. The critical region will, therefore, fall in the right tail of the chi-squared distribution. For a level of significance equal to α, we find the critical value χ²0.05 (more generally, χ²α) from Table A.5, and then χ² > χ²α constitutes the critical region. The decision criterion described here should not be used unless each of the expected frequencies is at least equal to 5.
This restriction may require the combining of adjacent cells, resulting in a reduction in the number of degrees of freedom.
From Table 10.4, we find the χ²-value to be
χ² = (20 − 20)²/20 + (22 − 20)²/20 + (17 − 20)²/20 + (18 − 20)²/20 + (19 − 20)²/20 + (24 − 20)²/20 = 1.7.
Using Table A.5, we find χ²0.05 = 11.070 for v = 5 degrees of freedom. Since 1.7 is less than the critical value, we fail to reject H0. We conclude that there is insufficient evidence that the die is not balanced.
As a second illustration, let us test the hypothesis that the frequency distribution of battery lives given in Table 1.7 on page 23 may be approximated by a normal distribution with mean μ = 3.5 and standard deviation σ = 0.7. The expected frequencies for the 7 classes (cells), listed in Table 10.5, are obtained by computing the areas under the hypothesized normal curve that fall between the various class boundaries.
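The die example can be checked with SciPy's built-in goodness-of-fit routine; this sketch is mine, not from the text.

```python
from scipy.stats import chisquare

# Observed counts from Table 10.4 (120 tosses of a die)
observed = [20, 22, 17, 18, 19, 24]

# With no expected frequencies supplied, chisquare assumes a uniform
# distribution over the cells, i.e. 20 per face, which is exactly the
# hypothesis of an honest die.
stat, p_value = chisquare(observed)
# stat = 1.7 with v = 5 degrees of freedom; the large P-value gives no
# reason to reject H0, agreeing with the comparison against 11.070
```

The computed P-value is simply the upper-tail area of the chi-squared distribution with 5 degrees of freedom beyond 1.7, so the conclusion matches the table-based argument above.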
Table 10.5: Observed and Expected Frequencies of Battery Lives, Assuming Normality
Class Boundaries   oᵢ          eᵢ
1.45–1.95          2  ⎫        0.5  ⎫
1.95–2.45          1  ⎬ 7      2.1  ⎬ 8.5
2.45–2.95          4  ⎭        5.9  ⎭
2.95–3.45          15          10.3
3.45–3.95          10          10.7
3.95–4.45          5  ⎫ 8      7.0  ⎫ 10.5
4.45–4.95          3  ⎭        3.5  ⎭
For example, the z-values corresponding to the boundaries of the fourth class are
z1 = (2.95 − 3.5)/0.7 = −0.79  and  z2 = (3.45 − 3.5)/0.7 = −0.07.
From Table A.3 we find the area between z1 = −0.79 and z2 = −0.07 to be
area = P(−0.79 < Z < −0.07) = P(Z < −0.07) − P(Z < −0.79) = 0.4721 − 0.2148 = 0.2573.
Hence, the expected frequency for the fourth class is
e4 = (0.2573)(40) = 10.3.
It is customary to round these frequencies to one decimal.
The expected frequency for the first class interval is obtained by using the total area under the normal curve to the left of the boundary 1.95. For the last class interval, we use the total area to the right of the boundary 4.45. All other expected frequencies are determined by the method described for the fourth class. Note that we have combined adjacent classes in Table 10.5 where the expected frequencies are less than 5 (a rule of thumb in the goodness-of-fit test). Consequently, the total number of intervals is reduced from 7 to 4, resulting in v = 3 degrees of freedom. The χ²-value is then given by
χ² = (7 − 8.5)²/8.5 + (15 − 10.3)²/10.3 + (10 − 10.7)²/10.7 + (8 − 10.5)²/10.5 = 3.05.
Since the computed χ²-value is less than χ²0.05 = 7.815 for 3 degrees of freedom, we have no reason to reject the null hypothesis and conclude that the normal distribution with μ = 3.5 and σ = 0.7 provides a good fit for the distribution of battery lives.
The chi-squared goodness-of-fit test is an important resource, particularly since so many statistical procedures in practice depend, in a theoretical sense, on the assumption that the data gathered come from a specific type of distribution.
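The battery-life calculation can also be carried out directly, computing the expected counts from the hypothesized normal curve rather than from a table. The sketch below is my own; it uses the merged classes from Table 10.5, and because it does not round the expected counts to one decimal the statistic differs slightly from the text's 3.05.

```python
import numpy as np
from scipy.stats import norm, chi2

mu, sigma, n = 3.5, 0.7, 40

# Class boundaries after combining cells whose expected counts fall below 5;
# the open-ended first and last classes take the full tail areas.
edges = np.array([-np.inf, 2.95, 3.45, 3.95, np.inf])
observed = np.array([7, 15, 10, 8])

# Cell probabilities are the normal-curve areas between successive boundaries
probs = np.diff(norm.cdf(edges, loc=mu, scale=sigma))
expected = n * probs

stat = np.sum((observed - expected) ** 2 / expected)
p_value = chi2.sf(stat, df=len(observed) - 1)  # v = 4 - 1 = 3
# stat is about 3.1, well below chi^2_0.05 = 7.815: the normal model fits
```

As in the text, the statistic falls far short of the 7.815 critical value, so the normal distribution with μ = 3.5 and σ = 0.7 is a plausible fit.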
As we have already seen, the normality assumption is often made. In the chapters that follow, we shall continue to make normality assumptions in order to provide a theoretical basis for certain tests and confidence intervals.
There are tests in the literature that are more powerful than the chi-squared test for testing normality. One such test is called Geary's test. This test is based on a very simple statistic which is a ratio of two estimators of the population standard deviation σ. Suppose that a random sample X1, X2, . . . , Xn is taken from a normal distribution, N(μ, σ). Consider the ratio
U = √(π/2) · (Σᵢ₌₁ⁿ |Xᵢ − X̄|/n) / √(Σᵢ₌₁ⁿ (Xᵢ − X̄)²/n).
The reader should recognize that the denominator is a reasonable estimator of σ whether the distribution is normal or not. The numerator is a good estimator of σ if the distribution is normal but may overestimate or underestimate σ when there are departures from normality. Thus, values of U differing considerably from 1.0 represent the signal that the hypothesis of normality should be rejected.
For large samples, a reasonable test is based on approximate normality of U. The test statistic is then a standardization of U, given by
Z = (U − 1)/(0.2661/√n).
Of course, the test procedure involves the two-sided critical region. We compute a value of z from the data and do not reject the hypothesis of normality when
−zα/2 < Z < zα/2.
A paper dealing with Geary's test is cited in the Bibliography (Geary, 1947).

10.12 Test for Independence (Categorical Data)

The chi-squared test procedure discussed in Section 10.11 can also be used to test the hypothesis of independence of two variables of classification. Suppose that we wish to determine whether the opinions of the voting residents of the state of Illinois concerning a new tax reform are independent of their levels of income. Members of a random sample of 1000 registered voters from the state of Illinois are classified as to whether they are in a low, medium, or high income bracket and whether or not they favor the tax reform. The observed frequencies are presented in Table 10.6, which is known as a contingency table.
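Geary's ratio is short enough to implement directly. The following sketch is my own (the function name and the simulated data are illustrative, not from the text); it checks that U sits near 1 for a large sample that really is normal.

```python
import numpy as np

def geary_test(x):
    """Compute Geary's U and its large-sample standardization Z.

    U is the ratio of two estimators of sigma: sqrt(pi/2) times the mean
    absolute deviation (good only under normality) over the root mean
    squared deviation (reasonable for any distribution).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    num = np.sqrt(np.pi / 2) * np.abs(x - xbar).mean()
    den = np.sqrt(np.mean((x - xbar) ** 2))
    u = num / den
    z = (u - 1) / (0.2661 / np.sqrt(n))
    return u, z

# For a large sample drawn from a normal distribution, U should be close to 1
rng = np.random.default_rng(0)
u, z = geary_test(rng.normal(size=10000))
# |z| small: no evidence against normality for this sample
```

For skewed or heavy-tailed data the mean absolute deviation no longer estimates σ correctly, so U drifts away from 1 and |Z| grows, which is exactly the signal the test looks for.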
Table 10.6: 2 × 3 Contingency Table
                  Income Level
Tax Reform   Low   Medium   High   Total
For          182   213      203    598
Against      154   138      110    402
Total        336   351      313    1000
A contingency table with r rows and c columns is referred to as an r × c table ("r × c" is read "r by c"). The row and column totals in Table 10.6 are called marginal frequencies. Our decision to accept or reject the null hypothesis, H0, of independence between a voter's opinion concerning the tax reform and his or her level of income is based upon how good a fit we have between the observed frequencies in each of the 6 cells of Table 10.6 and the frequencies that we would expect for each cell under the assumption that H0 is true. To find these expected frequencies, let us define the following events:
L: A person selected is in the low-income level.
M: A person selected is in the medium-income level.
H: A person selected is in the high-income level.
F: A person selected is for the tax reform.
A: A person selected is against the tax reform.
By using the marginal frequencies, we can list the following probability estimates:
P(L) = 336/1000,  P(M) = 351/1000,  P(H) = 313/1000,
P(F) = 598/1000,  P(A) = 402/1000.
Now, if H0 is true and the two variables are independent, we should have
P(L ∩ F) = P(L)P(F) = (336/1000)(598/1000),
P(L ∩ A) = P(L)P(A) = (336/1000)(402/1000),
P(M ∩ F) = P(M)P(F) = (351/1000)(598/1000),
P(M ∩ A) = P(M)P(A) = (351/1000)(402/1000),
P(H ∩ F) = P(H)P(F) = (313/1000)(598/1000),
P(H ∩ A) = P(H)P(A) = (313/1000)(402/1000).
The expected frequencies are obtained by multiplying each cell probability by the total number of observations. As before, we round these frequencies to one decimal. Thus, the expected number of low-income voters in our sample who favor the tax reform is estimated to be
(336/1000)(598/1000)(1000) = (336)(598)/1000 = 200.9
when H0 is true. The general rule for obtaining the expected frequency of any cell is given by the following formula:
expected frequency = (column total) × (row total) / grand total.
The expected frequency for each cell is recorded in parentheses beside the actual observed value in Table 10.7. Note that the expected frequencies in any row or column add up to the appropriate marginal total. In our example, we need to compute only two expected frequencies in the top row of Table 10.7 and then find the others by subtraction. The number of degrees of freedom associated with the chi-squared test used here is equal to the number of cell frequencies that may be filled in freely when we are given the marginal totals and the grand total, and in this illustration that number is 2. A simple formula providing the correct number of degrees of freedom is
v = (r − 1)(c − 1).
Table 10.7: Observed and Expected Frequencies
                     Income Level
Tax Reform   Low           Medium        High          Total
For          182 (200.9)   213 (209.9)   203 (187.2)   598
Against      154 (135.1)   138 (141.1)   110 (125.8)   402
Total        336           351           313           1000
Hence, for our example, v = (2 − 1)(3 − 1) = 2 degrees of freedom. To test the null hypothesis of independence, we use the following decision criterion.
Test for Independence: Calculate
χ² = Σᵢ (oᵢ − eᵢ)²/eᵢ,
where the summation extends over all rc cells in the r × c contingency table. If χ² > χ²α with v = (r − 1)(c − 1) degrees of freedom, reject the null hypothesis of independence at the α-level of significance; otherwise, fail to reject the null hypothesis.
Applying this criterion to our example, we find that
χ² = (182 − 200.9)²/200.9 + (213 − 209.9)²/209.9 + (203 − 187.2)²/187.2 + (154 − 135.1)²/135.1 + (138 − 141.1)²/141.1 + (110 − 125.8)²/125.8 = 7.85,
P ≈ 0.02.
From Table A.5 we find that χ²0.05 = 5.991 for v = (2 − 1)(3 − 1) = 2 degrees of freedom.
The null hypothesis is rejected and we conclude that a voter’s opinion concerning the tax reform and his or her level of income are not independent.
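The whole independence test above can be reproduced in one call with SciPy's contingency-table routine. This sketch is mine, not from the text; note that SciPy works with unrounded expected counts, so its statistic differs in the second decimal from the hand calculation's 7.85.

```python
from scipy.stats import chi2_contingency

# Table 10.6: rows are opinion (for / against), columns are income level
table = [[182, 213, 203],
         [154, 138, 110]]

stat, p_value, dof, expected = chi2_contingency(table)
# stat is about 7.88 with dof = 2 and P about 0.02; expected[0][0] = 200.928,
# which rounds to the 200.9 computed by hand above
```

Since the P-value is below 0.05 (equivalently, the statistic exceeds χ²0.05 = 5.991), the conclusion matches the text: opinion and income level are not independent. (For 2 × 2 tables, `chi2_contingency` applies Yates' continuity correction by default, in line with the discussion that follows.)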
It is important to remember that the statistic on which we base our decision has a distribution that is only approximated by the chi-squared distribution. The computed χ²-values depend on the cell frequencies and consequently are discrete. The continuous chi-squared distribution seems to approximate the discrete sampling distribution of χ² very well, provided that the number of degrees of freedom is greater than 1. In a 2 × 2 contingency table, where we have only 1 degree of freedom, a correction called Yates' correction for continuity is applied. The corrected formula then becomes
χ² (corrected) = Σᵢ (|oᵢ − eᵢ| − 0.5)²/eᵢ.
If the expected cell frequencies are large, the corrected and uncorrected results are almost the same. When the expected frequencies are between 5 and 10, Yates' correction should be applied. For expected frequencies less than 5, the Fisher-Irwin exact test should be used. A discussion of this test may be found in Basic Concepts of Probability and Statistics by Hodges and Lehmann (2005; see the Bibliography). The Fisher-Irwin test may be avoided, however, by choosing a larger sample.

10.13 Test for Homogeneity

When we tested for independence in Section 10.12, a random sample of 1000 voters was selected and the row and column totals for our contingency table were determined by chance. Another type of problem for which the method of Section 10.12 applies is one in which either the row or column totals are predetermined. Suppose, for example, that we decide in advance to select 200 Democrats, 150 Republicans, and 150 Independents from the voters of the state of North Carolina and record whether they are for a proposed abortion law, against it, or undecided. The observed responses are given in Table 10.8.
Table 10.8: Observed Frequencies
                      Political Affiliation
Abortion Law   Democrat   Republican   Independent   Total
For            82         70           62            214
Against        93         62           67            222
Undecided      25         18           21            64
Total          200        150          150           500
Now, rather than test for independence, we test the hypothesis that the population proportions within each row are the same. That is, we test the hypothesis that the proportions of Democrats, Republicans, and Independents favoring the abortion law are the same; the proportions of each political affiliation against the law are the same; and the proportions of each political affiliation that are undecided are the same. We are basically interested in determining whether the three categories of voters are homogeneous with respect to their opinions concerning the proposed abortion law. Such a test is called a test for homogeneity.
Assuming homogeneity, we again find the expected cell frequencies by multiplying the corresponding row and column totals and then dividing by the grand
total. The analysis then proceeds using the same chi-squared statistic as before. We illustrate this process for the data of Table 10.8 in the following example.
Example 10.14: Referring to the data of Table 10.8, test the hypothesis that opinions concerning the proposed abortion law are the same within each political affiliation. Use a 0.05 level of significance.
Solution:
1. H0: For each opinion, the proportions of Democrats, Republicans, and Independents are the same.
2. H1: For at least one opinion, the proportions of Democrats, Republicans, and Independents are not the same.
3. α = 0.05.
4. Critical region: χ² > 9.488 with v = 4 degrees of freedom.
5. Computations: Using the expected cell frequency formula on page 375, we need to compute 4 cell frequencies. All other frequencies are found by subtraction. The observed and expected cell frequencies are displayed in Table 10.9.
Table 10.9: Observed and Expected Frequencies
                      Political Affiliation
Abortion Law   Democrat    Republican   Independent   Total
For            82 (85.6)   70 (64.2)    62 (64.2)     214
Against        93 (88.8)   62 (66.6)    67 (66.6)     222
Undecided      25 (25.6)   18 (19.2)    21 (19.2)     64
Total          200         150          150           500
Now,
χ² = (82 − 85.6)²/85.6 + (70 − 64.2)²/64.2 + (62 − 64.2)²/64.2 + (93 − 88.8)²/88.8 + (62 − 66.6)²/66.6 + (67 − 66.6)²/66.6 + (25 − 25.6)²/25.6 + (18 − 19.2)²/19.2 + (21 − 19.2)²/19.2 = 1.53.
6. Decision: Do not reject H0. There is insufficient evidence to conclude that the proportions of Democrats, Republicans, and Independents differ for each stated opinion.
Testing for Several Proportions
The chi-squared statistic for testing for homogeneity is also applicable when testing the hypothesis that k binomial parameters have the same value. This is, therefore, an extension of the test presented in Section 10.9 for determining differences between two proportions to a test for determining differences among k proportions.
Hence, we are interested in testing the null hypothesis H0 : p1 = p2 = · · · = pk
against the alternative hypothesis, H1, that the population proportions are not all equal. To perform this test, we first observe independent random samples of size n1, n2, . . . , nk from the k populations and arrange the data in a 2 × k contingency table, Table 10.10.
Table 10.10: k Independent Binomial Samples
Sample:     1         2         · · ·   k
Successes   x1        x2        · · ·   xk
Failures    n1 − x1   n2 − x2   · · ·   nk − xk
Depending on whether the sizes of the random samples were predetermined or occurred at random, the test procedure is identical to the test for homogeneity or the test for independence. Therefore, the expected cell frequencies are calculated as before and substituted, together with the observed frequencies, into the chi-squared statistic
χ² = Σᵢ (oᵢ − eᵢ)²/eᵢ,
with v = (2 − 1)(k − 1) = k − 1 degrees of freedom. By selecting the appropriate upper-tail critical region of the form χ² > χ²α, we can now reach a decision concerning H0.
Example 10.15: In a shop study, a set of data was collected to determine whether or not the proportion of defectives produced was the same for workers on the day, evening, and night shifts. The data collected are shown in Table 10.11.
Table 10.11: Data for Example 10.15
Shift:          Day   Evening   Night
Defectives      45    55        70
Nondefectives   905   890       870
Use a 0.025 level of significance to determine if the proportion of defectives is the same for all three shifts.
Solution: Let p1, p2, and p3 represent the true proportions of defectives for the day, evening, and night shifts, respectively.
1. H0: p1 = p2 = p3.
2. H1: p1, p2, and p3 are not all equal.
3. α = 0.025.
4. Critical region: χ² > 7.378 for v = 2 degrees of freedom.
5. Computations: Corresponding to the observed frequencies o1 = 45 and o2 = 55, we find
e1 = (950)(170)/2835 = 57.0  and  e2 = (945)(170)/2835 = 56.7.
All other expected frequencies are found by subtraction and are displayed in Table 10.12.
Table 10.12: Observed and Expected Frequencies
Shift:          Day           Evening       Night         Total
Defectives      45 (57.0)     55 (56.7)     70 (56.3)     170
Nondefectives   905 (893.0)   890 (888.3)   870 (883.7)   2665
Total           950           945           940           2835
Now
χ² = (45 − 57.0)²/57.0 + (55 − 56.7)²/56.7 + (70 − 56.3)²/56.3 + (905 − 893.0)²/893.0 + (890 − 888.3)²/888.3 + (870 − 883.7)²/883.7 = 6.29,
P ≈ 0.04.
6. Decision: We do not reject H0 at α = 0.025. Nevertheless, with the above P-value computed, it would certainly be dangerous to conclude that the proportion of defectives produced is the same for all shifts.
Often a complete study involving the use of statistical methods in hypothesis testing can be illustrated for the scientist or engineer using both test statistics, complete with P-values and statistical graphics. The graphics supplement the numerical diagnostics with pictures that show intuitively why the P-values appear as they do, as well as how reasonable (or not) the operative assumptions are.

10.14 Two-Sample Case Study

In this section, we consider a study involving a thorough graphical and formal analysis, along with annotated computer printout and conclusions. In a data analysis study conducted by personnel at the Statistics Consulting Center at Virginia Tech, two different materials, alloy A and alloy B, were compared in terms of breaking strength. Alloy B is more expensive, but it should certainly be adopted if it can be shown to be stronger than alloy A. The consistency of performance of the two alloys should also be taken into account. Random samples of beams made from each alloy were selected, and strength was measured in units of 0.001-inch deflection as a fixed force was applied at both ends of the beam.
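Example 10.15 is again a single call to the contingency-table routine; the sketch below is mine, not from the text. Working with unrounded expected counts gives a statistic slightly below the hand-computed 6.29.

```python
from scipy.stats import chi2, chi2_contingency

# Table 10.11: defectives and nondefectives by shift
table = [[45, 55, 70],
         [905, 890, 870]]

stat, p_value, dof, expected = chi2_contingency(table)

# Upper-tail critical value for alpha = 0.025 with v = 2 degrees of freedom
critical = chi2.ppf(1 - 0.025, df=dof)  # 7.378, as in step 4 of the solution
# stat is about 6.23 with P about 0.044: do not reject H0 at alpha = 0.025,
# but the modest P-value warns against declaring the shifts equivalent
```

As in the text's decision step, the statistic falls short of 7.378, yet the P-value near 0.04 shows why it would be dangerous to conclude firmly that all three shifts produce defectives at the same rate.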
Twenty specimens were used for each of the two alloys. The data are given in Table 10.13. It is important that the engineer compare the two alloys. Of concern is average strength and reproducibility. It is of interest to determine if there is a severe
Table 10.13: Data for Two-Sample Case Study (Alloy A | Alloy B)
88 82 87 75 81 80 79 85 90 77 78 81 84 88 83 86 78 77 89 80 81 84 82 78 81 85 80 80 83 87 78 76 82 80 83 85 79 78 76 79
violation of the normality assumption required of both the t- and F-tests. Figures 10.21 and 10.22 are normal quantile-quantile plots of the samples of the two alloys. There does not appear to be any serious violation of the normality assumption. In addition, Figure 10.23 shows two box-and-whisker plots on the same graph. The box-and-whisker plots suggest that there is no appreciable difference in the variability of deflection for the two alloys. However, it seems that the mean deflection for alloy B is significantly smaller, suggesting, at least graphically, that alloy B is stronger. The sample means and standard deviations are
ȳA = 83.55, sA = 3.663;  ȳB = 79.70, sB = 3.097.
The SAS printout for the PROC TTEST is shown in Figure 10.24. The F-test suggests no significant difference in variances (P = 0.4709), and the two-sample t-statistic for testing
H0: μA = μB,  H1: μA > μB
(t = 3.59, P = 0.0009) rejects H0 in favor of H1 and thus confirms what the graphical information suggests. Here we use the t-test that pools the two-sample variances together in light of the results of the F-test. On the basis of this analysis, the adoption of alloy B would seem to be in order.
Statistical Significance and Engineering or Scientific Significance
While the statistician may feel quite comfortable with the results of the comparison between the two alloys in the case study above, a dilemma remains for the engineer. The analysis demonstrated a statistically significant improvement with the use of alloy B. However, is the difference found really worth pursuing, since alloy B is more expensive?
This illustration highlights a very important issue often overlooked by statisticians and data analysts—the distinction between statistical significance and engineering or scientific significance. Here the average difference in deflection is ȳA − ȳB = 0.00385 inch. In a complete analysis, the engineer must determine if the difference is sufficient to justify the extra cost in the long run. This is an economic and engineering issue. The reader should understand that a statistically significant difference merely implies that the difference in the sample
means found in the data could hardly have occurred by chance. It does not imply that the difference in the population means is profound or particularly significant in the context of the problem. For example, in Section 10.4, an annotated computer printout was used to show evidence that a pH meter was, in fact, biased. That is, it does not demonstrate a mean pH of 7.00 for the material on which it was tested. But the variability among the observations in the sample is very small. The engineer may decide that the small deviations from 7.0 render the pH meter adequate.

Figure 10.21: Normal quantile-quantile plot of data for alloy A.
Figure 10.22: Normal quantile-quantile plot of data for alloy B.
Figure 10.23: Box-and-whisker plots for both alloys.
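The case-study analysis can be reproduced from the summary statistics alone; the sketch below is my own and mirrors the SAS printout's F-test and pooled t-test using the reported means and standard deviations.

```python
import math
from scipy.stats import t, f

# Summary statistics from the case study
n_a, ybar_a, s_a = 20, 83.55, 3.663
n_b, ybar_b, s_b = 20, 79.70, 3.097

# Two-sided F-test for equality of variances
f_stat = (s_a / s_b) ** 2
p_f = 2 * min(f.sf(f_stat, n_a - 1, n_b - 1),
              f.cdf(f_stat, n_a - 1, n_b - 1))

# Pooled two-sample t-test of H0: mu_A = mu_B vs H1: mu_A > mu_B
sp2 = ((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2)
t_stat = (ybar_a - ybar_b) / math.sqrt(sp2 * (1 / n_a + 1 / n_b))
p_one_sided = t.sf(t_stat, df=n_a + n_b - 2)
# f is about 1.40 (P about 0.47), so pooling the variances is reasonable;
# t is about 3.59 with a one-sided P well under 0.001
```

The numbers match the printout: the F-ratio of 1.40 gives no evidence against equal variances, so the pooled t-statistic of 3.59 (whose two-sided P-value is the 0.0009 shown by SAS) justifies rejecting H0 in favor of alloy B being stronger.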
The TTEST Procedure
Alloy     N    Mean    Std Dev   Std Err
Alloy A   20   83.55   3.6631    0.8191
Alloy B   20   79.7    3.0967    0.6924
Variances   DF   t Value   Pr > |t|
Equal       38   3.59      0.0009
Unequal     37   3.59      0.0010
Equality of Variances
Num DF   Den DF   F Value   Pr > F
19       19       1.40      0.4709
Figure 10.24: Annotated SAS printout for alloy data.

Exercises

10.79 A machine is supposed to mix peanuts, hazelnuts, cashews, and pecans in the ratio 5:2:2:1. A can containing 500 of these mixed nuts was found to have 269 peanuts, 112 hazelnuts, 74 cashews, and 45 pecans. At the 0.05 level of significance, test the hypothesis that the machine is mixing the nuts in the ratio 5:2:2:1.
10.80 The grades in a statistics course for a particular semester were as follows:
Grade   A    B    C    D    F
f       14   18   32   20   16
Test the hypothesis, at the 0.05 level of significance, that the distribution of grades is uniform.
10.81 A die is tossed 180 times with the following results:
x   1    2    3    4    5    6
f   28   36   36   30   27   23
Is this a balanced die? Use a 0.01 level of significance.
10.82 Three marbles are selected from an urn containing 5 red marbles and 3 green marbles. After the number X of red marbles is recorded, the marbles are replaced in the urn and the experiment repeated 112 times. The results obtained are as follows:
x   0   1    2    3
f   1   31   55   25
Test the hypothesis, at the 0.05 level of significance, that the recorded data may be fitted by the hypergeometric distribution h(x; 8, 3, 5), x = 0, 1, 2, 3.
10.83 A coin is thrown until a head occurs and the number X of tosses recorded. After repeating the experiment 256 times, we obtained the following results:
x   1     2    3    4    5   6   7   8
f   136   60   34   12   9   1   3   1
Test the hypothesis, at the 0.05 level of significance, that the observed distribution of X may be fitted by the geometric distribution g(x; 1/2), x = 1, 2, 3, . . . .
10.84 For Exercise 1.18 on page 31, test the good- ness of fit between the observed class frequencies and the corresponding expected frequencies of a normal dis- tribution with μ = 65 and σ = 21, using a 0.05 level of significance. 10.85 For Exercise 1.19 on page 31, test the good- ness of fit between the observed class frequencies and the corresponding expected frequencies of a normal dis- tribution with μ = 1.8 and σ = 0.4, using a 0.01 level of significance. 10.86 In an experiment to study the dependence of hypertension on smoking habits, the following data were taken on 180 individuals: Non- Moderate Heavy smokers Smokers Smokers Hypertension 21 36 30 No hypertension 48 26 19 Test the hypothesis that the presence or absence of hy- pertension is independent of smoking habits. Use a 0.05 level of significance. 10.87 A random sample of 90 adults is classified ac- cording to gender and the number of hours of television watched during a week:
  • 404. / / Exercises 383 Gender Male Female Over 25 hours 15 29 Under 25 hours 27 19 Use a 0.01 level of significance and test the hypothesis that the time spent watching television is independent of whether the viewer is male or female. 10.88 A random sample of 200 married men, all re- tired, was classified according to education and number of children: Number of Children Education 0–1 2–3 Over 3 Elementary 14 37 32 Secondary 19 42 17 College 12 17 10 Test the hypothesis, at the 0.05 level of significance, that the size of a family is independent of the level of education attained by the father. 10.89 A criminologist conducted a survey to deter- mine whether the incidence of certain types of crime varied from one part of a large city to another. The particular crimes of interest were assault, burglary, larceny, and homicide. The following table shows the numbers of crimes committed in four areas of the city during the past year. Type of Crime District Assault Burglary Larceny Homicide 1 162 118 451 18 2 310 196 996 25 3 258 193 458 10 4 280 175 390 19 Can we conclude from these data at the 0.01 level of significance that the occurrence of these types of crime is dependent on the city district? 10.90 According to a Johns Hopkins University study published in the American Journal of Public Health, widows live longer than widowers. Consider the fol- lowing survival data collected on 100 widows and 100 widowers following the death of a spouse: Years Lived Widow Widower Less than 5 25 39 5 to 10 42 40 More than 10 33 21 Can we conclude at the 0.05 level of significance that the proportions of widows and widowers are equal with respect to the different time periods that a spouse sur- vives after the death of his or her mate? 
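Exercises such as 10.86 through 10.90 all use the chi-squared statistic for a contingency table, the sum of (O − E)²/E over all cells with (r − 1)(c − 1) degrees of freedom. A pure-Python sketch of the mechanics (illustrative only; it computes the statistic, not the P-value), applied to the hypertension table of Exercise 10.86:

```python
def chi_square_independence(table):
    """Chi-squared statistic and degrees of freedom for an r x c table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Hypertension vs. smoking habits (Exercise 10.86)
chi2, df = chi_square_independence([[21, 36, 30],
                                    [48, 26, 19]])
print(round(chi2, 2), df)   # 14.46 2
```

Comparing 14.46 with the critical value of the chi-squared distribution with 2 degrees of freedom decides the test.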
10.91 The following responses concerning the stan- dard of living at the time of an independent opinion poll of 1000 households versus one year earlier seem to be in agreement with the results of a study published in Across the Board (June 1981): Standard of Living Somewhat Not as Period Better Same Good Total 1980: Jan. 72 144 84 300 May 63 135 102 300 Sept. 47 100 53 200 1981: Jan. 40 105 55 200 Test the hypothesis that the proportions of households within each standard of living category are the same for each of the four time periods. Use a P-value. 10.92 A college infirmary conducted an experiment to determine the degree of relief provided by three cough remedies. Each cough remedy was tried on 50 students and the following data recorded: Cough Remedy NyQuil Robitussin Triaminic No relief 11 13 9 Some relief 32 28 27 Total relief 7 9 14 Test the hypothesis that the three cough remedies are equally effective. Use a P-value in your conclusion. 10.93 To determine current attitudes about prayer in public schools, a survey was conducted in four Vir- ginia counties. The following table gives the attitudes of 200 parents from Craig County, 150 parents from Giles County, 100 parents from Franklin County, and 100 parents from Montgomery County: County Attitude Craig Giles Franklin Mont. Favor 65 66 40 34 Oppose 42 30 33 42 No opinion 93 54 27 24 Test for homogeneity of attitudes among the four coun- ties concerning prayer in the public schools. Use a P- value in your conclusion. 10.94 A survey was conducted in Indiana, Kentucky, and Ohio to determine the attitude of voters concern- ing school busing. A poll of 200 voters from each of these states yielded the following results: Voter Attitude Do Not State Support Support Undecided Indiana 82 97 21 Kentucky 107 66 27 Ohio 93 74 33 At the 0.05 level of significance, test the null hypothe- sis that the proportions of voters within each attitude category are the same for each of the three states.
  • 405. / / 384 Chapter 10 One- and Two-Sample Tests of Hypotheses 10.95 A survey was conducted in two Virginia cities to determine voter sentiment about two gubernatorial candidates in an upcoming election. Five hundred vot- ers were randomly selected from each city and the fol- lowing data were recorded: City Voter Sentiment Richmond Norfolk Favor A Favor B Undecided 204 211 85 225 198 77 At the 0.05 level of significance, test the null hypoth- esis that proportions of voters favoring candidate A, favoring candidate B, and undecided are the same for each city. 10.96 In a study to estimate the proportion of wives who regularly watch soap operas, it is found that 52 of 200 wives in Denver, 31 of 150 wives in Phoenix, and 37 of 150 wives in Rochester watch at least one soap opera. Use a 0.05 level of significance to test the hypothesis that there is no difference among the true proportions of wives who watch soap operas in these three cities. Review Exercises 10.97 State the null and alternative hypotheses to be used in testing the following claims and determine gen- erally where the critical region is located: (a) The mean snowfall at Lake George during the month of February is 21.8 centimeters. (b) No more than 20% of the faculty at the local uni- versity contributed to the annual giving fund. (c) On the average, children attend schools within 6.2 kilometers of their homes in suburban St. Louis. (d) At least 70% of next year’s new cars will be in the compact and subcompact category. (e) The proportion of voters favoring the incumbent in the upcoming election is 0.58. (f) The average rib-eye steak at the Longhorn Steak house weighs at least 340 grams. 10.98 A geneticist is interested in the proportions of males and females in a population who have a cer- tain minor blood disorder. In a random sample of 100 males, 31 are found to be afflicted, whereas only 24 of 100 females tested have the disorder. 
Can we conclude at the 0.01 level of significance that the proportion of men in the population afflicted with this blood disorder is significantly greater than the proportion of women afflicted? 10.99 A study was made to determine whether more Italians than Americans prefer white champagne to pink champagne at weddings. Of the 300 Italians selected at random, 72 preferred white champagne, and of the 400 Americans selected, 70 preferred white champagne. Can we conclude that a higher proportion of Italians than Americans prefer white champagne at weddings? Use a 0.05 level of significance. 10.100 Consider the situation of Exercise 10.54 on page 360. Oxygen consumption in mL/kg/min, was also measured. Subject With CO Without CO 1 26.46 25.41 2 17.46 22.53 3 16.32 16.32 4 20.19 27.48 5 19.84 24.97 6 20.65 21.77 7 28.21 28.17 8 33.94 32.02 9 29.32 28.96 It is conjectured that oxygen consumption should be higher in an environment relatively free of CO. Do a significance test and discuss the conjecture. 10.101 In a study analyzed by the Statistics Consult- ing Center at Virginia Tech, a group of subjects was asked to complete a certain task on the computer. The response measured was the time to completion. The purpose of the experiment was to test a set of facilita- tion tools developed by the Department of Computer Science at the university. There were 10 subjects in- volved. With a random assignment, five were given a standard procedure using Fortran language for comple- tion of the task. The other five were asked to do the task with the use of the facilitation tools. The data on the completion times for the task are given here. Group 1 Group 2 (Standard Procedure) (Facilitation Tool) 161 132 169 162 174 134 158 138 163 133 Assuming that the population distributions are nor- mal and variances are the same for the two groups, support or refute the conjecture that the facilitation tools increase the speed with which the task can be accomplished. 
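For paired data such as that of Exercise 10.100, the test statistic is based on the differences within each pair. A minimal pure-Python sketch of the mechanics (illustrative only; it computes the statistic, not the P-value):

```python
import math

def paired_t(x, y):
    """Paired t statistic for the mean of the differences x - y."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    dbar = sum(d) / n
    s2 = sum((di - dbar) ** 2 for di in d) / (n - 1)  # sample variance of d
    return dbar / math.sqrt(s2 / n)

# Oxygen consumption data from Exercise 10.100
with_co    = [26.46, 17.46, 16.32, 20.19, 19.84, 20.65, 28.21, 33.94, 29.32]
without_co = [25.41, 22.53, 16.32, 27.48, 24.97, 21.77, 28.17, 32.02, 28.96]
t = paired_t(without_co, with_co)
print(round(t, 2))   # 1.55, with n - 1 = 8 degrees of freedom
```

The sign convention here (without CO minus with CO) matches the conjecture that consumption is higher in the CO-free environment.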
10.102 State the null and alternative hypotheses to be used in testing the following claims, and determine
  • 406. / / Review Exercises 385 generally where the critical region is located: (a) At most, 20% of next year’s wheat crop will be exported to the Soviet Union. (b) On the average, American homemakers drink 3 cups of coffee per day. (c) The proportion of college graduates in Virginia this year who majored in the social sciences is at least 0.15. (d) The average donation to the American Lung Asso- ciation is no more than $10. (e) Residents in suburban Richmond commute, on the average, 15 kilometers to their place of employ- ment. 10.103 If one can containing 500 nuts is selected at random from each of three different distributors of mixed nuts and there are, respectively, 345, 313, and 359 peanuts in each of the cans, can we conclude at the 0.01 level of significance that the mixed nuts of the three distributors contain equal proportions of peanuts? 10.104 A study was made to determine whether there is a difference between the proportions of parents in the states of Maryland (MD), Virginia (VA), Georgia (GA), and Alabama (AL) who favor placing Bibles in the elementary schools. The responses of 100 parents selected at random in each of these states are recorded in the following table: State Preference MD VA GA AL Yes 65 71 78 82 No 35 29 22 18 Can we conclude that the proportions of parents who favor placing Bibles in the schools are the same for these four states? Use a 0.01 level of significance. 10.105 A study was conducted at the Virginia- Maryland Regional College of Veterinary Medicine Equine Center to determine if the performance of a certain type of surgery on young horses had any effect on certain kinds of blood cell types in the animal. Fluid samples were taken from each of six foals before and af- ter surgery. The samples were analyzed for the number of postoperative white blood cell (WBC) leukocytes. A preoperative measure of WBC leukocytes was also measured. 
The data are given as follows: Foal Presurgery* Postsurgery* 1 10.80 10.60 2 12.90 16.60 3 9.59 17.20 4 8.81 14.00 5 12.00 10.60 6 6.07 8.60 *All values × 10−3. Use a paired sample t-test to determine if there is a sig- nificant change in WBC leukocytes with the surgery. 10.106 A study was conducted at the Department of Health and Physical Education at Virginia Tech to de- termine if 8 weeks of training truly reduces the choles- terol levels of the participants. A treatment group con- sisting of 15 people was given lectures twice a week on how to reduce cholesterol level. Another group of 18 people of similar age was randomly selected as a control group. All participants’ cholesterol levels were recorded at the end of the 8-week program and are listed below. Treatment: 129 131 154 172 115 126 175 191 122 238 159 156 176 175 126 Control: 151 132 196 195 188 198 187 168 115 165 137 208 133 217 191 193 140 146 Can we conclude, at the 5% level of significance, that the average cholesterol level has been reduced due to the program? Make the appropriate test on means. 10.107 In a study conducted by the Department of Mechanical Engineering and analyzed by the Statistics Consulting Center at Virginia Tech, steel rods supplied by two different companies were compared. Ten sam- ple springs were made out of the steel rods supplied by each company, and the “bounciness” was studied. The data are as follows: Company A: 9.3 8.8 6.8 8.7 8.5 6.7 8.0 6.5 9.2 7.0 Company B: 11.0 9.8 9.9 10.2 10.1 9.7 11.0 11.1 10.2 9.6 Can you conclude that there is virtually no difference in means between the steel rods supplied by the two companies? Use a P-value to reach your conclusion. Should variances be pooled here? 10.108 In a study conducted by the Water Resources Center and analyzed by the Statistics Consulting Cen- ter at Virginia Tech, two different wastewater treat- ment plants are compared. 
Plant A is located where the median household income is below $22,000 a year, and plant B is located where the median household income is above $60,000 a year. The amount of waste- water treated at each plant (thousands of gallons/day) was randomly sampled for 10 days. The data are as follows: Plant A: 21 19 20 23 22 28 32 19 13 18 Plant B: 20 39 24 33 30 28 30 22 33 24 Can we conclude, at the 5% level of significance, that
the average amount of wastewater treated at the plant in the high-income neighborhood is more than that treated at the plant in the low-income area? Assume normality.

10.109 The following data show the numbers of defects in 100,000 lines of code in a particular type of software program developed in the United States and Japan. Is there enough evidence to claim that there is a significant difference between the programs developed in the two countries? Test on means. Should variances be pooled?
U.S.    48 39 42 52 40 48 52 52 54 48 52 55 43 46 48 52
Japan   50 48 42 40 43 48 50 46 38 38 36 40 40 48 48 45

10.110 Studies show that the concentration of PCBs is much higher in malignant breast tissue than in normal breast tissue. If a study of 50 women with breast cancer reveals an average PCB concentration of 22.8 × 10−4 gram, with a standard deviation of 4.8 × 10−4 gram, is the mean concentration of PCBs less than 24 × 10−4 gram?

10.111 z-Value for Testing p1 − p2 = d0: To test the null hypothesis H0 that p1 − p2 = d0, where d0 ≠ 0, we base our decision on
z = (p̂1 − p̂2 − d0) / √(p̂1q̂1/n1 + p̂2q̂2/n2),
which is a value of a random variable whose distribution approximates the standard normal distribution as long as n1 and n2 are both large. With reference to Example 10.11 on page 364, test the hypothesis that the percentage of town voters favoring the construction of the chemical plant will not exceed the percentage of county voters by more than 3%. Use a P-value in your conclusion.

10.15 Potential Misconceptions and Hazards; Relationship to Material in Other Chapters

One of the easiest ways to misuse statistics relates to the final scientific conclusion drawn when the analyst does not reject the null hypothesis H0. In this text, we have attempted to make clear what the null hypothesis means and what the alternative means, and to stress that, in a large sense, the alternative hypothesis is much more important.
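The statistic defined in Exercise 10.111 takes only a few lines of Python. The counts below are hypothetical, chosen only to illustrate the mechanics (this sketch is not part of the text):

```python
import math

def two_prop_z(x1, n1, x2, n2, d0):
    """z statistic for H0: p1 - p2 = d0, valid for large n1 and n2."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2 - d0) / se

# Hypothetical counts: 120 of 200 town voters vs. 240 of 500 county voters,
# testing whether the town proportion exceeds the county proportion by 3%
z = two_prop_z(120, 200, 240, 500, 0.03)
print(round(z, 2))   # 2.18
```

The P-value is then the appropriate tail area of the standard normal distribution evaluated at z.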
Put in the form of an example, if an engineer is attempting to compare two gauges using a two-sample t-test, and H0 is "the gauges are equivalent" while H1 is "the gauges are not equivalent," not rejecting H0 does not lead to the conclusion of equivalent gauges. In fact, a case can be made for never writing or saying "accept H0"! Not rejecting H0 merely implies insufficient evidence. Depending on the nature of the hypothesis, a lot of possibilities are still not ruled out. In Chapter 9, we considered the case of the large-sample confidence interval using
z = (x̄ − μ) / (s/√n).
In hypothesis testing, replacing σ by s for n < 30 is risky. If n ≥ 30 and the distribution is not normal but somehow close to normal, the Central Limit Theorem is being called upon and one is relying on the fact that with n ≥ 30, s ≈ σ. Of course, any t-test is accompanied by the concomitant assumption of normality. As in the case of confidence intervals, the t-test is relatively robust to departures from normality. However, one should still use normal probability plotting, goodness-of-fit tests, or other graphical procedures when the sample is not too small. Most of the chapters in this text include discussions whose purpose is to relate the chapter in question to other material that will follow. The topics of estimation
  • 408. 10.15 Potential Misconceptions and Hazards 387 and hypothesis testing are both used in a major way in nearly all of the tech- niques that fall under the umbrella of “statistical methods.” This will be readily noted by students who advance to Chapters 11 through 16. It will be obvious that these chapters depend heavily on statistical modeling. Students will be ex- posed to the use of modeling in a wide variety of applications in many scientific and engineering fields. It will become obvious quite quickly that the framework of a statistical model is useless unless data are available with which to estimate parameters in the formulated model. This will become particularly apparent in Chapters 11 and 12 as we introduce the notion of regression models. The concepts and theory associated with Chapter 9 will carry over. As far as material in the present chapter is concerned, the framework of hypothesis testing, P-values, power of tests, and choice of sample size will collectively play a major role. Since initial model formulation quite often must be supplemented by model editing before the analyst is sufficiently comfortable to use the model for either process understand- ing or prediction, Chapters 11, 12, and 15 make major use of hypothesis testing to supplement diagnostic measures that are used to assess model quality.
  • 410. Chapter 11 Simple Linear Regression and Correlation 11.1 Introduction to Linear Regression Often, in practice, one is called upon to solve problems involving sets of variables when it is known that there exists some inherent relationship among the variables. For example, in an industrial situation it may be known that the tar content in the outlet stream in a chemical process is related to the inlet temperature. It may be of interest to develop a method of prediction, that is, a procedure for estimating the tar content for various levels of the inlet temperature from experimental infor- mation. Now, of course, it is highly likely that for many example runs in which the inlet temperature is the same, say 130◦ C, the outlet tar content will not be the same. This is much like what happens when we study several automobiles with the same engine volume. They will not all have the same gas mileage. Houses in the same part of the country that have the same square footage of living space will not all be sold for the same price. Tar content, gas mileage (mpg), and the price of houses (in thousands of dollars) are natural dependent variables, or responses, in these three scenarios. Inlet temperature, engine volume (cubic feet), and square feet of living space are, respectively, natural independent variables, or regressors. A reasonable form of a relationship between the response Y and the regressor x is the linear relationship Y = β0 + β1x, where, of course, β0 is the intercept and β1 is the slope. The relationship is illustrated in Figure 11.1. If the relationship is exact, then it is a deterministic relationship between two scientific variables and there is no random or probabilistic component to it. However, in the examples listed above, as well as in countless other scientific and engineering phenomena, the relationship is not deterministic (i.e., a given x does not always give the same value for Y ). 
As a result, important problems here are probabilistic in nature since the relationship above cannot be viewed as being exact. The concept of regression analysis deals with finding the best relationship 389
Figure 11.1: A linear relationship, Y = β0 + β1x; β0: intercept; β1: slope.

between Y and x, quantifying the strength of that relationship, and using methods that allow for prediction of the response values given values of the regressor x. In many applications, there will be more than one regressor (i.e., more than one independent variable that helps to explain Y). For example, in the case where the response is the price of a house, one would expect the age of the house to contribute to the explanation of the price, so in this case the multiple regression structure might be written Y = β0 + β1x1 + β2x2, where Y is price, x1 is square footage, and x2 is age in years. In the next chapter, we will consider problems with multiple regressors. The resulting analysis is termed multiple regression, while the analysis of the single regressor case is called simple regression. As a second illustration of multiple regression, a chemical engineer may be concerned with the amount of hydrogen lost from samples of a particular metal when the material is placed in storage. In this case, there may be two inputs, storage time x1 in hours and storage temperature x2 in degrees centigrade. The response would then be hydrogen loss Y in parts per million. In this chapter, we deal with the topic of simple linear regression, treating only the case of a single regressor variable in which the relationship between y and x is linear. For the case of more than one regressor variable, the reader is referred to Chapter 12. Denote a random sample of size n by the set {(xi, yi); i = 1, 2, . . . , n}. If additional samples were taken using exactly the same values of x, we should expect the y values to vary. Hence, the value yi in the ordered pair (xi, yi) is a value of some random variable Yi.
11.2 The Simple Linear Regression (SLR) Model We have already confined the terminology regression analysis to situations in which relationships among variables are not deterministic (i.e., not exact). In other words, there must be a random component to the equation that relates the variables.
This random component takes into account considerations that are not being measured or, in fact, are not understood by the scientists or engineers. Indeed, in most applications of regression, the linear equation, say Y = β0 + β1x, is an approximation that is a simplification of something unknown and much more complicated. For example, in our illustration involving the response Y = tar content and x = inlet temperature, Y = β0 + β1x is likely a reasonable approximation that may be operative within a confined range on x. More often than not, the models that are simplifications of more complicated and unknown structures are linear in nature (i.e., linear in the parameters β0 and β1 or, in the case of the model involving the price, size, and age of the house, linear in the parameters β0, β1, and β2). These linear structures are simple and empirical in nature and are thus called empirical models.

An analysis of the relationship between Y and x requires the statement of a statistical model. A model is often used by a statistician as a representation of an ideal that essentially defines how we perceive that the data were generated by the system in question. The model must include the set {(xi, yi); i = 1, 2, . . . , n} of data involving n pairs of (x, y) values. One must bear in mind that the value yi depends on xi via a linear structure that also has the random component involved. The basis for the use of a statistical model relates to how the random variable Y moves with x and the random component. The model also includes what is assumed about the statistical properties of the random component. The statistical model for simple linear regression is given below. The response Y is related to the independent variable x through the equation

Simple Linear Regression Model: Y = β0 + β1x + ε.
In the above, β0 and β1 are unknown intercept and slope parameters, respectively, and ε is a random variable that is assumed to be distributed with E(ε) = 0 and Var(ε) = σ2. The quantity σ2 is often called the error variance or residual variance.

From the model above, several things become apparent. The quantity Y is a random variable since ε is random. The value x of the regressor variable is not random and, in fact, is measured with negligible error. The quantity ε, often called a random error or random disturbance, has constant variance. This portion of the assumptions is often called the homogeneous variance assumption. The presence of this random error, ε, keeps the model from becoming simply a deterministic equation. Now, the fact that E(ε) = 0 implies that at a specific x the y-values are distributed around the true, or population, regression line y = β0 + β1x. If the model is well chosen (i.e., there are no additional important regressors and the linear approximation is good within the ranges of the data), then positive and negative errors around the true regression are reasonable. We must keep in mind that in practice β0 and β1 are not known and must be estimated from data. In addition, the model described above is conceptual in nature. As a result, we never observe the actual ε values in practice and thus we can never draw the true regression line (but we assume it is there). We can only draw an estimated line. Figure 11.2 depicts the nature of hypothetical (x, y) data scattered around a true regression line for a case in which only n = 5 observations are available. Let us emphasize that what we see in Figure 11.2 is not the line that is used by the
scientist or engineer. Rather, the picture merely describes what the assumptions mean! The regression that the user has at his or her disposal will now be described.

Figure 11.2: Hypothetical (x, y) data, with errors ε1, . . . , ε5, scattered around the true regression line E(Y) = β0 + β1x for n = 5.

The Fitted Regression Line

An important aspect of regression analysis is, very simply, to estimate the parameters β0 and β1 (i.e., estimate the so-called regression coefficients). The method of estimation will be discussed in the next section. Suppose we denote the estimates b0 for β0 and b1 for β1. Then the estimated or fitted regression line is given by ŷ = b0 + b1x, where ŷ is the predicted or fitted value. Obviously, the fitted line is an estimate of the true regression line. We expect that the fitted line should be closer to the true regression line when a large amount of data are available. In the following example, we illustrate the fitted line for a real-life pollution study.

One of the more challenging problems confronting the water pollution control field is presented by the tanning industry. Tannery wastes are chemically complex. They are characterized by high values of chemical oxygen demand, volatile solids, and other pollution measures. Consider the experimental data in Table 11.1, which were obtained from 33 samples of chemically treated waste in a study conducted at Virginia Tech. Readings on x, the percent reduction in total solids, and y, the percent reduction in chemical oxygen demand, were recorded. The data of Table 11.1 are plotted in a scatter diagram in Figure 11.3. From an inspection of this scatter diagram, it can be seen that the points closely follow a straight line, indicating that the assumption of linearity between the two variables appears to be reasonable.
Table 11.1: Measures of Reduction in Solids and Oxygen Demand

Solids Reduction,   Oxygen Demand        Solids Reduction,   Oxygen Demand
x (%)               Reduction, y (%)     x (%)               Reduction, y (%)
 3                   5                   36                  34
 7                  11                   37                  36
11                  21                   38                  38
15                  16                   39                  37
18                  16                   39                  36
27                  28                   39                  45
29                  27                   40                  39
30                  25                   41                  41
30                  35                   42                  40
31                  30                   42                  44
31                  40                   43                  37
32                  32                   44                  44
33                  34                   45                  46
33                  32                   46                  46
34                  34                   47                  49
36                  37                   50                  51
36                  38

Figure 11.3: Scatter diagram with regression lines: the fitted line ŷ = b0 + b1x and the true line μY|x = β0 + β1x.

The fitted regression line and a hypothetical true regression line are shown on the scatter diagram of Figure 11.3. This example will be revisited as we move on to the method of estimation, discussed in Section 11.3.
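Anticipating the least squares method of Section 11.3, the line fit to the data of Table 11.1 can be computed with a short pure-Python sketch (illustrative only, not part of the text):

```python
# Data of Table 11.1: x = solids reduction (%), y = oxygen demand reduction (%)
x = [3, 7, 11, 15, 18, 27, 29, 30, 30, 31, 31, 32, 33, 33, 34, 36, 36,
     36, 37, 38, 39, 39, 39, 40, 41, 42, 42, 43, 44, 45, 46, 47, 50]
y = [5, 11, 21, 16, 16, 28, 27, 25, 35, 30, 40, 32, 34, 32, 34, 37, 38,
     34, 36, 38, 37, 36, 45, 39, 41, 40, 44, 37, 44, 46, 46, 49, 51]

n = len(x)
sx, sy = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi * xi for xi in x)

b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope estimate
b0 = (sy - b1 * sx) / n                          # intercept estimate
print(round(b0, 4), round(b1, 4))   # 3.8296 0.9036
```

The resulting fitted line, ŷ = 3.8296 + 0.9036x, is the one drawn through the scatter diagram of Figure 11.3.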
Another Look at the Model Assumptions

It may be instructive to revisit the simple linear regression model presented previously and discuss in a graphical sense how it relates to the so-called true regression. Let us expand on Figure 11.2 by illustrating not merely where the εi fall on a graph but also what the implication is of the normality assumption on the εi. Suppose we have a simple linear regression with n = 6 evenly spaced values of x and a single y-value at each x. Consider the graph in Figure 11.4. This illustration should give the reader a clear representation of the model and the assumptions involved. The line in the graph is the true regression line. The points plotted are actual (x, y) points which are scattered about the line. Each point is on its own normal distribution with the center of the distribution (i.e., the mean of y) falling on the line. This is certainly expected since E(Y) = β0 + β1x. As a result, the true regression line goes through the means of the response, and the actual observations are on the distribution around the means. Note also that all distributions have the same variance, which we referred to as σ2. Of course, the deviation between an individual y and the point on the line will be its individual ε value. This is clear since
yi − E(Yi) = yi − (β0 + β1xi) = εi.
Thus, at a given x, Y and the corresponding ε both have variance σ2.

Figure 11.4: Individual observations at x1, . . . , x6 around the true regression line μY|x = β0 + β1x.

Note also that we have written the true regression line here as μY|x = β0 + β1x in order to reaffirm that the line goes through the mean of the Y random variable.

11.3 Least Squares and the Fitted Model

In this section, we discuss the method of fitting an estimated regression line to the data. This is tantamount to the determination of estimates b0 for β0 and b1
for β1. This of course allows for the computation of predicted values from the fitted line ŷ = b0 + b1x and other types of analyses and diagnostic information that will ascertain the strength of the relationship and the adequacy of the fitted model. Before we discuss the method of least squares estimation, it is important to introduce the concept of a residual. A residual is essentially an error in the fit of the model ŷ = b0 + b1x.

Residual: Error in Fit. Given a set of regression data {(xi, yi); i = 1, 2, . . . , n} and a fitted model, ŷi = b0 + b1xi, the ith residual ei is given by
ei = yi − ŷi, i = 1, 2, . . . , n.

Obviously, if a set of n residuals is large, then the fit of the model is not good. Small residuals are a sign of a good fit. Another interesting relationship which is useful at times is the following:
yi = b0 + b1xi + ei.
The use of the above equation should result in clarification of the distinction between the residuals, ei, and the conceptual model errors, εi. One must bear in mind that whereas the εi are not observed, the ei not only are observed but also play an important role in the total analysis. Figure 11.5 depicts the line fit to this set of data, namely ŷ = b0 + b1x, and the line reflecting the model μY|x = β0 + β1x. Now, of course, β0 and β1 are unknown parameters. The fitted line is an estimate of the line produced by the statistical model. Keep in mind that the line μY|x = β0 + β1x is not known.

Figure 11.5: Comparing εi with the residual, ei.

The Method of Least Squares

We shall find b0 and b1, the estimates of β0 and β1, so that the sum of the squares of the residuals is a minimum. The residual sum of squares is often called the sum of squares of the errors about the regression line and is denoted by SSE. This
minimization procedure for estimating the parameters is called the method of least squares. Hence, we shall find b0 and b1 so as to minimize
SSE = Σᵢ eᵢ² = Σᵢ (yi − ŷi)² = Σᵢ (yi − b0 − b1xi)²,
where the sums run over i = 1 to n. Differentiating SSE with respect to b0 and b1, we have
∂(SSE)/∂b0 = −2 Σᵢ (yi − b0 − b1xi),
∂(SSE)/∂b1 = −2 Σᵢ (yi − b0 − b1xi)xi.
Setting the partial derivatives equal to zero and rearranging the terms, we obtain the equations (called the normal equations)
n b0 + b1 Σᵢ xi = Σᵢ yi,
b0 Σᵢ xi + b1 Σᵢ xi² = Σᵢ xiyi,
which may be solved simultaneously to yield computing formulas for b0 and b1.

Estimating the Regression Coefficients. Given the sample {(xi, yi); i = 1, 2, . . . , n}, the least squares estimates b0 and b1 of the regression coefficients β0 and β1 are computed from the formulas
b1 = [n Σᵢ xiyi − (Σᵢ xi)(Σᵢ yi)] / [n Σᵢ xi² − (Σᵢ xi)²] = Σᵢ (xi − x̄)(yi − ȳ) / Σᵢ (xi − x̄)²
and
b0 = (Σᵢ yi − b1 Σᵢ xi)/n = ȳ − b1x̄.

The calculations of b0 and b1, using the data of Table 11.1, are illustrated by the following example.

Example 11.1: Estimate the regression line for the pollution data of Table 11.1.
Solution: Σᵢ xi = 1104, Σᵢ yi = 1124, Σᵢ xiyi = 41,355, Σᵢ xi² = 41,086.
Therefore,
b1 = [(33)(41,355) − (1104)(1124)] / [(33)(41,086) − (1104)²] = 0.903643
and
b0 = [1124 − (0.903643)(1104)] / 33 = 3.829633.
Thus, the estimated regression line is given by ŷ = 3.8296 + 0.9036x.

Using the regression line of Example 11.1, we would predict a 31% reduction in the chemical oxygen demand when the reduction in the total solids is 30%. The
31% reduction in the chemical oxygen demand may be interpreted as an estimate of the population mean μY|30 or as an estimate of a new observation when the reduction in total solids is 30%. Such estimates, however, are subject to error. Even if the experiment were controlled so that the reduction in total solids was 30%, it is unlikely that we would measure a reduction in the chemical oxygen demand exactly equal to 31%. In fact, the original data recorded in Table 11.1 show that measurements of 25% and 35% were recorded for the reduction in oxygen demand when the reduction in total solids was kept at 30%.

What Is Good about Least Squares?

It should be noted that the least squares criterion is designed to provide a fitted line that results in a “closeness” between the line and the plotted points. There are many ways of measuring closeness. For example, one may wish to determine b0 and b1 for which Σ_{i=1}^{n} |yi − ŷi| is minimized or for which Σ_{i=1}^{n} |yi − ŷi|^1.5 is minimized. These are both viable and reasonable methods. Note that both of these, as well as the least squares procedure, result in forcing the residuals to be “small” in some sense. One should remember that the residuals are the empirical counterpart to the εi values. Figure 11.6 illustrates a set of residuals. One should note that the fitted line has the predicted values as points on the line, and hence the residuals are vertical deviations from the points to the line. As a result, the least squares procedure produces a line that minimizes the sum of squares of vertical deviations from the points to the line.

Figure 11.6: Residuals as vertical deviations.
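The least squares computation above can be sketched in a few lines of code. The following Python sketch uses the deviation-form formulas for b1 and b0; the data values are hypothetical, chosen only to make the arithmetic easy to follow. The final print illustrates a property of a least squares fit with an intercept: the residuals sum to zero.

```python
# Least squares fit via the deviation-form formulas of Section 11.3.
# The (x, y) data below are hypothetical, for illustration only.

def least_squares(x, y):
    """Return (b0, b1) minimizing SSE = sum((y_i - b0 - b1*x_i)^2)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))  # S_xy
    sxx = sum((xi - xbar) ** 2 for xi in x)                       # S_xx
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0]   # hypothetical regressor values
y = [2.0, 4.0, 5.0, 8.0]   # hypothetical responses

b0, b1 = least_squares(x, y)
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

print(b0, b1)          # fitted intercept and slope
print(sum(residuals))  # least squares residuals sum to (numerically) zero
```

For these data the fitted line is ŷ = 0 + 1.9x, and the residuals ei = yi − ŷi are exactly the vertical deviations shown in Figure 11.6.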
Exercises

11.1 A study was conducted at Virginia Tech to determine if certain static arm-strength measures have an influence on the “dynamic lift” characteristics of an individual. Twenty-five individuals were subjected to strength tests and then were asked to perform a weight-lifting test in which weight was dynamically lifted overhead. The data are given here.

Individual  Arm Strength, x  Dynamic Lift, y
 1   17.3    71.7
 2   19.3    48.3
 3   19.5    88.3
 4   19.7    75.0
 5   22.9    91.7
 6   23.1   100.0
 7   26.4    73.3
 8   26.8    65.0
 9   27.6    75.0
10   28.1    88.3
11   28.2    68.3
12   28.7    96.7
13   29.0    76.7
14   29.6    78.3
15   29.9    60.0
16   29.9    71.7
17   30.3    85.0
18   31.3    85.0
19   36.0    88.3
20   39.5   100.0
21   40.4   100.0
22   44.3   100.0
23   44.6    91.7
24   50.4   100.0
25   55.9    71.7

(a) Estimate β0 and β1 for the linear regression curve μY|x = β0 + β1x.
(b) Find a point estimate of μY|30.
(c) Plot the residuals versus the x’s (arm strength). Comment.

11.2 The grades of a class of 9 students on a midterm report (x) and on the final examination (y) are as follows:

x  77  50  71  72  81  94  96  99  67
y  82  66  78  34  47  85  99  99  68

(a) Estimate the linear regression line.
(b) Estimate the final examination grade of a student who received a grade of 85 on the midterm report.

11.3 The amounts of a chemical compound y that dissolved in 100 grams of water at various temperatures x were recorded as follows (three observations of y at each temperature):

x (°C)   y (grams)
 0    8   6   8
15   12  10  14
30   25  21  24
45   31  33  28
60   44  39  42
75   48  51  44

(a) Find the equation of the regression line.
(b) Graph the line on a scatter diagram.
(c) Estimate the amount of chemical that will dissolve in 100 grams of water at 50°C.

11.4 The following data were collected to determine the relationship between pressure and the corresponding scale reading for the purpose of calibration.

Pressure, x (lb/sq in.)  Scale Reading, y
10  13
10  18
10  16
10  15
10  20
50  86
50  90
50  88
50  88
50  92

(a) Find the equation of the regression line.
(b) The purpose of calibration in this application is to estimate pressure from an observed scale reading. Estimate the pressure for a scale reading of 54 using x̂ = (54 − b0)/b1.

11.5 A study was made on the amount of converted sugar in a certain process at various temperatures. The data were coded and recorded as follows:

Temperature, x  Converted Sugar, y
1.0    8.1
1.1    7.8
1.2    8.5
1.3    9.8
1.4    9.5
1.5    8.9
1.6    8.6
1.7   10.2
1.8    9.3
1.9    9.2
2.0   10.5

(a) Estimate the linear regression line.
(b) Estimate the mean amount of converted sugar produced when the coded temperature is 1.75.
(c) Plot the residuals versus temperature. Comment.
11.6 In a certain type of metal test specimen, the normal stress on a specimen is known to be functionally related to the shear resistance. The following is a set of coded experimental data on the two variables:

Normal Stress, x  Shear Resistance, y
26.8  26.5
25.4  27.3
28.9  24.2
23.6  27.1
27.7  23.6
23.9  25.9
24.7  26.3
28.1  22.5
26.9  21.7
27.4  21.4
22.6  25.8
25.6  24.9

(a) Estimate the regression line μY|x = β0 + β1x.
(b) Estimate the shear resistance for a normal stress of 24.5.

11.7 The following is a portion of a classic data set called the “pilot plant data” in Fitting Equations to Data by Daniel and Wood, published in 1971. The response y is the acid content of material produced by titration, whereas the regressor x is the organic acid content produced by extraction and weighing.

y    x      y    x
76  123    70  109
62   55    37   48
66  100    82  138
58   75    88  164
88  159    43   28

(a) Plot the data; does it appear that a simple linear regression will be a suitable model?
(b) Fit a simple linear regression; estimate a slope and intercept.
(c) Graph the regression line on the plot in (a).

11.8 A mathematics placement test is given to all entering freshmen at a small college. A student who receives a grade below 35 is denied admission to the regular mathematics course and placed in a remedial class. The placement test scores and the final grades for 20 students who took the regular course were recorded.

(a) Plot a scatter diagram.
(b) Find the equation of the regression line to predict course grades from placement test scores.
(c) Graph the line on the scatter diagram.
(d) If 60 is the minimum passing grade, below which placement test score should students in the future be denied admission to this course?
Placement Test  Course Grade
50  53
35  41
35  61
40  56
55  68
65  36
35  11
60  70
90  79
35  59
90  54
80  91
60  48
60  71
60  71
40  47
55  53
50  68
65  57
50  79

11.9 A study was made by a retail merchant to determine the relation between weekly advertising expenditures and sales.

Advertising Costs ($)  Sales ($)
40  385
20  400
25  395
20  365
30  475
50  440
40  490
20  420
50  560
40  525
25  480
50  510

(a) Plot a scatter diagram.
(b) Find the equation of the regression line to predict weekly sales from advertising expenditures.
(c) Estimate the weekly sales when advertising costs are $35.
(d) Plot the residuals versus advertising costs. Comment.

11.10 The following data are the selling prices z of a certain make and model of used car w years old. Fit a curve of the form μz|w = γδ^w by means of the nonlinear sample regression equation ẑ = c d^w. [Hint: Write ln ẑ = ln c + (ln d)w = b0 + b1w.]

w (years)  z (dollars)
1  6350
2  5695
2  5750
3  5395
5  4985
5  4895
11.11 The thrust of an engine (y) is a function of exhaust temperature (x) in °F when other important variables are held constant. Consider the following data.

y     x       y     x
4300  1760    4010  1665
4650  1652    3810  1550
3200  1485    4500  1700
3150  1390    3008  1270
4950  1820

(a) Plot the data.
(b) Fit a simple linear regression to the data and plot the line through the data.

11.12 A study was done to examine the effect of ambient temperature x on the electric power consumed by a chemical plant y. Other factors were held constant, and the data were collected from an experimental pilot plant.

y (BTU)  x (°F)    y (BTU)  x (°F)
250  27     265  31
285  45     298  60
320  72     267  34
295  58     321  74

(a) Plot the data.
(b) Estimate the slope and intercept in a simple linear regression model.
(c) Predict power consumption for an ambient temperature of 65°F.

11.13 A study of the amount of rainfall and the quantity of air pollution removed produced the following data:

Daily Rainfall, x (0.01 cm)  Particulate Removed, y (μg/m³)
4.3  126
4.5  121
5.9  116
5.6  118
6.1  114
5.2  118
3.8  132
2.1  141
7.5  108

(a) Find the equation of the regression line to predict the particulate removed from the amount of daily rainfall.
(b) Estimate the amount of particulate removed when the daily rainfall is x = 4.8 units.

11.14 A professor in the School of Business of a university polled a dozen colleagues about the number of professional meetings they attended in the past five years (x) and the number of papers they submitted to refereed journals (y) during the same period. The summary data are given as follows:

  n = 12,  x̄ = 4,  ȳ = 12,  Σ_{i=1}^{n} xi² = 232,  Σ_{i=1}^{n} xiyi = 318.

Fit a simple linear regression model between x and y by finding the estimates of the intercept and slope. Comment on whether attending more professional meetings would result in publishing more papers.
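As Example 11.1 and Exercise 11.14 illustrate, the computing formulas for b0 and b1 require only the summary sums, not the raw data. A minimal Python sketch, using the sums reported in Example 11.1 to check the values obtained there:

```python
# Least squares estimates from summary sums alone, via the computing
# formulas of Section 11.3. The sums below are those of Example 11.1
# (pollution data of Table 11.1, n = 33).
n = 33
sum_x, sum_y = 1104, 1124
sum_xy, sum_x2 = 41355, 41086

b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b0 = (sum_y - b1 * sum_x) / n

print(round(b1, 6), round(b0, 6))  # ≈ 0.903643 and 3.829633, as in Example 11.1
```

The same few lines, with the sums swapped out, reproduce the hand computations asked for in several of the exercises above.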
11.4 Properties of the Least Squares Estimators

In addition to the assumptions that the error term in the model

  Yi = β0 + β1xi + εi

is a random variable with mean 0 and constant variance σ², suppose that we make the further assumption that ε1, ε2, . . . , εn are independent from run to run in the experiment. This provides a foundation for finding the