Multiple Regression Analysis  and Model Building
Chapter Goals
After completing this chapter, you should be able to:
- explain model building using multiple regression analysis
- apply multiple regression analysis to business decision-making situations
- analyze and interpret the computer output for a multiple regression model
- test the significance of the independent variables in a multiple regression model
Chapter Goals (continued)
After completing this chapter, you should be able to:
- recognize potential problems in multiple regression analysis and take steps to correct the problems
- incorporate qualitative variables into the regression model by using dummy variables
- use variable transformations to model nonlinear relationships
The Multiple Regression Model
Idea: Examine the linear relationship between 1 dependent variable (y) and 2 or more independent variables (xi)
Population model (β0 = y-intercept, β1...βk = population slopes, ε = random error):
  y = β0 + β1x1 + β2x2 + ... + βkxk + ε
Estimated multiple regression model (b0 = estimated intercept, b1...bk = estimated slope coefficients, ŷ = estimated or predicted value of y):
  ŷ = b0 + b1x1 + b2x2 + ... + bkxk
Multiple Regression Model
Two variable model (figure: regression plane for y against x1 and x2, showing the slope for variable x1 and the slope for variable x2)
Multiple Regression Model
Two variable model (figure: a sample observation (x1i, x2i, yi) and its predicted value ŷi on the regression plane; the residual is e = (y – ŷ))
The best-fit equation, ŷ, is found by minimizing the sum of squared errors, Σe²
Multiple Regression Assumptions
Errors (residuals) from the regression model: e = (y – ŷ)
- The model errors are independent and random
- The errors are normally distributed
- The mean of the errors is zero
- Errors have a constant variance
Model Specification Decide what you want to do and select the dependent variable Determine the potential independent variables for your model Gather sample data (observations) for all variables
The Correlation Matrix Correlation between the dependent variable and selected independent variables can be found using Excel: Data Tab:  Data Analysis / Correlation Can check for statistical significance of correlation with a t test
Example A distributor of frozen dessert pies wants to evaluate factors thought to influence demand Dependent variable:  Pie sales (units per week) Independent variables:  Price (in $)   Advertising ($100’s) Data are collected for 15 weeks
Pie Sales Model
Multiple regression model: Sales = b0 + b1 (Price) + b2 (Advertising)

Week   Pie Sales   Price ($)   Advertising ($100s)
  1       350        5.50           3.3
  2       460        7.50           3.3
  3       350        8.00           3.0
  4       430        8.00           4.5
  5       350        6.80           3.0
  6       380        7.50           4.0
  7       430        4.50           3.0
  8       470        6.40           3.7
  9       450        7.00           3.5
 10       490        5.00           4.0
 11       340        7.20           3.5
 12       300        7.90           3.2
 13       440        5.90           4.0
 14       450        5.00           3.5
 15       300        7.00           2.7

Correlation matrix:
               Pie Sales    Price    Advertising
Pie Sales       1
Price          -0.44327      1
Advertising     0.55632     0.03044      1
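The coefficient estimates reported later in the Excel output can be reproduced from this table with ordinary least squares. The sketch below uses NumPy rather than the slides' Excel/PHStat workflow (an assumption of this example):

```python
import numpy as np

# Pie sales data for the 15 weeks, keyed to the table above
sales = np.array([350, 460, 350, 430, 350, 380, 430, 470,
                  450, 490, 340, 300, 440, 450, 300], dtype=float)
price = np.array([5.50, 7.50, 8.00, 8.00, 6.80, 7.50, 4.50, 6.40,
                  7.00, 5.00, 7.20, 7.90, 5.90, 5.00, 7.00])
advertising = np.array([3.3, 3.3, 3.0, 4.5, 3.0, 4.0, 3.0, 3.7,
                        3.5, 4.0, 3.5, 3.2, 4.0, 3.5, 2.7])

# Design matrix: a column of ones (intercept) plus the two x variables
X = np.column_stack([np.ones(len(sales)), price, advertising])

# Least squares estimates b = (X'X)^-1 X'y
b, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(np.round(b, 3))  # intercept, price slope, advertising slope
```

The three printed values should match the Coefficients column of the regression output shown later (306.526, -24.975, 74.131).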
Interpretation of Estimated Coefficients Slope (b i ) Estimates that the average value of y changes by b i  units for each 1 unit increase in X i  holding all other variables constant Example: if b 1  = -20, then sales (y) is expected to decrease by an estimated 20 pies per week for each $1 increase in selling price (x 1 ), net of the effects of changes due to advertising (x 2 ) y-intercept (b 0 ) The estimated average value of y when all x i  = 0 (assuming all x i  = 0 is within the range of observed values)
Pie Sales Correlation Matrix
               Pie Sales    Price    Advertising
Pie Sales       1
Price          -0.44327      1
Advertising     0.55632     0.03044      1

Price vs. Sales: r = -0.44327. There is a negative association between price and sales.
Advertising vs. Sales: r = 0.55632. There is a positive association between advertising and sales.
Scatter Diagrams (figures: Sales vs. Price and Sales vs. Advertising)
Estimating a Multiple Linear  Regression Equation Computer software is generally used to generate the coefficients and measures of goodness of fit for multiple regression Excel: Data / Data Analysis / Regression PHStat: Add-Ins / PHStat / Regression / Multiple Regression…
Estimating a Multiple Linear  Regression Equation Excel: Data / Data Analysis / Regression
Estimating a Multiple Linear  Regression Equation PHStat: Add-Ins / PHStat / Regression / Multiple Regression…
Multiple Regression Output

Regression Statistics
Multiple R             0.72213
R Square               0.52148
Adjusted R Square      0.44172
Standard Error        47.46341
Observations          15

ANOVA
              df    SS          MS          F         Significance F
Regression     2    29460.027   14730.013   6.53861   0.01201
Residual      12    27033.306    2252.776
Total         14    56493.333

              Coefficients   Standard Error   t Stat     P-value   Lower 95%    Upper 95%
Intercept     306.52619      114.25389         2.68285   0.01993    57.58835    555.46404
Price         -24.97509       10.83213        -2.30565   0.03979   -48.57626     -1.37392
Advertising    74.13096       25.96732         2.85478   0.01449    17.55303    130.70888
The Multiple Regression Equation
Sales = 306.526 - 24.975 (Price) + 74.131 (Advertising)
where Sales is in number of pies per week, Price is in $, and Advertising is in $100's
b1 = -24.975: sales will decrease, on average, by 24.975 pies per week for each $1 increase in selling price, net of the effects of changes due to advertising
b2 = 74.131: sales will increase, on average, by 74.131 pies per week for each $100 increase in advertising, net of the effects of changes due to price
Using the Model to Make Predictions
Predict sales for a week in which the selling price is $5.50 and advertising is $350:
  Sales = 306.526 - 24.975 (5.50) + 74.131 (3.5) = 428.62
Predicted sales is 428.62 pies
Note that Advertising is in $100's, so $350 means that x2 = 3.5
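The prediction can be checked by hand with plain arithmetic, using the rounded coefficients from the regression output:

```python
b0, b1, b2 = 306.526, -24.975, 74.131  # rounded estimates from the output
price = 5.50         # selling price in $
advertising = 3.5    # $350 of advertising, expressed in $100's
predicted_sales = b0 + b1 * price + b2 * advertising
print(round(predicted_sales, 2))  # 428.62
```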
Predictions in PHStat PHStat | regression | multiple regression … Check the “confidence and prediction interval estimates” box
Predictions in PHStat (continued)
(screenshot: the input values, the predicted ŷ value, the confidence interval for the mean y value given these x's, and the prediction interval for an individual y value given these x's)
Multiple Coefficient of Determination (R²)
Reports the proportion of total variation in y explained by all x variables taken together:
  R² = SSR / SST  (Sum of Squares Regression / Total Sum of Squares)
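As a quick numeric check, R² is the regression sum of squares over the total sum of squares from the ANOVA table:

```python
ssr = 29460.027  # SS Regression from the ANOVA table
sst = 56493.333  # SS Total
r2 = ssr / sst
print(round(r2, 5))  # 0.52148
```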
Multiple Coefficient of Determination (continued)
From the regression output above: R Square = 0.52148
52.1% of the variation in pie sales is explained by the variation in price and advertising
Adjusted R 2 R 2   never decreases when a new  x  variable is added to the model This can be a disadvantage when comparing models What is the net effect of adding a new variable? We lose a degree of freedom when a new  x variable is added Did the new  x  variable add enough explanatory power to offset the loss of one degree of freedom?
Adjusted R² (continued)
Shows the proportion of variation in y explained by all x variables, adjusted for the number of x variables used:
  R²_adj = 1 - (1 - R²) [(n - 1) / (n - k - 1)]
(where n = sample size, k = number of independent variables)
- Penalizes excessive use of unimportant independent variables
- Smaller than R²
- Useful in comparing models
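Plugging the pie-sales numbers into this adjustment (n = 15, k = 2):

```python
r2, n, k = 0.52148, 15, 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(adj_r2, 4))  # 0.4417, matching the Excel output
```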
Multiple Coefficient of Determination (continued)
From the regression output above: Adjusted R Square = 0.44172
44.2% of the variation in pie sales is explained by the variation in price and advertising, taking into account the sample size and number of independent variables
Is the Model Significant? F-Test for Overall Significance of the Model Shows if there is a linear relationship between all of the  x  variables considered together and  y Use F test statistic Hypotheses: H 0 :  β 1  =  β 2  = … =  β k  = 0  (no linear relationship) H A :  at least one  β i   ≠  0  (at least one independent   variable affects y)
F-Test for Overall Significance (continued)
Test statistic:
  F = MSR / MSE = (SSR / k) / (SSE / (n - k - 1))
where F has D1 = k (numerator) and D2 = (n - k - 1) (denominator) degrees of freedom
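For the pie-sales model, the F statistic can be recomputed from the ANOVA sums of squares:

```python
ssr, sse = 29460.027, 27033.306  # SS Regression and SS Residual
n, k = 15, 2
msr = ssr / k              # mean square regression
mse = sse / (n - k - 1)    # mean square error
F = msr / mse
print(round(F, 4))  # 6.5386
```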
F-Test for Overall Significance (continued)
From the regression output above: F = 6.53861 with 2 and 12 degrees of freedom; Significance F (the p-value for the F-test) = 0.01201
F-Test for Overall Significance (continued)
H0: β1 = β2 = 0
HA: β1 and β2 not both zero
α = .05, df1 = 2, df2 = 12
Critical value: F.05 = 3.885
Test statistic: F = 6.5386
Decision: Reject H0 at α = 0.05, since F = 6.5386 > F.05 = 3.885
Conclusion: The regression model does explain a significant portion of the variation in pie sales (there is evidence that at least one independent variable affects y)
Are Individual Variables Significant? Use t-tests of individual variable slopes Shows if there is a linear relationship between the variable  x i   and  y Hypotheses: H 0 :  β i   = 0 (no linear relationship) H A :  β i   ≠  0  (linear relationship does exist   between  x i   and  y)
Are Individual Variables Significant? (continued)
H0: βi = 0 (no linear relationship)
HA: βi ≠ 0 (linear relationship does exist between xi and y)
Test statistic:
  t = bi / s_bi   (df = n - k - 1)
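For the Price variable in the pie-sales output, this works out to:

```python
b_price = -24.97509   # Price coefficient from the output
se_price = 10.83213   # its standard error
t_price = b_price / se_price
print(round(t_price, 3))  # -2.306
```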
Are Individual Variables Significant? (continued)
From the regression output above:
t-value for Price is t = -2.306, with p-value .0398
t-value for Advertising is t = 2.855, with p-value .0145
Inferences about the Slope: t Test Example
H0: βi = 0;  HA: βi ≠ 0
d.f. = 15 - 2 - 1 = 12, α = .05, tα/2 = 2.1788
From the Excel output:
              Coefficients   Standard Error   t Stat     P-value
Price         -24.97509       10.83213        -2.30565   0.03979
Advertising    74.13096       25.96732         2.85478   0.01449
Decision: Reject H0 for each variable. The test statistic for each variable falls in the rejection region (beyond ±2.1788; p-values < .05)
Conclusion: There is evidence that both Price and Advertising affect pie sales at α = .05
Confidence Interval Estimate for the Slope
Confidence interval for the population slope β1 (the effect of changes in price on pie sales):
  b1 ± tα/2 s_b1,  where t has (n - k - 1) d.f.
From the output:
              Coefficients   Standard Error   Lower 95%    Upper 95%
Intercept     306.52619      114.25389         57.58835    555.46404
Price         -24.97509       10.83213        -48.57626     -1.37392
Advertising    74.13096       25.96732         17.55303    130.70888
Example: Weekly sales are estimated to be reduced by between 1.37 and 48.58 pies for each $1 increase in the selling price
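The Price interval can be reproduced from the coefficient, its standard error, and t.025 with 12 d.f.:

```python
b1 = -24.97509       # Price coefficient
s_b1 = 10.83213      # its standard error
t_crit = 2.1788      # t for alpha/2 = .025 with 12 d.f.
lower = b1 - t_crit * s_b1
upper = b1 + t_crit * s_b1
print(round(lower, 3), round(upper, 3))  # -48.576 -1.374
```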
Standard Deviation of the Regression Model
The estimate of the standard deviation of the regression model is:
  s_ε = √(SSE / (n - k - 1)) = √MSE
Is this value large or small? Must compare to the mean size of y for comparison
Standard Deviation of the Regression Model (continued)
From the regression output above, the standard deviation of the regression model is 47.46 (Standard Error = 47.46341)
Standard Deviation of the Regression Model (continued)
The standard deviation of the regression model is 47.46
A rough prediction range for pie sales in a given week is ŷ ± 2(47.46), i.e. about ±95 pies
Pie sales in the sample were in the 300 to 500 per week range, so this range is probably too large to be acceptable. The analyst may want to look for additional variables that can explain more of the variation in weekly sales
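Both the standard error and the rough ±2s prediction half-width follow directly from the ANOVA table:

```python
import math

sse = 27033.306      # SS Residual from the ANOVA table
n, k = 15, 2
s_e = math.sqrt(sse / (n - k - 1))   # standard error of the estimate
print(round(s_e, 2))        # 47.46
print(round(2 * s_e, 1))    # rough prediction half-width: 94.9
```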
Multicollinearity Multicollinearity:  High correlation exists between two independent variables This means the two variables contribute redundant information to the multiple regression model
Multicollinearity Including two highly correlated independent variables can adversely affect the regression results No new information provided Can lead to unstable coefficients (large standard error and low t-values) Coefficient signs may not match prior expectations (continued)
Some Indications of Severe Multicollinearity Incorrect signs on the coefficients Large change in the value of a previous coefficient when a new variable is added to the model A previously significant variable becomes insignificant when a new independent variable is added The estimate of the standard deviation of the model increases when a variable is added to the model
Qualitative (Dummy) Variables
- Categorical explanatory variable (dummy variable) with two or more levels: yes or no, on or off, male or female
- Coded as 0 or 1
- Regression intercepts are different if the variable is significant
- Assumes equal slopes for the other variables
- The number of dummy variables needed is (number of levels – 1)
Dummy-Variable Model Example (with 2 Levels)
Let: y = pie sales, x1 = price, x2 = holiday (x2 = 1 if a holiday occurred during the week; x2 = 0 if there was no holiday that week)
Dummy-Variable Model Example (with 2 Levels) (continued)
(figure: sales against price as two parallel lines with the same slope; the Holiday line has intercept b0 + b2 and the No Holiday line has intercept b0)
If H0: β2 = 0 is rejected, then “Holiday” has a significant effect on pie sales
Interpreting the Dummy Variable Coefficient (with 2 Levels)
Example: Sales = b0 + b1 (Price) + b2 (Holiday)
Sales: number of pies sold per week
Price: pie price in $
Holiday: 1 if a holiday occurred during the week; 0 if no holiday occurred
b2 = 15: on average, sales were 15 pies greater in weeks with a holiday than in weeks without a holiday, given the same price
Dummy-Variable Models  (more than 2 Levels) The number of dummy variables is  one less than the number of levels Example: y = house price ;  x 1  = square feet The style of the house is also thought to matter: Style =  ranch,  split level,  condo Three levels, so two dummy variables are needed
Dummy-Variable Models (more than 2 Levels) (continued)
Let the default category be “condo”:
  ŷ = b0 + b1x1 + b2x2 + b3x3
where x2 = 1 if ranch (0 otherwise) and x3 = 1 if split level (0 otherwise)
b2 shows the impact on price if the house is a ranch style, compared to a condo
b3 shows the impact on price if the house is a split level style, compared to a condo
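The coding rule (one fewer dummy than levels, with the default category getting all zeros) can be sketched as a small helper; the function name and return keys are hypothetical, not from the slides:

```python
# Hypothetical helper: encode the 3-level house style as two 0/1 dummies,
# with "condo" as the default (omitted) category
def style_dummies(style):
    return {
        "x2_ranch": 1 if style == "ranch" else 0,
        "x3_split": 1 if style == "split level" else 0,
    }

print(style_dummies("ranch"))        # x2 = 1, x3 = 0
print(style_dummies("split level"))  # x2 = 0, x3 = 1
print(style_dummies("condo"))        # x2 = 0, x3 = 0 (the default category)
```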
Interpreting the Dummy Variable Coefficients (with 3 Levels)
Suppose the estimated equation is ŷ = b0 + b1x1 + 23.53x2 + 18.84x3
For a condo: x2 = x3 = 0
For a ranch: x3 = 0
For a split level: x2 = 0
With the same square feet, a ranch will have an estimated average price of 23.53 thousand dollars more than a condo
With the same square feet, a split level will have an estimated average price of 18.84 thousand dollars more than a condo
Model Building
Goal is to develop a model with the best set of independent variables
- Easier to interpret if unimportant variables are removed
- Lower probability of collinearity
Stepwise regression procedure
- Provides evaluation of alternative models as variables are added
Best-subsets approach
- Try all combinations and select the best using the highest adjusted R² and lowest s_ε
Idea:   develop the least squares regression equation in steps, either through  forward selection ,  backward elimination , or through  standard stepwise regression Stepwise Regression
Best Subsets Regression Idea:  estimate all possible regression equations using  all possible combinations  of independent variables Choose the best fit by looking for the  highest adjusted R 2  and  lowest standard error s ε Stepwise regression and best subsets regression can be performed using PHStat, Minitab, or other statistical software packages
Aptness of the Model
Diagnostic checks on the model include verifying the assumptions of multiple regression:
- Errors are independent and random
- Errors are normally distributed
- Errors have constant variance
- Each xi is linearly related to y
Errors (or residuals) are given by e = (y – ŷ)
Residual Analysis
(figures: four residual-versus-x plots contrasting non-constant variance with constant variance, and residuals that are not independent with residuals that are independent)
The Normality Assumption Errors are assumed to be normally distributed Standardized residuals can be calculated by computer Examine a  histogram  or a  normal probability plot  of the standardized residuals to check for normality
Chapter Summary Developed the multiple regression model Tested the significance of the multiple regression model Developed adjusted R 2 Tested individual regression coefficients Used dummy variables
Chapter Summary Described multicollinearity Discussed model building Stepwise regression Best subsets regression Examined residual plots to check model assumptions (continued)

More Related Content

PPT
Multiple Regression.ppt
PDF
PPTX
Lesson 2 stationary_time_series
PPT
Granger causality test
PPT
Auto Correlation Presentation
PPTX
Chap12 multiple regression
PDF
Factor analysis
PPTX
Conjoint analysis
Multiple Regression.ppt
Lesson 2 stationary_time_series
Granger causality test
Auto Correlation Presentation
Chap12 multiple regression
Factor analysis
Conjoint analysis

What's hot (20)

PDF
6. bounds test for cointegration within ardl or vecm
PPTX
Final observation ppt2
PPTX
BS6_Measurement of Trend.pptx
PPT
Mba 532 2011_part_3_time_series_analysis
PPT
Two sample t-test
PPTX
Application of Univariate, Bi-variate and Multivariate analysis Pooja k shetty
PDF
Time Series - 1
PPTX
Elements of inferential statistics
DOCX
Kebebasan Galat
PPT
Statistika non parametrik
PPTX
Time Series Analysis.pptx
PPTX
Lesson 1 introduction_to_time_series
PPTX
Heteroscedasticity
PPTX
Basic Statistics in 1 hour.pptx
PDF
Analysis of variance
PPTX
Inferential statistics quantitative data - anova
PPTX
One-Sample Hypothesis Tests
PPT
T test statistic
PPT
linear Regression, multiple Regression and Annova
PPTX
Multiple Regression Analysis (MRA)
6. bounds test for cointegration within ardl or vecm
Final observation ppt2
BS6_Measurement of Trend.pptx
Mba 532 2011_part_3_time_series_analysis
Two sample t-test
Application of Univariate, Bi-variate and Multivariate analysis Pooja k shetty
Time Series - 1
Elements of inferential statistics
Kebebasan Galat
Statistika non parametrik
Time Series Analysis.pptx
Lesson 1 introduction_to_time_series
Heteroscedasticity
Basic Statistics in 1 hour.pptx
Analysis of variance
Inferential statistics quantitative data - anova
One-Sample Hypothesis Tests
T test statistic
linear Regression, multiple Regression and Annova
Multiple Regression Analysis (MRA)
Ad

Similar to Multiple Regression (20)

PPT
ch13 multiple regression ppt: introduction to multiple regression
PDF
Bbs11 ppt ch14
PPT
Lesson07_new
PPT
Introduction to Multiple Regression
PPTX
Statr session 23 and 24
PPT
Newbold_chap13.ppt
PDF
Stat_AMBA_600_Problem Set3
PPTX
IBM401 Lecture 5
PDF
6. Multiple Regression Analysis Using R.pdf
PDF
Group 5 - Regression Analysis.pdf
PPTX
01_SLR_final (1).pptx
PPT
Chap13 intro to multiple regression
PPTX
Me ppt
PPTX
Bivariate
PPT
Chapter14
PPT
koefisienkorelasiUNTUKILMUMANAJEMENS2.ppt
PPT
Regression analysis
DOCX
Chapter 15Multiple Regression and Model BuildingCo
PPTX
An Introduction to Regression Models: Linear and Logistic approaches
DOC
Marketing Engineering Notes
ch13 multiple regression ppt: introduction to multiple regression
Bbs11 ppt ch14
Lesson07_new
Introduction to Multiple Regression
Statr session 23 and 24
Newbold_chap13.ppt
Stat_AMBA_600_Problem Set3
IBM401 Lecture 5
6. Multiple Regression Analysis Using R.pdf
Group 5 - Regression Analysis.pdf
01_SLR_final (1).pptx
Chap13 intro to multiple regression
Me ppt
Bivariate
Chapter14
koefisienkorelasiUNTUKILMUMANAJEMENS2.ppt
Regression analysis
Chapter 15Multiple Regression and Model BuildingCo
An Introduction to Regression Models: Linear and Logistic approaches
Marketing Engineering Notes
Ad

Recently uploaded (20)

PDF
1 - Historical Antecedents, Social Consideration.pdf
PPTX
OMC Textile Division Presentation 2021.pptx
PDF
Accuracy of neural networks in brain wave diagnosis of schizophrenia
PDF
Enhancing emotion recognition model for a student engagement use case through...
PDF
Hindi spoken digit analysis for native and non-native speakers
PPTX
TechTalks-8-2019-Service-Management-ITIL-Refresh-ITIL-4-Framework-Supports-Ou...
PDF
Mushroom cultivation and it's methods.pdf
PPTX
A Presentation on Touch Screen Technology
PDF
Unlocking AI with Model Context Protocol (MCP)
PPTX
cloud_computing_Infrastucture_as_cloud_p
PDF
A novel scalable deep ensemble learning framework for big data classification...
PDF
Encapsulation_ Review paper, used for researhc scholars
PDF
Hybrid model detection and classification of lung cancer
PDF
Zenith AI: Advanced Artificial Intelligence
PDF
DP Operators-handbook-extract for the Mautical Institute
PDF
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
PDF
A comparative analysis of optical character recognition models for extracting...
PDF
Assigned Numbers - 2025 - Bluetooth® Document
PDF
Approach and Philosophy of On baking technology
PDF
project resource management chapter-09.pdf
1 - Historical Antecedents, Social Consideration.pdf
OMC Textile Division Presentation 2021.pptx
Accuracy of neural networks in brain wave diagnosis of schizophrenia
Enhancing emotion recognition model for a student engagement use case through...
Hindi spoken digit analysis for native and non-native speakers
TechTalks-8-2019-Service-Management-ITIL-Refresh-ITIL-4-Framework-Supports-Ou...
Mushroom cultivation and it's methods.pdf
A Presentation on Touch Screen Technology
Unlocking AI with Model Context Protocol (MCP)
cloud_computing_Infrastucture_as_cloud_p
A novel scalable deep ensemble learning framework for big data classification...
Encapsulation_ Review paper, used for researhc scholars
Hybrid model detection and classification of lung cancer
Zenith AI: Advanced Artificial Intelligence
DP Operators-handbook-extract for the Mautical Institute
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
A comparative analysis of optical character recognition models for extracting...
Assigned Numbers - 2025 - Bluetooth® Document
Approach and Philosophy of On baking technology
project resource management chapter-09.pdf

Multiple Regression

  • 1. Multiple Regression Analysis and Model Building
  • 2. Chapter Goals After completing this chapter, you should be able to: explain model building using multiple regression analysis apply multiple regression analysis to business decision-making situations analyze and interpret the computer output for a multiple regression model test the significance of the independent variables in a multiple regression model
  • 3. Chapter Goals After completing this chapter, you should be able to: recognize potential problems in multiple regression analysis and take steps to correct the problems incorporate qualitative variables into the regression model by using dummy variables use variable transformations to model nonlinear relationships (continued)
  • 4. The Multiple Regression Model Idea: Examine the linear relationship between 1 dependent (y) & 2 or more independent variables (x i ) Population model: Y-intercept Population slopes Random Error Estimated (or predicted) value of y Estimated slope coefficients Estimated multiple regression model: Estimated intercept
  • 5. Multiple Regression Model Two variable model y x 1 x 2 Slope for variable x 1 Slope for variable x 2
  • 6. Multiple Regression Model Two variable model y x 1 x 2 y i y i < e = (y – y) < x 2i x 1i The best fit equation, y , is found by minimizing the sum of squared errors,  e 2 < Sample observation
  • 7. Multiple Regression Assumptions The model errors are independent and random The errors are normally distributed The mean of the errors is zero Errors have a constant variance e = (y – y) < Errors (residuals) from the regression model:
  • 8. Model Specification Decide what you want to do and select the dependent variable Determine the potential independent variables for your model Gather sample data (observations) for all variables
  • 9. The Correlation Matrix Correlation between the dependent variable and selected independent variables can be found using Excel: Formula Tab: Data Analysis / Correlation Can check for statistical significance of correlation with a t test
  • 10. Example A distributor of frozen desert pies wants to evaluate factors thought to influence demand Dependent variable: Pie sales (units per week) Independent variables: Price (in $) Advertising ($100’s) Data are collected for 15 weeks
  • 11. Pie Sales Model Sales = b 0 + b 1 (Price) + b 2 (Advertising) Correlation matrix: Multiple regression model: 2.7 7.00 300 15 3.5 5.00 450 14 4.0 5.90 440 13 3.2 7.90 300 12 3.5 7.20 340 11 4.0 5.00 490 10 3.5 7.00 450 9 3.7 6.40 470 8 3.0 4.50 430 7 4.0 7.50 380 6 3.0 6.80 350 5 4.5 8.00 430 4 3.0 8.00 350 3 3.3 7.50 460 2 3.3 5.50 350 1 Advertising ($100s) Price ($) Pie Sales Week 1 0.03044 0.55632 Advertising 1 -0.44327 Price 1 Pie Sales Advertising Price Pie Sales  
  • 12. Interpretation of Estimated Coefficients Slope (b i ) Estimates that the average value of y changes by b i units for each 1 unit increase in X i holding all other variables constant Example: if b 1 = -20, then sales (y) is expected to decrease by an estimated 20 pies per week for each $1 increase in selling price (x 1 ), net of the effects of changes due to advertising (x 2 ) y-intercept (b 0 ) The estimated average value of y when all x i = 0 (assuming all x i = 0 is within the range of observed values)
  • 13. Pie Sales Correlation Matrix Price vs. Sales : r = -0.44327 There is a negative association between price and sales Advertising vs. Sales : r = 0.55632 There is a positive association between advertising and sales 1 0.03044 0.55632 Advertising 1 -0.44327 Price 1 Pie Sales Advertising Price Pie Sales  
  • 14. Scatter Diagrams Sales Sales Price Advertising
  • 15. Estimating a Multiple Linear Regression Equation Computer software is generally used to generate the coefficients and measures of goodness of fit for multiple regression Excel: Data / Data Analysis / Regression PHStat: Add-Ins / PHStat / Regression / Multiple Regression…
  • 16. Estimating a Multiple Linear Regression Equation Excel: Data / Data Analysis / Regression
  • 17. Estimating a Multiple Linear Regression Equation PHStat: Add-Ins / PHStat / Regression / Multiple Regression…
  • 18. Multiple Regression Output 130.70888 17.55303 0.01449 2.85478 25.96732 74.13096 Advertising -1.37392 -48.57626 0.03979 -2.30565 10.83213 -24.97509 Price 555.46404 57.58835 0.01993 2.68285 114.25389 306.52619 Intercept Upper 95% Lower 95% P-value t Stat Standard Error Coefficients         56493.333 14 Total 2252.776 27033.306 12 Residual 0.01201 6.53861 14730.013 29460.027 2 Regression Significance F F MS SS df ANOVA   15 Observations 47.46341 Standard Error 0.44172 Adjusted R Square 0.52148 R Square 0.72213 Multiple R Regression Statistics
  • 19. The Multiple Regression Equation b 1 = -24.975 : sales will decrease, on average, by 24.975 pies per week for each $1 increase in selling price, net of the effects of changes due to advertising b 2 = 74.131 : sales will increase, on average, by 74.131 pies per week for each $100 increase in advertising, net of the effects of changes due to price where Sales is in number of pies per week Price is in $ Advertising is in $100’s.
  • 20. Using The Model to Make Predictions Predict sales for a week in which the selling price is $5.50 and advertising is $350: Predicted sales is 428.62 pies Note that Advertising is in $100’s, so $350 means that x 2 = 3.5
  • 21. Predictions in PHStat PHStat | regression | multiple regression … Check the “confidence and prediction interval estimates” box
  • 22. Input values Predictions in PHStat (continued) Predicted y value < Confidence interval for the mean y value, given these x’s < Prediction interval for an individual y value, given these x’s <
  • 23. Multiple Coefficient of Determination (R 2 ) Reports the proportion of total variation in y explained by all x variables taken together
  • 24. Multiple Coefficient of Determination 52.1% of the variation in pie sales is explained by the variation in price and advertising (continued) 130.70888 17.55303 0.01449 2.85478 25.96732 74.13096 Advertising -1.37392 -48.57626 0.03979 -2.30565 10.83213 -24.97509 Price 555.46404 57.58835 0.01993 2.68285 114.25389 306.52619 Intercept Upper 95% Lower 95% P-value t Stat Standard Error Coefficients         56493.333 14 Total 2252.776 27033.306 12 Residual 0.01201 6.53861 14730.013 29460.027 2 Regression Significance F F MS SS df ANOVA   15 Observations 47.46341 Standard Error 0.44172 Adjusted R Square 0.52148 R Square 0.72213 Multiple R Regression Statistics
  • 25. Adjusted R 2 R 2 never decreases when a new x variable is added to the model This can be a disadvantage when comparing models What is the net effect of adding a new variable? We lose a degree of freedom when a new x variable is added Did the new x variable add enough explanatory power to offset the loss of one degree of freedom?
  • 26. Shows the proportion of variation in y explained by all x variables adjusted for the number of x variables used (where n = sample size, k = number of independent variables) Penalize excessive use of unimportant independent variables Smaller than R 2 Useful in comparing among models Adjusted R 2 (continued)
  • 27. Multiple Coefficient of Determination 44.2% of the variation in pie sales is explained by the variation in price and advertising, taking into account the sample size and number of independent variables (continued) 130.70888 17.55303 0.01449 2.85478 25.96732 74.13096 Advertising -1.37392 -48.57626 0.03979 -2.30565 10.83213 -24.97509 Price 555.46404 57.58835 0.01993 2.68285 114.25389 306.52619 Intercept Upper 95% Lower 95% P-value t Stat Standard Error Coefficients         56493.333 14 Total 2252.776 27033.306 12 Residual 0.01201 6.53861 14730.013 29460.027 2 Regression Significance F F MS SS df ANOVA   15 Observations 47.46341 Standard Error 0.44172 Adjusted R Square 0.52148 R Square 0.72213 Multiple R Regression Statistics
  • 28. Is the Model Significant? F-Test for Overall Significance of the Model Shows if there is a linear relationship between all of the x variables considered together and y Use F test statistic Hypotheses: H 0 : β 1 = β 2 = … = β k = 0 (no linear relationship) H A : at least one β i ≠ 0 (at least one independent variable affects y)
  • 29. F-Test for Overall Significance Test statistic: where F has (numerator) D 1 = k and (denominator) D 2 = (n – k – 1) degrees of freedom (continued)
  • 30. F-Test for Overall Significance (continued) With 2 and 12 degrees of freedom P-value for the F-Test 130.70888 17.55303 0.01449 2.85478 25.96732 74.13096 Advertising -1.37392 -48.57626 0.03979 -2.30565 10.83213 -24.97509 Price 555.46404 57.58835 0.01993 2.68285 114.25389 306.52619 Intercept Upper 95% Lower 95% P-value t Stat Standard Error Coefficients         56493.333 14 Total 2252.776 27033.306 12 Residual 0.01201 6.53861 14730.013 29460.027 2 Regression Significance F F MS SS df ANOVA   15 Observations 47.46341 Standard Error 0.44172 Adjusted R Square 0.52148 R Square 0.72213 Multiple R Regression Statistics
  • 31. F-Test for Overall Significance (continued) H0: β1 = β2 = 0; HA: β1 and β2 not both zero. α = .05, df1 = 2, df2 = 12. Critical value: F.05 = 3.885. Test statistic: F = 6.5386. Decision: Since 6.5386 > 3.885, reject H0 at α = 0.05. Conclusion: The regression model does explain a significant portion of the variation in pie sales (there is evidence that at least one independent variable affects y)
  • 32. Are Individual Variables Significant? Use t-tests of individual variable slopes Shows if there is a linear relationship between the variable x i and y Hypotheses: H 0 : β i = 0 (no linear relationship) H A : β i ≠ 0 (linear relationship does exist between x i and y)
  • 33. Are Individual Variables Significant? (continued) H0: βi = 0 (no linear relationship); HA: βi ≠ 0 (linear relationship does exist between xi and y). Test statistic: t = bi / sbi, where sbi is the estimated standard error of the slope (df = n – k – 1)
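The slope t statistic can be sketched directly from a coefficient and its standard error; the numbers below are the Price row of the pie-sales output.

```python
def t_statistic(b, s_b, beta_null=0.0):
    """t = (b - beta_null) / s_b, compared to t with n - k - 1 df."""
    return (b - beta_null) / s_b

# Price: b = -24.97509, standard error = 10.83213 (pie-sales output)
print(round(t_statistic(-24.97509, 10.83213), 3))  # -2.306
```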
  • 34. Are Individual Variables Significant? (continued) From the coefficients portion of the output (the full output appears on slide 27): the t-value for Price is t = -2.306, with p-value .0398; the t-value for Advertising is t = 2.855, with p-value .0145.

              Coefficients   Standard Error    t Stat    P-value
Price           -24.97509        10.83213   -2.30565    0.03979
Advertising      74.13096        25.96732    2.85478    0.01449
  • 35. Inferences about the Slope: t Test Example H0: βi = 0; HA: βi ≠ 0. d.f. = 15 – 2 – 1 = 12, α = .05, tα/2 = 2.1788. From the Excel output:

              Coefficients   Standard Error    t Stat    P-value
Price           -24.97509        10.83213   -2.30565    0.03979
Advertising      74.13096        25.96732    2.85478    0.01449

Decision: Reject H0 for each variable; the test statistic for each falls in the rejection region (both p-values < .05). Conclusion: There is evidence that both Price and Advertising affect pie sales at α = .05
  • 36. Confidence Interval Estimate for the Slope Confidence interval for the population slope β1 (the effect of changes in price on pie sales): bi ± tα/2 · sbi, where t has (n – k – 1) d.f.

              Coefficients   Standard Error   Lower 95%   Upper 95%
Intercept       306.52619       114.25389     57.58835   555.46404
Price           -24.97509        10.83213    -48.57626    -1.37392
Advertising      74.13096        25.96732     17.55303   130.70888

Example: Weekly sales are estimated to be reduced by between 1.37 and 48.58 pies for each $1 increase in the selling price
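The Price interval can be reproduced from the output values; t = 2.1788 is the .025 critical value with 12 df (a slightly more precise t would match Excel's bounds exactly).

```python
def slope_ci(b, s_b, t_crit):
    """Confidence interval for a slope: b +/- t_{alpha/2} * s_b."""
    half_width = t_crit * s_b
    return b - half_width, b + half_width

# Price: b = -24.97509, s_b = 10.83213, t_.025 with 12 df = 2.1788
lo, hi = slope_ci(-24.97509, 10.83213, 2.1788)
print(round(lo, 2), round(hi, 2))  # -48.58 -1.37
```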
  • 37. Standard Deviation of the Regression Model The estimate of the standard deviation of the regression model is sε = √(SSE / (n – k – 1)). Is this value large or small? It must be compared to the typical size of y to judge
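Using the SSE from the pie-sales ANOVA table, the estimate works out to the 47.46 shown on the next slide:

```python
import math

def regression_std_error(sse, n, k):
    """s_e = sqrt(SSE / (n - k - 1)), the model's standard error."""
    return math.sqrt(sse / (n - k - 1))

s_e = regression_std_error(27033.306, 15, 2)  # pie-sales: SSE = 27033.306
print(round(s_e, 2))        # 47.46
print(round(2 * s_e))       # rough +/- prediction half-width: 95 pies
```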
  • 38. Standard Deviation of the Regression Model (continued) The standard deviation of the regression model is 47.46:

Regression Statistics
  Multiple R           0.72213
  R Square             0.52148
  Adjusted R Square    0.44172
  Standard Error      47.46341
  Observations        15
  • 39. Standard Deviation of the Regression Model (continued) The standard deviation of the regression model is 47.46, so a rough prediction range for pie sales in a given week is ŷ ± 2(47.46), or about ±95 pies. Pie sales in the sample were in the 300 to 500 per week range, so this range is probably too large to be acceptable. The analyst may want to look for additional variables that can explain more of the variation in weekly sales
  • 40. Multicollinearity Multicollinearity: High correlation exists between two independent variables This means the two variables contribute redundant information to the multiple regression model
  • 41. Multicollinearity Including two highly correlated independent variables can adversely affect the regression results No new information provided Can lead to unstable coefficients (large standard error and low t-values) Coefficient signs may not match prior expectations (continued)
  • 42. Some Indications of Severe Multicollinearity Incorrect signs on the coefficients Large change in the value of a previous coefficient when a new variable is added to the model A previously significant variable becomes insignificant when a new independent variable is added The estimate of the standard deviation of the model increases when a variable is added to the model
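The slide above lists informal symptoms; a common numeric check (not covered in these slides) is the variance inflation factor. A minimal numpy sketch with made-up data, where the second variable is nearly a copy of the first:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 is
    from regressing column j on the remaining columns. Large values
    (rules of thumb: above 5 or 10) indicate severe multicollinearity."""
    n, p = X.shape
    factors = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        factors.append(1 / (1 - r2))
    return factors

# Made-up example: x2 is nearly a copy of x1, so both VIFs come out large
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = x1 + rng.normal(scale=0.1, size=50)
v = vif(np.column_stack([x1, x2]))
print([round(val, 1) for val in v])
```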
  • 43. Qualitative (Dummy) Variables Categorical explanatory variable (dummy variable) with two or more levels: yes or no, on or off, male or female coded as 0 or 1 Regression intercepts are different if the variable is significant Assumes equal slopes for other variables The number of dummy variables needed is (number of levels – 1)
  • 44. Dummy-Variable Model Example (with 2 Levels) Let: y = pie sales; x1 = price; x2 = holiday (x2 = 1 if a holiday occurred during the week, x2 = 0 if there was no holiday that week)
  • 45. Dummy-Variable Model Example (with 2 Levels) (continued) Plotting sales (y) against price (x1) gives two parallel lines: the holiday line has intercept b0 + b2 and the no-holiday line has intercept b0, with the same slope for both (different intercepts, same slope). If H0: β2 = 0 is rejected, then “Holiday” has a significant effect on pie sales
  • 46. Interpreting the Dummy Variable Coefficient (with 2 Levels) Example: Sales = number of pies sold per week; Price = pie price in $; Holiday = 1 if a holiday occurred during the week, 0 if no holiday occurred. b2 = 15: on average, sales were 15 pies greater in weeks with a holiday than in weeks without a holiday, given the same price
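A sketch of the interpretation above: b2 = 15 comes from the slide, while the intercept and price slope below are hypothetical placeholders. At any fixed price, the holiday and no-holiday predictions differ by exactly b2.

```python
def predict_sales(price, holiday, b0=300.0, b1=-20.0, b2=15.0):
    """Pie-sales model with a 0/1 holiday dummy.
    b2 = 15 is the slide's value; b0 and b1 are hypothetical placeholders."""
    return b0 + b1 * price + b2 * holiday

# Same price, holiday vs. no holiday: the predictions differ by b2 = 15 pies
diff = predict_sales(5.0, 1) - predict_sales(5.0, 0)
print(diff)  # 15.0
```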
  • 47. Dummy-Variable Models (more than 2 Levels) The number of dummy variables is one less than the number of levels Example: y = house price ; x 1 = square feet The style of the house is also thought to matter: Style = ranch, split level, condo Three levels, so two dummy variables are needed
  • 48. Dummy-Variable Models (more than 2 Levels) (continued) Let the default category be “condo,” with x2 = 1 for a ranch and x3 = 1 for a split level. b2 shows the impact on price if the house is a ranch style, compared to a condo; b3 shows the impact on price if the house is a split-level style, compared to a condo
  • 49. Interpreting the Dummy Variable Coefficients (with 3 Levels) Suppose the estimated equation is ŷ = b0 + b1x1 + 23.53x2 + 18.84x3. For a condo: x2 = x3 = 0. For a ranch: x2 = 1, x3 = 0; with the same square feet, a ranch will have an estimated average price 23.53 thousand dollars higher than a condo. For a split level: x2 = 0, x3 = 1; with the same square feet, a split level will have an estimated average price 18.84 thousand dollars higher than a condo
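The three-level interpretation can be sketched the same way; the dummy coefficients 23.53 and 18.84 come from the slide, while b0 and b1 below are hypothetical placeholders.

```python
def predict_price(sqft, style, b0=50.0, b1=0.05):
    """House price (in $1000s) with two dummies for three styles; condo is
    the default category. 23.53 and 18.84 are the slide's coefficients;
    b0 and b1 are hypothetical placeholders."""
    x2 = 1 if style == "ranch" else 0
    x3 = 1 if style == "split level" else 0
    return b0 + b1 * sqft + 23.53 * x2 + 18.84 * x3

# With the same square feet, the differences equal the dummy coefficients:
print(round(predict_price(2000, "ranch") - predict_price(2000, "condo"), 2))        # 23.53
print(round(predict_price(2000, "split level") - predict_price(2000, "condo"), 2))  # 18.84
```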
  • 50. Model Building Goal is to develop a model with the best set of independent variables: easier to interpret if unimportant variables are removed, and lower probability of collinearity. Stepwise regression procedure: provides evaluation of alternative models as variables are added. Best-subsets approach: try all combinations and select the best, using the highest adjusted R² and lowest sε
  • 51. Idea: develop the least squares regression equation in steps, either through forward selection , backward elimination , or through standard stepwise regression Stepwise Regression
  • 52. Best Subsets Regression Idea: estimate all possible regression equations using all possible combinations of independent variables Choose the best fit by looking for the highest adjusted R 2 and lowest standard error s ε Stepwise regression and best subsets regression can be performed using PHStat, Minitab, or other statistical software packages
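A minimal best-subsets sketch with numpy and made-up data: fit every combination of candidate x's and keep the one with the highest adjusted R².

```python
import itertools
import numpy as np

def best_subset(X, y, names):
    """Try every non-empty combination of columns of X and return the
    (adjusted R^2, column names) pair with the highest adjusted R^2."""
    n = len(y)
    sst = ((y - y.mean()) ** 2).sum()
    best = (-np.inf, None)
    for r in range(1, X.shape[1] + 1):
        for cols in itertools.combinations(range(X.shape[1]), r):
            Z = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            sse = resid @ resid
            adj = 1 - (sse / (n - r - 1)) / (sst / (n - 1))
            if adj > best[0]:
                best = (adj, [names[c] for c in cols])
    return best

# Made-up data: y depends on x1 and x2 but not on the pure-noise x3
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=60)
adj_r2, chosen = best_subset(X, y, ["x1", "x2", "x3"])
print(chosen)
```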
  • 53. Aptness of the Model Diagnostic checks on the model include verifying the assumptions of multiple regression: Errors are independent and random; Errors are normally distributed; Errors have constant variance; Each xi is linearly related to y. Errors (or residuals) are given by ei = (yi – ŷi)
  • 54. Residual Analysis Plot the residuals against each x: a random, constant-width band indicates constant variance and independent errors (assumptions satisfied); a funnel shape indicates non-constant variance; a systematic pattern over x indicates the errors are not independent (assumptions violated)
  • 55. The Normality Assumption Errors are assumed to be normally distributed Standardized residuals can be calculated by computer Examine a histogram or a normal probability plot of the standardized residuals to check for normality
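A sketch of the standardized-residual check with made-up fitted values; here residuals are simply scaled by the model standard error (software packages often use a leverage-adjusted version). Under normality, roughly 95% should fall within ±2.

```python
import numpy as np

def standardized_residuals(y, y_hat, k):
    """Residuals e_i = y_i - y_hat_i divided by s_e = sqrt(SSE/(n-k-1)).
    (A simple standardization; packages often leverage-adjust each one.)"""
    e = np.asarray(y) - np.asarray(y_hat)
    n = len(e)
    s_e = np.sqrt((e @ e) / (n - k - 1))
    return e / s_e

# Made-up example: normal errors around hypothetical fitted values
rng = np.random.default_rng(2)
y_hat = rng.uniform(300, 500, size=200)
y = y_hat + rng.normal(scale=40, size=200)
z = standardized_residuals(y, y_hat, k=2)
frac_within_2 = np.mean(np.abs(z) < 2)
print(round(frac_within_2, 2))
```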
  • 56. Chapter Summary Developed the multiple regression model Tested the significance of the multiple regression model Developed adjusted R 2 Tested individual regression coefficients Used dummy variables
  • 57. Chapter Summary Described multicollinearity Discussed model building Stepwise regression Best subsets regression Examined residual plots to check model assumptions (continued)