1
The Nature of Autocorrelation
• The randomness of the sample implies that the error terms for different observations will be uncorrelated.
• When we have time-series data, where the observations follow a natural ordering through time, there is always a possibility that successive errors will be correlated with each other.
• In any one period, the current error term contains not only the effects of current shocks but also the carryover from previous shocks. This carryover will be related to, or correlated with, the effects of the earlier shocks. When circumstances such as these lead to error terms that are correlated, we say that autocorrelation exists.
• The possibility of autocorrelation should always be entertained when we are dealing with time-series data.
2
For efficiency (accurate estimation / prediction), all systematic information needs to be incorporated into the regression model.
Autocorrelation is a systematic pattern in the errors, which can be either attracting (positive autocorrelation) or repelling (negative autocorrelation).
3
[Figure: three scatter plots of the errors et against time t, each with a horizontal line at et = 0]
Positive Auto.: the errors cross the zero line not enough (attracting).
No Auto.: the errors cross the zero line randomly.
Negative Auto.: the errors cross the zero line too much (repelling).
4
Regression Model: yt = β1 + β2xt + et
zero mean: E(et) = 0
homoskedasticity: var(et) = σ²
nonautocorrelation: cov(et, es) = 0 for t ≠ s
autocorrelation: cov(et, es) ≠ 0 for t ≠ s
5
Order of Autocorrelation
yt = β1 + β2xt + et
1st Order: et = ρet−1 + νt
2nd Order: et = ρ1et−1 + ρ2et−2 + νt
3rd Order: et = ρ1et−1 + ρ2et−2 + ρ3et−3 + νt
We will assume First Order Autocorrelation, AR(1):
et = ρet−1 + νt
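To make the AR(1) error process concrete, here is a minimal simulation sketch in Python (NumPy assumed; the function name simulate_ar1_errors and all parameter values are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1_errors(T, rho, sigma_nu=1.0):
    """Simulate e_t = rho * e_{t-1} + nu_t with nu_t ~ N(0, sigma_nu^2).

    Requires |rho| < 1 so the process is stationary.
    """
    nu = rng.normal(0.0, sigma_nu, T)
    e = np.empty(T)
    # Start e_0 at its stationary distribution: var = sigma_nu^2 / (1 - rho^2).
    e[0] = rng.normal(0.0, sigma_nu / np.sqrt(1.0 - rho**2))
    for t in range(1, T):
        e[t] = rho * e[t - 1] + nu[t]
    return e

e_pos = simulate_ar1_errors(100, rho=0.8)    # attracting: few zero crossings
e_neg = simulate_ar1_errors(100, rho=-0.8)   # repelling: many zero crossings
```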
6
First Order Autocorrelation
yt = β1 + β2xt + et
et = ρet−1 + νt  where −1 < ρ < 1
E(νt) = 0,  var(νt) = σν²,  cov(νt, νs) = 0 for t ≠ s
These assumptions about νt imply the following about et:
E(et) = 0
var(et) = σe² = σν² / (1 − ρ²)
cov(et, et−k) = σe² ρᵏ for k > 0
corr(et, et−k) = ρᵏ for k > 0
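These implied moments are easy to check numerically. A short sketch building on simulate_ar1_errors from the previous block (the parameter values are chosen only for illustration):

```python
# Compare simulated moments of e_t with the formulas on this slide.
rho, sigma_nu = 0.7, 1.0
e = simulate_ar1_errors(200_000, rho, sigma_nu)

print(e.var(), sigma_nu**2 / (1 - rho**2))   # var(e_t) ~ sigma_nu^2 / (1 - rho^2)
for k in (1, 2, 3):
    corr_k = np.corrcoef(e[k:], e[:-k])[0, 1]
    print(k, corr_k, rho**k)                 # corr(e_t, e_{t-k}) ~ rho^k
```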
7
Autocorrelation creates some problems for least squares:
If we have an equation whose errors exhibit autocorrelation, but we ignore it, or are simply unaware of it, what effect does this have on the properties of the least squares estimates?
1. The least squares estimator is still linear and unbiased, but it is not efficient.
2. The formulas normally used to compute the least squares standard errors are no longer correct, and confidence intervals and hypothesis tests using them will be wrong.
8
yt = β1 + β2xt + et
Autocorrelation: E(et) = 0, var(et) = σe², cov(et, es) ≠ 0 for t ≠ s
The least squares estimator of β2 remains linear and unbiased:
b2 = Σ wt yt = β2 + Σ wt et   (Linear)
where wt = (xt − x̄) / Σ(xt − x̄)²
E(b2) = E(β2 + Σ wt et) = β2 + Σ wt E(et) = β2   (Unbiased)
9
yt = β1 + β2xt + et
Autocorrelation: cov(et, es) ≠ 0 for t ≠ s
Incorrect formula for least squares variance:
var(b2) = σ² / Σ(xt − x̄)²
Correct formula for least squares variance:
var(b2) = var(Σ wt et) = Σ wt² var(et) + Σ Σt≠s wt ws cov(et, es)
= σe² Σ wt² + Σ Σt≠s wt ws cov(et, es)
10
Generalized Least Squares
yt = β1 + β2xt + et
AR(1): et = ρet−1 + νt
Substitute in for et:
yt = β1 + β2xt + ρet−1 + νt
Now we need to get rid of et−1. (continued)
11
yt = β1 + β2xt + et
yt = β1 + β2xt + ρet−1 + νt
et = yt − β1 − β2xt
Lag the errors once: et−1 = yt−1 − β1 − β2xt−1
yt = β1 + β2xt + ρ(yt−1 − β1 − β2xt−1) + νt
(continued)
12
yt = β1 + β2xt + ρ(yt−1 − β1 − β2xt−1) + νt
yt − ρyt−1 = β1(1 − ρ) + β2(xt − ρxt−1) + νt
yt* = β1 xt1* + β2 xt2* + νt,   t = 2, 3, …, T
where yt* = yt − ρyt−1,  xt1* = (1 − ρ),  xt2* = (xt − ρxt−1).
13
yt* = β1 xt1* + β2 xt2* + νt,  where yt* = yt − ρyt−1, xt1* = (1 − ρ), xt2* = xt − ρxt−1
Problems estimating this model with least squares:
1. One observation is used up in creating the transformed (lagged) variables, leaving only (T − 1) observations for estimating the model (the Cochrane-Orcutt method drops the first observation).
2. The value of ρ is not known. We must find some way to estimate it.
14
(Optional) Recovering the 1st Observation
Dropping the 1st observation and applying least squares
is not the best linear unbiased estimation method.
Efficiency is lost because the variance of the
error associated with the 1st observation is not
equal to that of the other errors.
This is a special case of the heteroskedasticity
problem except that here all errors are assumed
to have equal variance except the 1st error.
15
Recovering the 1st Observation
The 1st observation should fit the original model as:
y1 = β1 + β2x1 + e1
with error variance var(e1) = σe² = σν² / (1 − ρ²).
Note: the other observations all have error variance σν².
We could include this as the 1st observation for our estimation procedure, but we must first transform it so that it has the same error variance as the other observations.
16
y1 = β1 + β2x1 + e1  with error variance var(e1) = σe² = σν² / (1 − ρ²).
The other observations all have error variance σν².
Given any constant c: var(ce1) = c² var(e1).
If c = √(1 − ρ²), then:
var(√(1 − ρ²) e1) = (1 − ρ²) var(e1) = (1 − ρ²) σe² = (1 − ρ²) σν² / (1 − ρ²) = σν²
The transformed error ν1 = √(1 − ρ²) e1 has variance σν².
17
y1 = β1 + β2x1 + e1
The transformed error ν1 = √(1 − ρ²) e1 has variance σν².
Multiply through by √(1 − ρ²) to get:
√(1 − ρ²) y1 = √(1 − ρ²) β1 + √(1 − ρ²) β2x1 + √(1 − ρ²) e1
This transformed first observation may now be added to the other (T − 1) observations to obtain the fully restored set of T observations.
18
We can summarize these results by saying that, provided ρ is known, we can find the Best Linear Unbiased Estimator for β1 and β2 by applying least squares to the transformed model
yt* = β1 xt1* + β2 xt2* + νt,   t = 1, 2, 3, …, T
where the transformed variables are defined by
y1* = √(1 − ρ²) y1,  x11* = √(1 − ρ²),  x12* = √(1 − ρ²) x1
for the first observation, and
yt* = yt − ρyt−1,  xt1* = 1 − ρ,  xt2* = xt − ρxt−1
for the remaining t = 2, 3, …, T observations.
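For known ρ, this transformation is mechanical. A minimal sketch (the function name gls_transform is illustrative, not a library API; it implements exactly the definitions above):

```python
import numpy as np

def gls_transform(y, x, rho):
    """Transform (y, x) for known rho so least squares on the result is BLUE.

    First observation: scaled by sqrt(1 - rho^2).
    Remaining observations: quasi-differenced, e.g. y*_t = y_t - rho * y_{t-1}.
    Returns (y*, x1*, x2*), where x1* is the transformed intercept column.
    """
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    r = np.sqrt(1.0 - rho**2)
    y_star = np.concatenate(([r * y[0]], y[1:] - rho * y[:-1]))
    x1_star = np.concatenate(([r], np.full(len(y) - 1, 1.0 - rho)))
    x2_star = np.concatenate(([r * x[0]], x[1:] - rho * x[:-1]))
    return y_star, x1_star, x2_star

# Least squares on the transformed data (note: no additional intercept column):
# y_s, x1_s, x2_s = gls_transform(y, x, rho)
# b1, b2 = np.linalg.lstsq(np.column_stack([x1_s, x2_s]), y_s, rcond=None)[0]
```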
19
Estimating an Unknown ρ Value
If we had values for the et, we could estimate: et = ρet−1 + νt
First, use least squares to estimate the model: yt = β1 + β2xt + et
The residuals from this estimation are: êt = yt − b1 − b2xt
20
êt = yt − b1 − b2xt
Next, estimate the following by least squares:
êt = ρêt−1 + ν̂t
The least squares solution is:
ρ̂ = ( Σt=2..T êt êt−1 ) / ( Σt=2..T êt−1² )
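In code, this two-step estimate of ρ is a few lines. A sketch (estimate_rho is an illustrative name):

```python
import numpy as np

def estimate_rho(y, x):
    """OLS residuals first, then regress e_hat_t on e_hat_{t-1} (no intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    ehat = y - X @ b
    # rho_hat = sum_{t=2}^T e_t * e_{t-1}  /  sum_{t=2}^T e_{t-1}^2
    return (ehat[1:] @ ehat[:-1]) / (ehat[:-1] @ ehat[:-1])
```

The resulting ρ̂ can be plugged into gls_transform above to give a feasible (estimated) GLS estimator; iterating the two steps gives a Cochrane-Orcutt style procedure.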
21
Durbin-Watson Test
The Durbin-Watson test is by far the most important one for detecting AR(1) errors.
It is assumed that the νt are independent random errors with distribution N(0, σν²).
The assumption of normally distributed random errors is needed to derive the probability distribution of the test statistic used in the Durbin-Watson test.
22
The Durbin-Watson test statistic, d, is:
d = ( Σt=2..T (êt − êt−1)² ) / ( Σt=1..T êt² )
For a null hypothesis of no autocorrelation, we can use H0: ρ = 0.
For an alternative hypothesis we could use H1: ρ > 0, H1: ρ < 0, or H1: ρ ≠ 0.
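Computing d from the least squares residuals is direct. A minimal sketch (statsmodels also ships a comparable durbin_watson helper in statsmodels.stats.stattools, if that library is available):

```python
import numpy as np

def durbin_watson_d(ehat):
    """d = sum_{t=2}^T (e_t - e_{t-1})^2 / sum_{t=1}^T e_t^2."""
    ehat = np.asarray(ehat, dtype=float)
    return np.sum(np.diff(ehat)**2) / np.sum(ehat**2)
```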
23
Testing for Autocorrelation
The test statistic d is approximately related to ρ̂ as:
d ≈ 2(1 − ρ̂),  so 0 ≤ d ≤ 4.
When ρ̂ = 0, the Durbin-Watson statistic is d ≈ 2.
When ρ̂ = 1, the Durbin-Watson statistic is d ≈ 0.
When ρ̂ = −1, the Durbin-Watson statistic is d ≈ 4.
Tables of critical values for d are not always readily available, so it is easier to use the p-value that most computer programs provide for d. Reject H0 if p-value < α, the significance level.
24
Test for first-order autocorrelation:

              Reject H0               Inconclusive                          Do not reject H0
H1: ρ > 0     d < dL                  dL < d < dU                           d > dU
H1: ρ < 0     d > 4 − dL              4 − dU < d < 4 − dL                   d < 4 − dU
H1: ρ ≠ 0     d < dL or d > 4 − dL    dL < d < dU or 4 − dU < d < 4 − dL    dU < d < 4 − dU

Note: The lower and upper bounds (dL and dU) depend on the sample size n and the number of explanatory variables k (not including the intercept).
25
[Figure: decision regions for the Durbin-Watson statistic d on the interval 0 to 4]
A. Test for Positive Autocorrelation: reject H0 (evidence that ρ > 0) when d < dL; inconclusive when dL < d < dU; no evidence of positive autocorrelation when d > dU.
B. Test for Negative Autocorrelation: no evidence of negative autocorrelation when d < 4 − dU; inconclusive when 4 − dU < d < 4 − dL; reject H0 (evidence that ρ < 0) when d > 4 − dL.
C. Two-Sided Test for Autocorrelation: reject H0 when d < dL or d > 4 − dL; inconclusive when dL < d < dU or 4 − dU < d < 4 − dL; no evidence of autocorrelation when dU < d < 4 − dU.
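The decision regions in panels A-C translate directly into a lookup. A sketch (dL and dU must come from published Durbin-Watson tables; the function name dw_decision is illustrative):

```python
def dw_decision(d, dL, dU, alternative="positive"):
    """Bounds test for the Durbin-Watson statistic d on [0, 4]."""
    if alternative == "positive":            # H1: rho > 0 (panel A)
        if d < dL:
            return "reject H0"
        return "do not reject H0" if d > dU else "inconclusive"
    if alternative == "negative":            # H1: rho < 0 (panel B)
        if d > 4 - dL:
            return "reject H0"
        return "do not reject H0" if d < 4 - dU else "inconclusive"
    # two-sided (panel C): H1: rho != 0
    if d < dL or d > 4 - dL:
        return "reject H0"
    return "do not reject H0" if dU < d < 4 - dU else "inconclusive"
```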
26
Prediction with AR(1) Errors
When errors are autocorrelated, the previous period's error may help us predict the next period's error.
The best predictor, ŷT+1, for the next period is:
ŷT+1 = β̂1 + β̂2xT+1 + ρ̂ ẽT
where β̂1 and β̂2 are generalized least squares estimates and ẽT is given by:
ẽT = yT − β̂1 − β̂2xT
27
For h periods ahead, the best predictor is:
ŷT+h = β̂1 + β̂2xT+h + ρ̂ʰ ẽT
Assuming |ρ| < 1, the influence of ρ̂ʰ ẽT diminishes the further we go into the future (the larger h becomes).
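As a closing sketch, the h-step predictor can be written in one short function (the name predict_ar1 and the NumPy dependency are assumptions; b1, b2, and rho would come from the GLS steps above):

```python
import numpy as np

def predict_ar1(b1, b2, rho, x_future, e_T):
    """y_hat(T+h) = b1 + b2 * x(T+h) + rho**h * e_T for h = 1, ..., len(x_future)."""
    x_future = np.asarray(x_future, dtype=float)
    h = np.arange(1, len(x_future) + 1)
    return b1 + b2 * x_future + rho**h * e_T   # rho**h shrinks toward 0 as h grows
```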