405 ECONOMETRICS
Chapter # 13: AUTOCORRELATION: WHAT HAPPENS IF
THE ERROR TERMS ARE CORRELATED?
By Damodar N. Gujarati
Prof. M. El-Sakka
Dept of Economics Kuwait University
• In this chapter we take a critical look at the following questions:
• 1. What is the nature of autocorrelation?
• 2. What are the theoretical and practical consequences of autocorrelation?
• 3. Since the assumption of no autocorrelation relates to the unobservable
disturbances ut, how does one know that there is autocorrelation in any given
situation? Notice that we now use the subscript t to emphasize that we are
dealing with time series data.
• 4. How does one remedy the problem of autocorrelation?
• THE NATURE OF THE PROBLEM
• Autocorrelation may be defined as “correlation between members of series of observations ordered in time [as in time series data] or space [as in cross-sectional data].” The CLRM assumes that:
• E(uiuj ) = 0 i ≠ j (3.2.5)
• Put simply, the classical model assumes that the disturbance term relating
to any observation is not influenced by the disturbance term relating to any
other observation.
• For example, if we are dealing with quarterly time series data involving the
regression of output on labor and capital inputs and if, say, there is a labor
strike affecting output in one quarter, there is no reason to believe that this
disruption will be carried over to the next quarter. That is, if output is lower
this quarter, there is no reason to expect it to be lower next quarter.
Similarly, if we are dealing with cross-sectional data involving the
regression of family consumption expenditure on family income, the effect of
an increase of one family’s income on its consumption expenditure is not
expected to affect the consumption expenditure of another family.
• However, if there is autocorrelation,
• E(uiuj ) ≠ 0 i ≠ j (12.1.1)
• In this situation, the disruption caused by a strike this quarter may very
well affect output next quarter, or the increases in the consumption
expenditure of one family may very well prompt another family to increase
its consumption expenditure.
• Consider Figure 12.1: panels 12.1a to d show that there is a discernible pattern among the u’s. Figure 12.1a shows a cyclical pattern; Figures 12.1b and c suggest an upward or downward linear trend in the disturbances; whereas
Figure 12.1d indicates that both linear and quadratic trend terms are present
in the disturbances. Only Figure 12.1e indicates no systematic pattern,
supporting the non-autocorrelation assumption of the classical linear
regression model.
• Why does serial correlation occur? There are several reasons:
1. Inertia. As is well known, time series such as GNP, price indexes, production,
employment, and unemployment exhibit (business) cycles. Starting at the
bottom of the recession, when economic recovery starts, most of these series
start moving upward. In this upswing, the value of a series at one point in
time is greater than its previous value. Thus there is a “momentum’’ built
into them, and it continues until something happens (e.g., increase in
interest rate or taxes or both) to slow them down.
2. Specification Bias: Excluded Variables Case. In empirical analysis the
researcher often starts with a plausible regression model that may not be
the most “perfect’’ one. For example, the researcher may plot the residuals
ˆui obtained from the fitted regression and may observe patterns such as those
shown in Figure 12.1a to d. These residuals (which are proxies for ui) may
suggest that some variables that were originally candidates but were not
included in the model for a variety of reasons should be included. Often the
inclusion of such variables removes the correlation pattern observed among
the residuals.
• For example, suppose we have the following demand model:
• Yt = β1 + β2X2t + β3X3t + β4X4t + ut (12.1.2)
• where Y = quantity of beef demanded, X2 = price of beef, X3 = consumer
• income, X4 = price of poultry, and t = time. However, for some reason we run
• the following regression:
• Yt = β1 + β2X2t + β3X3t + vt (12.1.3)
• Now if (12.1.2) is the “correct’’ model or the “truth’’ or true relation,
running (12.1.3) is tantamount to letting vt = β4X4t + ut. And to the extent the
• price of poultry affects the consumption of beef, the error or disturbance
term v will reflect a systematic pattern, thus creating (false) autocorrelation.
• A simple test of this would be to run both (12.1.2) and (12.1.3) and see
whether autocorrelation, if any, observed in model (12.1.3) disappears when
(12.1.2) is run.
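• To see how an excluded variable can masquerade as autocorrelation, here is a small simulation sketch (all parameter values are hypothetical, chosen only for illustration): the true model contains a serially correlated regressor X4, but we estimate the equation without it, and the residuals inherit the omitted variable’s pattern.

```python
import numpy as np

# Hypothetical illustration: true model Y = 1 + 0.5*X2 + 2*X4 + u, but we
# regress Y on X2 alone. X4 is itself serially correlated (AR(1) with 0.9),
# so the residuals of the misspecified regression inherit its pattern.
rng = np.random.default_rng(0)
T = 200
X2 = rng.normal(size=T)
X4 = np.zeros(T)
for t in range(1, T):
    X4[t] = 0.9 * X4[t - 1] + rng.normal()   # omitted, autocorrelated regressor
u = rng.normal(size=T)                        # the true disturbance is white noise
Y = 1.0 + 0.5 * X2 + 2.0 * X4 + u

# OLS of Y on X2 only: the residuals absorb 2*X4 + u
Xmat = np.column_stack([np.ones(T), X2])
beta, *_ = np.linalg.lstsq(Xmat, Y, rcond=None)
resid = Y - Xmat @ beta

# First-order autocorrelation of the residuals is large and positive,
# even though u itself is not autocorrelated
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(round(r1, 2))
```

• Re-running the regression with X4 included, as the test described above suggests, would make this residual autocorrelation disappear.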
3. Specification Bias: Incorrect Functional Form. Suppose the “true’’ or correct
model in a cost-output study is as follows:
• Marginal costi = β1 + β2 outputi + β3 output²i + ui (12.1.4)
• but we fit the following model:
• Marginal costi = α1 + α2 outputi + vi (12.1.5)
• The marginal cost curve corresponding to the “true’’ model is shown in
Figure 12.2 along with the “incorrect’’ linear cost curve.
• As Figure 12.2 shows, between points A and B the linear marginal cost curve
will consistently overestimate the true marginal cost, whereas beyond these
points it will consistently underestimate the true marginal cost. This result
is to be expected, because the disturbance term vi is, in fact, equal to β3 output²i + ui , and hence will catch the systematic effect of the output² term on marginal cost. In this case, vi will reflect autocorrelation because of the use of
an incorrect functional form.
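• The same point can be checked numerically. In this sketch (the cost parameters are made up) we generate marginal cost from a quadratic and fit the “incorrect” linear model: the residuals come out negative in the middle of the output range, where the line overestimates true cost, and positive at the extremes, mirroring the A-to-B pattern of Figure 12.2.

```python
import numpy as np

# Hypothetical U-shaped marginal cost: MC = 20 - 4*output + 0.5*output^2
rng = np.random.default_rng(1)
output = np.linspace(1, 10, 50)
mc = 20 - 4 * output + 0.5 * output**2 + rng.normal(scale=0.5, size=50)

# Fit the "incorrect" linear model MC = a1 + a2*output + v
slope, intercept = np.polyfit(output, mc, deg=1)
resid = mc - (intercept + slope * output)

# Residuals ordered by output: positive at low output, negative in the
# middle, positive again at high output -- a systematic pattern, not noise
print(round(resid[:5].mean(), 1),
      round(resid[20:30].mean(), 1),
      round(resid[-5:].mean(), 1))
```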
4. Cobweb Phenomenon. The supply of many agricultural commodities reflects
the so-called cobweb phenomenon, where supply reacts to price with a lag
of one time period because supply decisions take time to implement (the
gestation period). Thus, at the beginning of this year’s planting of crops,
farmers are influenced by the price prevailing last year, so that their supply
function is
• Supplyt = β1 + β2Pt−1 + ut (12.1.6)
• Suppose at the end of period t, price Pt turns out to be lower than Pt−1. Therefore, in period t + 1 farmers may very well decide to produce less than they did in period t. Obviously, in this situation the disturbances ut are not expected
to be random because if the farmers overproduce in year t, they are likely to
reduce their production in t + 1, and so on, leading to a Cobweb pattern.
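• A deterministic sketch of the cobweb (the demand and supply parameters here are hypothetical): because supply responds to last year’s price, price deviations from equilibrium flip sign every period, which is exactly the signature of negative serial correlation.

```python
import numpy as np

# Hypothetical cobweb model:
#   demand:  Q_t = 100 - 2*P_t
#   supply:  Q_t = 10 + 1.5*P_{t-1}
# Market clearing gives P_t = (90 - 1.5*P_{t-1}) / 2
P = [20.0]
for _ in range(30):
    P.append((90 - 1.5 * P[-1]) / 2)
P = np.array(P)

# Deviations from the equilibrium price alternate in sign and shrink,
# since the adjustment coefficient -0.75 has absolute value below one
P_star = 90 / 3.5
dev = P - P_star
r1 = np.corrcoef(dev[:-1], dev[1:])[0, 1]
print(round(r1, 2))   # -1.0 in this noiseless sketch: perfect negative correlation
```

• With a random supply shock added, the correlation would be negative but larger than −1; the overproduce-then-underproduce alternation is the point.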
5. Lags. In a time series regression of consumption expenditure on income, it is
not uncommon to find that the consumption expenditure in the current
period depends, among other things, on the consumption expenditure of the
previous period. That is,
• Consumptiont = β1 + β2 incomet + β3 consumptiont−1 + ut (12.1.7)
• A regression such as (12.1.7) is known as autoregression because one of the
explanatory variables is the lagged value of the dependent variable. The
rationale for a model such as (12.1.7) is simple. Consumers do not change
their consumption habits readily for psychological, technological, or
institutional reasons. Now if we neglect the lagged term in (12.1.7), the
resulting error term will reflect a systematic pattern due to the influence of
lagged consumption on current consumption.
6. “Manipulation’’ of Data. In empirical analysis, the raw data are often
“manipulated.’’ For example, in time series regressions involving quarterly
data, such data are usually derived from the monthly data by simply adding
three monthly observations and dividing the sum by 3. This averaging
introduces smoothness into the data by dampening the fluctuations in the
monthly data. Therefore, the graph plotting the quarterly data looks much
smoother than the monthly data, and this smoothness may itself impart a systematic pattern to the disturbances, thereby introducing autocorrelation.
• Another source of manipulation is interpolation or extrapolation of data. For
example, the Census of Population in the United States is conducted every 10 years, the last being in 2000 and the one before that in 1990. Now if there
is a need to obtain data for some year within the intercensus period 1990–
2000, the common practice is to interpolate on the basis of some ad hoc
assumptions. All such data “massaging’’ techniques might impose upon the
data a systematic pattern that might not exist in the original data.
7. Data Transformation. Consider the following model:
• Yt = β1 + β2Xt + ut (12.1.8)
• where, say, Y = consumption expenditure and X = income. Since (12.1.8)
holds true at every time period, it holds true also in the previous time
period, (t − 1). So, we can write (12.1.8) as:
• Yt−1 = β1 + β2Xt−1 + ut−1 (12.1.9)
• Yt−1, Xt−1, and ut−1 are known as the lagged values of Y, X, and u, respectively,
here lagged by one period. Now if we subtract (12.1.9) from (12.1.8), we
obtain
• ∆Yt = β2∆Xt + ∆ut (12.1.10)
• where ∆, known as the first difference operator, tells us to take successive differences of the variables in question. Thus, ∆Yt = (Yt − Yt−1), ∆Xt = (Xt − Xt−1), and ∆ut = (ut − ut−1). For empirical purposes, we write (12.1.10) as
• ∆Yt = β2∆Xt + vt (12.1.11)
• where vt = ∆ut = (ut − ut−1).
• Equation (12.1.9) is known as the level form and Eq. (12.1.10) is known as
the (first) difference form. Both forms are often used in empirical analysis.
For example, if in (12.1.9) Y and X represent the logarithms of consumption expenditure and income, then in (12.1.10) ∆Y and ∆X will represent changes in
the logs of consumption expenditure and income. But as we know, a change
in the log of a variable is a relative change, or a percentage change, if the
former is multiplied by 100. So, instead of studying relationships between
variables in the level form, we may be interested in their relationships in the
growth form.
• Now if the error term in (12.1.8) satisfies the standard OLS assumptions,
particularly the assumption of no autocorrelation, it can be shown that the
error term vt in (12.1.11) is autocorrelated. It may be noted here that models like (12.1.7), which include lagged values of the regressand among the explanatory variables, are known as dynamic regression models.
• The point of the preceding example is that sometimes autocorrelation may
be induced as a result of transforming the original model.
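• The induced autocorrelation is easy to verify numerically: if ut is white noise, then vt = ut − ut−1 has, analytically, cov(vt , vt−1) = −σ²u and hence a first-order autocorrelation of exactly −0.5. A quick check:

```python
import numpy as np

rng = np.random.default_rng(42)
u = rng.normal(size=100_000)        # white noise: no autocorrelation
v = np.diff(u)                      # v_t = u_t - u_{t-1}, the differenced error

r_u = np.corrcoef(u[:-1], u[1:])[0, 1]   # near 0
r_v = np.corrcoef(v[:-1], v[1:])[0, 1]   # near -0.5, as the theory predicts
print(round(r_u, 2), round(r_v, 2))
```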
8. Nonstationarity. We mentioned in Chapter 1 that, while dealing with time
series data, we may have to find out if a given time series is stationary.
• A time series is stationary if its characteristics (e.g., mean, variance, and
covariance) are time invariant; that is, they do not change over time. If that is
not the case, we have a nonstationary time series. In a regression model
such as
• Yt = β1 + β2Xt + ut (12.1.8)
• it is quite possible that both Y and X are nonstationary and therefore the
error u is also nonstationary. In that case, the error term will exhibit
autocorrelation.
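• A simulation sketch of the nonstationarity case: regressing one random walk on another, independently generated, random walk leaves residuals that are themselves close to a random walk, hence strongly autocorrelated (the well-known spurious-regression problem).

```python
import numpy as np

rng = np.random.default_rng(10)
T = 2000
Y = np.cumsum(rng.normal(size=T))   # random walk: nonstationary
X = np.cumsum(rng.normal(size=T))   # an independent random walk

# OLS of Y on X: the two series are unrelated by construction
Xmat = np.column_stack([np.ones(T), X])
beta, *_ = np.linalg.lstsq(Xmat, Y, rcond=None)
resid = Y - Xmat @ beta

# The residuals are heavily positively autocorrelated
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(round(r1, 2))
```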
• It should be noted also that autocorrelation can be positive (Figure 12.3a) as
well as negative, although most economic time series generally exhibit
positive autocorrelation because most of them either move upward or
downward over extended time periods and do not exhibit a constant up-
and-down movement such as that shown in Figure 12.3b.
OLS ESTIMATION IN THE PRESENCE OF AUTOCORRELATION
• What happens to the OLS estimators and their variances if we introduce
autocorrelation in the disturbances by assuming that E(utut+s) ≠ 0 (s ≠ 0) but
retain all the other assumptions of the classical model? We revert once
again to the two-variable regression model to explain the basic ideas
involved, namely,
• Yt = β1 + β2Xt + ut .
• To make any headway, we must assume the mechanism that generates ut, for
E(utut+s) ≠ 0 (s ≠ 0) is too general an assumption to be of any practical use.
As a starting point, or first approximation, one can assume that the
disturbance, or error, terms are generated by the following mechanism.
• ut = ρut−1 + εt , −1 < ρ < 1 (12.2.1)
• where ρ (= rho) is known as the coefficient of autocovariance and where εt is the stochastic disturbance term such that it satisfies the standard OLS assumptions, namely,
• E(εt) = 0
• var (εt) = σ²ε (12.2.2)
• cov (εt , εt+s) = 0, s ≠ 0
• In the engineering literature, an error term with the preceding properties is
often called a white noise error term. What (12.2.1) postulates is that the
value of the disturbance term in period t is equal to rho times its value in the
previous period plus a purely random error term.
• The scheme (12.2.1) is known as a Markov first-order autoregressive scheme, or simply a first-order autoregressive scheme, usually denoted as
AR(1). It is first order because ut and its immediate past value are involved;
that is, the maximum lag is 1. If the model were ut = ρ1ut−1 + ρ2ut−2 + εt , it
would be an AR(2), or second-order, autoregressive scheme, and so on.
• In passing, note that rho, the coefficient of autocovariance in (12.2.1), can
also be interpreted as the first-order coefficient of autocorrelation, or more
accurately, the coefficient of autocorrelation at lag 1.
• Given the AR(1) scheme, it can be shown that (see Appendix 12A, Section 12A.2):
• var (ut) = σ²ε / (1 − ρ²) (12.2.3)
• cov (ut , ut+s) = ρ^s σ²ε / (1 − ρ²) (12.2.4)
• cor (ut , ut+s) = ρ^s (12.2.5)
• Since ρ is a constant between −1 and +1, (12.2.3) shows that under the AR(1)
scheme, the variance of ut is still homoscedastic, but ut is correlated not only
with its immediate past value but its values several periods in the past. It is
critical to note that |ρ| < 1, that is, the absolute value of rho is less than one. If,
for example, rho is one, the variances and covariances listed above are not
defined.
• If |ρ| < 1, we say that the AR(1) process given in (12.2.1) is stationary; that is,
the mean, variance, and covariance of ut do not change over time. If |ρ| is less
than one, then it is clear from (12.2.4) that the value of the covariance will
decline as we go into the distant past.
• One reason we use the AR(1) process is not only because of its simplicity
compared to higher-order AR schemes, but also because in many
applications it has proved to be quite useful. Additionally, a considerable
amount of theoretical and empirical work has been done on the AR(1)
scheme.
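• The stationarity properties of the AR(1) scheme can be verified by simulation (the values of ρ and σε below are arbitrary choices): the sample variance settles near σ²ε/(1 − ρ²) and the sample autocorrelations decay geometrically as ρ, ρ², ρ³, . . .

```python
import numpy as np

rho, T = 0.8, 200_000
rng = np.random.default_rng(7)
eps = rng.normal(size=T)            # white-noise innovations, sigma_eps = 1

u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + eps[t]  # u_t = rho*u_{t-1} + eps_t
u = u[1000:]                        # discard burn-in so the series is stationary

print(round(u.var(), 2))            # near 1/(1 - 0.8**2) = 2.78
for s in (1, 2, 3):
    r_s = np.corrcoef(u[:-s], u[s:])[0, 1]
    print(s, round(r_s, 2))         # near 0.8, 0.64, 0.51
```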
• Now return to our two-variable regression model: Yt = β1 + β2Xt + ut. We know from Chapter 3 that the OLS estimator of the slope coefficient is
• βˆ2 = Σxt yt / Σx²t (12.2.6)
• and its variance is given by
• var (βˆ2) = σ² / Σx²t (12.2.7)
• where the small letters as usual denote deviations from the mean values.
• Now under the AR(1) scheme, it can be shown that the variance of this estimator is:
• var (βˆ2)AR1 = (σ²/Σx²t)[1 + 2ρ Σxt xt+1/Σx²t + 2ρ² Σxt xt+2/Σx²t + · · · + 2ρ^(n−1) x1xn/Σx²t] (12.2.8)
• A comparison of (12.2.8) with (12.2.7) shows the former is equal to the latter
times a term that depends on ρ as well as the sample autocorrelations
between the values taken by the regressor X at various lags. And in general
we cannot foretell whether var (βˆ2) is less than or greater than var (βˆ2)AR1
[but see Eq. (12.4.1) below]. Of course, if rho is zero, the two formulas will
coincide, as they should (why?). Also, if the correlations among the
successive values of the regressor are very small, the usual OLS variance of
the slope estimator will not be seriously biased. But, as a general principle,
the two variances will not be the same.
• To give some idea about the difference between the variances given in
(12.2.7) and (12.2.8), assume that the regressor X also follows the first-order
autoregressive scheme with a coefficient of autocorrelation of r. Then it can
be shown that (12.2.8) reduces to:
• var (βˆ2)AR1 = (σ²/Σx²t) · (1 + rρ)/(1 − rρ) = var (βˆ2)OLS · (1 + rρ)/(1 − rρ) (12.2.9)
• If, for example, r = 0.6 and ρ = 0.8, using (12.2.9) we can check that var (βˆ2)AR1 = 2.8461 var (βˆ2)OLS. To put it another way, var (βˆ2)OLS = (1/2.8461) var (βˆ2)AR1 = 0.3513 var (βˆ2)AR1. That is, the usual OLS formula [i.e., (12.2.7)] will underestimate the variance of (βˆ2)AR1 by about 65 percent. As you will realize, this answer is specific for the given values of r
and ρ. But the point of this exercise is to warn you that a blind application of
the usual OLS formulas to compute the variances and standard errors of
the OLS estimators could give seriously misleading results.
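• The arithmetic behind this warning is a one-liner, since the ratio in (12.2.9) depends only on r and ρ:

```python
# Ratio of the true AR(1) variance to the usual OLS variance, per (12.2.9)
r, rho = 0.6, 0.8
ratio = (1 + r * rho) / (1 - r * rho)
print(round(ratio, 4))      # 2.8462: the OLS formula understates the variance
print(round(1 / ratio, 4))  # 0.3514, i.e., an underestimate of about 65 percent
```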
• What now are the properties of βˆ2? βˆ2 is still linear and unbiased. Is βˆ2 still
BLUE? Unfortunately, it is not; in the class of linear unbiased estimators, it
does not have minimum variance.
RELATIONSHIP BETWEEN WAGES AND PRODUCTIVITY IN THE BUSINESS
SECTOR OF THE UNITED STATES, 1959–1998
• Now that we have discussed the consequences of autocorrelation, the
obvious question is, How do we detect it and how do we correct for it?
• Before we turn to these topics, it is useful to consider a concrete example.
Table 12.4 gives data on indexes of real compensation per hour (Y) and
output per hour (X) in the business sector of the U.S. economy for the period
1959–1998, the base of the indexes being 1992 = 100.
• First plotting the data on Y and X, we obtain Figure 12.7. Since the
relationship between real compensation and labor productivity is expected
to be positive, it is not surprising that the two variables are positively
related. What is surprising is that the relationship between the two is
almost linear, although there is some hint that at higher values of
productivity the relationship between the two may be slightly nonlinear.
• Therefore, we decided to estimate a linear as well as a log–linear model,
with the following results:
• Yˆt = 29.5192 + 0.7136 Xt
• se = (1.9423) (0.0241)
• t = (15.1977) (29.6066) (12.5.1)
• r² = 0.9584 d = 0.1229 σˆ = 2.6755
• where d is the Durbin–Watson statistic, which will be discussed shortly.
• ln Yt = 1.5239 + 0.6716 ln Xt
• se = (0.0762) (0.0175)
• t = (19.9945) (38.2892) (12.5.2)
• r² = 0.9747 d = 0.1542 σˆ = 0.0260
• Qualitatively, both the models give similar results. In both cases the
estimated coefficients are “highly” significant, as indicated by the high t
values.
• In the linear model, if the index of productivity goes up by a unit, on
average, the index of compensation goes up by about 0.71 units. In the log–
linear model, the slope coefficient being elasticity, we find that if the index
of productivity goes up by 1 percent, on average, the index of real
compensation goes up by about 0.67 percent.
• How reliable are the results given in (12.5.1) and (12.5.2) if there is
autocorrelation? As stated previously, if there is autocorrelation, the
estimated standard errors are biased, as a result of which the estimated t ratios
are unreliable. We obviously need to find out if our data suffer from
autocorrelation. In the following section we discuss several methods of
detecting autocorrelation.
DETECTING AUTOCORRELATION
• I. Graphical Method
• Recall that the assumption of nonautocorrelation of the classical model
relates to the population disturbances ut, which are not directly observable.
What we have instead are their proxies, the residuals ˆut, which can be
obtained by the usual OLS procedure. Although the ˆut are not the same
thing as ut, very often a visual examination of the ˆu’s gives us some clues about the likely presence of autocorrelation in the u’s. A visual examination of ˆut or ˆu²t can provide useful information about
autocorrelation, model inadequacy, or specification bias.
• There are various ways of examining the residuals. We can simply plot
them against time, the time sequence plot, as we have done in Figure 12.8,
which shows the residuals obtained from the wages–productivity regression
(12.5.1). The values of these residuals are given in Table 12.5 along with
some other data.
• To see this differently, we can plot ˆut against ˆut−1, that is, plot the residuals
at time t against their value at time (t − 1), a kind of empirical test of the
AR(1) scheme. If the residuals are nonrandom, we should obtain pictures
similar to those shown in Figure 12.3. This plot for our wages–productivity
regression is as shown in Figure 12.9; the underlying data are given in Table 12.5.
• II. The Runs Test
• If we carefully examine Figure 12.8, we notice a peculiar feature: Initially,
we have several residuals that are negative, then there is a series of positive
residuals, and then there are several residuals that are negative. If these
residuals were purely random, could we observe such a pattern? Intuitively,
it seems unlikely. This intuition can be checked by the so-called runs test,
sometimes also known as the Geary test, a nonparametric test.
• To explain the runs test, let us simply note down the signs (+ or −) of the
residuals obtained from the wages–productivity regression, which are given
in the first column of Table 12.5.
• (−−−−−−−−−)(+++++++++++++++++++++)(−−−−−−−−−−) (12.6.1)
• Thus there are 9 negative residuals, followed by 21 positive residuals,
followed by 10 negative residuals, for a total of 40 observations.
• We now define a run as an uninterrupted sequence of one symbol or
attribute, such as + or −. We further define the length of a run as the
number of elements in it. In the sequence shown in (12.6.1), there are 3
runs: a run of 9 minuses (i.e., of length 9), a run of 21 pluses (i.e., of length
21) and a run of 10 minuses (i.e., of length 10). For a better visual effect, we
have presented the various runs in parentheses.
• By examining how runs behave in a strictly random sequence of
observations, one can derive a test of randomness of runs. We ask this
question: Are the 3 runs observed in our illustrative example consisting of
40 observations too many or too few compared with the number of runs
expected in a strictly random sequence of 40 observations? If there are too
many runs, it would mean that in our example the residuals change sign
frequently, thus indicating negative serial correlation (cf. Figure 12.3b).
Similarly, if there are too few runs, they may suggest positive
autocorrelation, as in Figure 12.3a. A priori, then, Figure 12.8 would indicate
positive correlation in the residuals.
• Now let
• N = total number of observations = N1 + N2
• N1 = number of + symbols (i.e., + residuals)
• N2 = number of − symbols (i.e., − residuals)
• R = number of runs
• Under the null hypothesis that successive residuals are independent, and assuming N1 > 10 and N2 > 10, R is approximately normally distributed with
• E(R) = 2N1N2/N + 1 and σ²R = 2N1N2(2N1N2 − N) / [N²(N − 1)] (12.6.2)
• If the null hypothesis of randomness is sustainable, following the properties of the normal distribution, we should expect that
Prob [E(R) − 1.96σR ≤ R ≤ E(R) + 1.96σR] = 0.95 (12.6.3)
• Using the formulas given in (12.6.2), with N1 = 21, N2 = 19, and N = 40, we obtain E(R) = 20.95 and σR = 3.1134.
• The 95% confidence interval for R in our example is thus:
[20.95 ± 1.96(3.1134)] = (14.85, 27.05)
• Since the observed number of runs, R = 3, falls well below this interval, we reject the hypothesis that the residuals are random: there are too few runs, confirming the positive autocorrelation suspected from Figure 12.8.
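• The runs test is straightforward to implement. The sketch below uses the standard normal-approximation formulas for E(R) and σR; applied to the sign sequence in (12.6.1), the observed R = 3 falls far below the lower 95% bound, so randomness is rejected and positive autocorrelation is indicated.

```python
import math

def runs_test(signs):
    """Return (R, E(R), sigma_R) for a string of '+' and '-' signs,
    using E(R) = 2*N1*N2/N + 1 and
    var(R) = 2*N1*N2*(2*N1*N2 - N) / (N**2 * (N - 1))."""
    n1, n2 = signs.count("+"), signs.count("-")
    n = n1 + n2
    # A run starts wherever the sign changes (plus one for the first run)
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mean = 2 * n1 * n2 / n + 1
    sigma = math.sqrt(2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1)))
    return runs, mean, sigma

# The sequence from (12.6.1): 9 minuses, 21 pluses, 10 minuses
seq = "-" * 9 + "+" * 21 + "-" * 10
R, ER, sR = runs_test(seq)
print(R)                            # 3 runs
print(R < ER - 1.96 * sR)           # True: too few runs -> positive autocorrelation
```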
• Durbin–Watson d Test
• The most celebrated test for detecting serial correlation is that developed by statisticians Durbin and Watson. It is popularly known as the Durbin–Watson d statistic, which is defined as
• d = Σ( ˆut − ˆut−1)² / Σ ˆu²t , where the numerator is summed over t = 2, . . . , n and the denominator over t = 1, . . . , n (12.6.5)
• It is important to note the assumptions underlying the d statistic.
• 1. The regression model includes the intercept term. If it is not present, as in
the case of the regression through the origin, it is essential to rerun the
regression including the intercept term to obtain the RSS.
• 2. The explanatory variables, the X’s, are nonstochastic, or fixed in repeated
sampling.
• 3. The disturbances ut are generated by the first-order autoregressive
scheme: ut = ρut−1 + εt. Therefore, it cannot be used to detect higher-order
autoregressive schemes.
• 4. The error term ut is assumed to be normally distributed.
• 5. The regression model does not include the lagged value(s) of the
dependent variable as one of the explanatory variables. Thus, the test is
inapplicable in models of the following type:
• Yt = β1 + β2X2t + β3X3t + ·· ·+βkXkt + γYt−1 + ut (12.6.6)
• where Yt−1 is the one period lagged value of Y.
• 6. There are no missing observations in the data. Thus, in our wages–
productivity regression for the period 1959–1998, if observations for, say,
1978 and 1982 were missing for some reason, the d statistic makes no
allowance for such missing observations.
• Expanding (12.6.5) and noting that Σ ˆut ˆut−1 / Σ ˆu²t is the sample first-order autocorrelation coefficient ρˆ, it can be shown that
• d ≈ 2(1 − ρˆ) (12.6.10)
• But since −1 ≤ ρˆ ≤ 1, (12.6.10) implies that
0 ≤ d ≤ 4 (12.6.11)
• These are the bounds of d; any estimated d value must lie within these
• limits.
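• A minimal implementation of the d statistic, checked against the approximation d ≈ 2(1 − ρˆ) on simulated residuals (the AR coefficient below is an arbitrary choice):

```python
import numpy as np

def durbin_watson(e):
    """d = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2."""
    e = np.asarray(e)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(3)
eps = rng.normal(size=5000)

# White-noise residuals: rho ~ 0, so d should come out near 2
d_white = durbin_watson(eps)

# Strongly positively autocorrelated residuals: rho = 0.9, d near 2(1-0.9) = 0.2
u = np.zeros(5000)
for t in range(1, 5000):
    u[t] = 0.9 * u[t - 1] + eps[t]
d_pos = durbin_watson(u)

print(round(d_white, 1), round(d_pos, 1))
```

• A d near 0, as in the wages–productivity regression, is thus the hallmark of strong positive serial correlation.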
• The mechanics of the Durbin–Watson test are as follows, assuming that the
assumptions underlying the test are fulfilled:
• 1. Run the OLS regression and obtain the residuals.
• 2. Compute d from (12.6.5). (Most computer programs now do this routinely.)
• 3. For the given sample size and given number of explanatory variables,
find out the critical dL and dU values.
• 4. Now follow the decision rules given in Table 12.6. For ease of reference,
these decision rules are also depicted in Figure 12.10.
• To illustrate the mechanics, let us return to our wages–productivity
regression. From the data given in Table 12.5 the estimated d value can be
shown to be 0.1229, suggesting that there is positive serial correlation in the
residuals. From the Durbin–Watson tables, we find that for 40 observations
and one explanatory variable, dL = 1.44 and dU = 1.54 at the 5 percent level.
Since the computed d of 0.1229 lies below dL, we cannot reject the hypothesis that there is positive serial correlation in the residuals.

More Related Content

PPT
autocorrelation from basicc econometrics
PDF
auto correlation.pdf
PPT
Econometrics ch13
PPTX
Autocorrelation
PPTX
Chapter 2 Simple Linear Regression Model.pptx
DOCX
Autocorrelation
PDF
Time series analysis- Part 2
PPTX
Presentation related to time series and the problem of autocorrelation
autocorrelation from basicc econometrics
auto correlation.pdf
Econometrics ch13
Autocorrelation
Chapter 2 Simple Linear Regression Model.pptx
Autocorrelation
Time series analysis- Part 2
Presentation related to time series and the problem of autocorrelation

Similar to Econometrics_ch13.ppt (20)

DOCX
8Simple RegressionCHAPTER OUTLINEThe Regression Mode.docx
PDF
Time Series, Vitalii Radchenko
DOC
Ch 12 Slides.doc. Introduction of science of business
PPTX
MModule 1 ppt.pptx
PDF
econometrics
PPTX
Introduction to Econometrics
PPT
Autocorrelation- Concept, Causes and Consequences
PDF
Essentials of Econometrics 4th Edition Gujarati Solutions Manual
PPT
Estimation of Dynamic Causal Effects -Introduction to Economics
PPTX
Autocorrelation in applied statistics.pptx
PPTX
Advanced Econometrics L9.pptx
PPTX
1.1.Introduction Econometrics.pptx
PPTX
Macroeconomic modelling
PPTX
Final Time series analysis part 2. pptx
DOCX
Chapter 2.docxnjnjnijijijijijijoiopooutdhuj
DOC
Notes part iii
PDF
Advanced Applied Econometrics Panel-Dynamic
PPTX
Effect of global market on indian market
PPTX
Macroeconomic modelling using Eviews
PDF
8Simple RegressionCHAPTER OUTLINEThe Regression Mode.docx
Time Series, Vitalii Radchenko
Ch 12 Slides.doc. Introduction of science of business
MModule 1 ppt.pptx
econometrics
Introduction to Econometrics
Autocorrelation- Concept, Causes and Consequences
Essentials of Econometrics 4th Edition Gujarati Solutions Manual
Estimation of Dynamic Causal Effects -Introduction to Economics
Autocorrelation in applied statistics.pptx
Advanced Econometrics L9.pptx
1.1.Introduction Econometrics.pptx
Macroeconomic modelling
Final Time series analysis part 2. pptx
Chapter 2.docxnjnjnijijijijijijoiopooutdhuj
Notes part iii
Advanced Applied Econometrics Panel-Dynamic
Effect of global market on indian market
Macroeconomic modelling using Eviews
Ad

Recently uploaded (20)

PDF
financing insitute rbi nabard adb imf world bank insurance and credit gurantee
PDF
how_to_earn_50k_monthly_investment_guide.pdf
DOCX
marketing plan Elkhabiry............docx
PPTX
kyc aml guideline a detailed pt onthat.pptx
PDF
Corporate Finance Fundamentals - Course Presentation.pdf
PDF
Bitcoin Layer August 2025: Power Laws of Bitcoin: The Core and Bubbles
PPTX
Antihypertensive_Drugs_Presentation_Poonam_Painkra.pptx
PDF
1a In Search of the Numbers ssrn 1488130 Oct 2009.pdf
PPTX
Session 11-13. Working Capital Management and Cash Budget.pptx
PPTX
Session 3. Time Value of Money.pptx_finance
PDF
ABriefOverviewComparisonUCP600_ISP8_URDG_758.pdf
PDF
Lecture1.pdf buss1040 uses economics introduction
PPTX
The discussion on the Economic in transportation .pptx
PDF
Q2 2025 :Lundin Gold Conference Call Presentation_Final.pdf
PDF
ssrn-3708.kefbkjbeakjfiuheioufh ioehoih134.pdf
PPTX
Unilever_Financial_Analysis_Presentation.pptx
PDF
ECONOMICS AND ENTREPRENEURS LESSONSS AND
PPTX
EABDM Slides for Indifference curve.pptx
PDF
Mathematical Economics 23lec03slides.pdf
PDF
Spending, Allocation Choices, and Aging THROUGH Retirement. Are all of these ...
financing insitute rbi nabard adb imf world bank insurance and credit gurantee
how_to_earn_50k_monthly_investment_guide.pdf
marketing plan Elkhabiry............docx
kyc aml guideline a detailed pt onthat.pptx
Corporate Finance Fundamentals - Course Presentation.pdf
Bitcoin Layer August 2025: Power Laws of Bitcoin: The Core and Bubbles
Antihypertensive_Drugs_Presentation_Poonam_Painkra.pptx
1a In Search of the Numbers ssrn 1488130 Oct 2009.pdf
Session 11-13. Working Capital Management and Cash Budget.pptx
Session 3. Time Value of Money.pptx_finance
ABriefOverviewComparisonUCP600_ISP8_URDG_758.pdf
Lecture1.pdf buss1040 uses economics introduction
The discussion on the Economic in transportation .pptx
Q2 2025 :Lundin Gold Conference Call Presentation_Final.pdf
ssrn-3708.kefbkjbeakjfiuheioufh ioehoih134.pdf
Unilever_Financial_Analysis_Presentation.pptx
ECONOMICS AND ENTREPRENEURS LESSONSS AND
EABDM Slides for Indifference curve.pptx
Mathematical Economics 23lec03slides.pdf
Spending, Allocation Choices, and Aging THROUGH Retirement. Are all of these ...
Ad

Econometrics_ch13.ppt

  • 1. 405 ECONOMETRICS Chapter # 13: AUTOCORRELATION: WHAT HAPPENS IF THE ERROR TERMS ARE CORRELATED? By Domodar N. Gujarati Prof. M. El-Sakka Dept of Economics Kuwait University
  • 2. • In this chapter we take a critical look at the following questions: • 1. What is the nature of autocorrelation? • 2. What are the theoretical and practical consequences of autocorrelation? • 3. Since the assumption of no autocorrelation relates to the unobservable disturbances ut, how does one know that there is autocorrelation in any given situation? Notice that we now use the subscript t to emphasize that we are dealing with time series data. • 4. How does one remedy the problem of autocorrelation?
  • 3. • THE NATURE OF THE PROBLEM • Autocorrelation may be defined as “correlation between members of series of observations ordered in time [as in time series data] or space [as in cross- sectional data].’’ the CLRM assumes that: • E(uiuj ) = 0 i ≠ j (3.2.5) • Put simply, the classical model assumes that the disturbance term relating to any observation is not influenced by the disturbance term relating to any other observation. • For example, if we are dealing with quarterly time series data involving the regression of output on labor and capital inputs and if, say, there is a labor strike affecting output in one quarter, there is no reason to believe that this disruption will be carried over to the next quarter. That is, if output is lower this quarter, there is no reason to expect it to be lower next quarter. Similarly, if we are dealing with cross-sectional data involving the regression of family consumption expenditure on family income, the effect of an increase of one family’s income on its consumption expenditure is not expected to affect the consumption expenditure of another family.
  • 4. • However, if there is autocorrelation, • E(uiuj ) ≠ 0 i ≠ j (12.1.1) • In this situation, the disruption caused by a strike this quarter may very well affect output next quarter, or the increases in the consumption expenditure of one family may very well prompt another family to increase its consumption expenditure. • In Figure 12.1. Figure 12.1a to d shows that there is a discernible pattern among the u’s. Figure 12.1a shows a cyclical pattern; Figure 12.1b and c suggests an upward or downward linear trend in the disturbances; whereas Figure 12.1d indicates that both linear and quadratic trend terms are present in the disturbances. Only Figure 12.1e indicates no systematic pattern, supporting the non-autocorrelation assumption of the classical linear regression model.
• Why does serial correlation occur? There are several reasons:
• 1. Inertia. As is well known, time series such as GNP, price indexes, production, employment, and unemployment exhibit (business) cycles. Starting at the bottom of a recession, when economic recovery begins, most of these series start moving upward. In this upswing, the value of a series at one point in time is greater than its previous value. Thus there is a “momentum” built into them, and it continues until something happens (e.g., an increase in interest rates or taxes or both) to slow them down.
• 2. Specification Bias: Excluded Variables Case. In empirical analysis the researcher often starts with a plausible regression model that may not be the most “perfect” one. For example, the researcher may plot the residuals ûi obtained from the fitted regression and observe patterns such as those shown in Figure 12.1a to d. These residuals (which are proxies for ui) may suggest that some variables that were originally candidates but were not included in the model for a variety of reasons should be included. Often the inclusion of such variables removes the correlation pattern observed among the residuals.
• For example, suppose we have the following demand model:
• Yt = β1 + β2X2t + β3X3t + β4X4t + ut   (12.1.2)
• where Y = quantity of beef demanded, X2 = price of beef, X3 = consumer income, X4 = price of poultry, and t = time. However, for some reason we run the following regression:
• Yt = β1 + β2X2t + β3X3t + vt   (12.1.3)
• Now if (12.1.2) is the “correct” model or the “truth” or true relation, running (12.1.3) is tantamount to letting vt = β4X4t + ut. And to the extent the price of poultry affects the consumption of beef, the error or disturbance term v will reflect a systematic pattern, thus creating (false) autocorrelation.
• A simple test of this would be to run both (12.1.2) and (12.1.3) and see whether the autocorrelation, if any, observed in model (12.1.3) disappears when (12.1.2) is run.
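The excluded-variable story can be sketched numerically. In this hypothetical illustration (all coefficients and series are invented for the sketch; none come from the chapter or from Table 12.4), the “true” demand model includes a smoothly evolving poultry price X4, but we estimate the regression without it. The omitted regressor’s pattern is absorbed by the residuals, which then show strong positive lag-1 autocorrelation:

```python
import numpy as np

# Hypothetical numbers: true model y = 100 - 2*X2 + 0.8*X3 + 1.5*X4,
# but we regress y on a constant, X2, and X3 only, omitting X4.
t = np.arange(40, dtype=float)
X2 = 10 + 0.5 * t                        # price of beef (trending)
X3 = 50 + 1.0 * t + 0.3 * np.sin(t / 3)  # consumer income
X4 = 0.02 * (t - 20) ** 2                # price of poultry: smooth, slowly evolving
y = 100 - 2.0 * X2 + 0.8 * X3 + 1.5 * X4  # no noise, to isolate the bias

# Fit the misspecified model y ~ 1 + X2 + X3 by OLS.
Z = np.column_stack([np.ones_like(t), X2, X3])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta

# The omitted smooth regressor leaves a systematic pattern in the residuals:
# the lag-1 autocorrelation is strongly positive.
r1 = np.corrcoef(resid[1:], resid[:-1])[0, 1]
print(round(r1, 3))
```

Here the residuals inherit the quadratic shape of the omitted X4 series, exactly the kind of pattern Figure 12.1a to d describes.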
• 3. Specification Bias: Incorrect Functional Form. Suppose the “true” or correct model in a cost-output study is as follows:
• Marginal costi = β1 + β2outputi + β3output2i + ui   (12.1.4)
• but we fit the following model:
• Marginal costi = α1 + α2outputi + vi   (12.1.5)
• The marginal cost curve corresponding to the “true” model is shown in Figure 12.2 along with the “incorrect” linear cost curve.
• As Figure 12.2 shows, between points A and B the linear marginal cost curve will consistently overestimate the true marginal cost, whereas beyond these points it will consistently underestimate it. This result is to be expected, because the disturbance term vi is, in fact, equal to β3output2i + ui, and hence will catch the systematic effect of the output2 term on marginal cost. In this case, vi will reflect autocorrelation because of the use of an incorrect functional form.
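The over/under/over pattern of Figure 12.2 can be reproduced with made-up coefficients (the quadratic cost parameters below are invented, not the chapter’s). Fitting a straight line to a quadratic marginal-cost schedule leaves residuals whose signs form exactly three runs, the signature of a wrong functional form:

```python
import numpy as np

# Invented "true" quadratic marginal cost: MC = 10 + 2q + 0.5 q^2.
q = np.arange(1, 21, dtype=float)        # output levels
mc = 10 + 2.0 * q + 0.5 * q ** 2

# Fit the incorrect linear model mc ~ 1 + q.
Z = np.column_stack([np.ones_like(q), q])
alpha, *_ = np.linalg.lstsq(Z, mc, rcond=None)
resid = mc - Z @ alpha

# The linear fit underestimates at the ends and overestimates in the middle,
# so the residual signs are + ... - ... + : exactly three runs.
signs = np.sign(resid)
runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
print(runs)
```

The three sign runs in the residuals are what the runs test, discussed later in the chapter, is designed to flag.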
• 4. Cobweb Phenomenon. The supply of many agricultural commodities reflects the so-called cobweb phenomenon, where supply reacts to price with a lag of one time period because supply decisions take time to implement (the gestation period). Thus, at the beginning of this year’s planting of crops, farmers are influenced by the price prevailing last year, so that their supply function is
• Supplyt = β1 + β2Pt−1 + ut   (12.1.6)
• Suppose at the end of period t, price Pt turns out to be lower than Pt−1. Therefore, in period t + 1 farmers may very well decide to produce less than they did in period t. Obviously, in this situation the disturbances ut are not expected to be random, because if the farmers overproduce in year t, they are likely to reduce their production in t + 1, and so on, leading to a cobweb pattern.
• 5. Lags. In a time series regression of consumption expenditure on income, it is not uncommon to find that the consumption expenditure in the current period depends, among other things, on the consumption expenditure of the previous period. That is,
• Consumptiont = β1 + β2incomet + β3consumptiont−1 + ut   (12.1.7)
• A regression such as (12.1.7) is known as autoregression because one of the explanatory variables is the lagged value of the dependent variable. The rationale for a model such as (12.1.7) is simple: consumers do not change their consumption habits readily, for psychological, technological, or institutional reasons. Now if we neglect the lagged term in (12.1.7), the resulting error term will reflect a systematic pattern due to the influence of lagged consumption on current consumption.
• 6. “Manipulation” of Data. In empirical analysis, the raw data are often “manipulated.” For example, in time series regressions involving quarterly data, such data are usually derived from the monthly data by simply adding three monthly observations and dividing the sum by 3. This averaging introduces smoothness into the data by dampening the fluctuations in the monthly data. Therefore, the graph plotting the quarterly data looks much smoother than the monthly data, and this smoothness may itself lead to a systematic pattern in the disturbances, thereby introducing autocorrelation.
• Another source of manipulation is interpolation or extrapolation of data. For example, the Census of Population is conducted every 10 years in this country, the last being in 2000 and the one before that in 1990. Now if there is a need to obtain data for some year within the intercensus period 1990–2000, the common practice is to interpolate on the basis of some ad hoc assumptions. All such data “massaging” techniques might impose upon the data a systematic pattern that might not exist in the original data.
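The smoothing effect of averaging can be sketched in a few lines. One caveat: non-overlapping quarterly averages of serially uncorrelated monthly data would themselves be uncorrelated, so this sketch uses an overlapping three-period moving average, a common smoothing device, to make the mechanism visible. For iid noise, adjacent three-period averages share two of their three terms, so their correlation is 2/3 in theory:

```python
import numpy as np

# "Monthly" white noise, smoothed by an overlapping 3-period moving average.
rng = np.random.default_rng(0)
monthly = rng.standard_normal(300_000)
smoothed = np.convolve(monthly, np.ones(3) / 3, mode="valid")

# Adjacent averages overlap in 2 of 3 terms, so the lag-1 autocorrelation
# of the smoothed series should be close to 2/3 even though the raw
# series is serially uncorrelated.
r1 = np.corrcoef(smoothed[1:], smoothed[:-1])[0, 1]
print(round(r1, 3))   # close to 2/3
```

The point carries over to interpolation: any transformation that makes neighboring observations share information tends to build serial correlation into the transformed series.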
• 7. Data Transformation. Consider the following model:
• Yt = β1 + β2Xt + ut   (12.1.8)
• where, say, Y = consumption expenditure and X = income. Since (12.1.8) holds true at every time period, it holds true also in the previous time period, (t − 1). So, we can write (12.1.8) as:
• Yt−1 = β1 + β2Xt−1 + ut−1   (12.1.9)
• Yt−1, Xt−1, and ut−1 are known as the lagged values of Y, X, and u, respectively, here lagged by one period. Now if we subtract (12.1.9) from (12.1.8), we obtain
• ∆Yt = β2∆Xt + ∆ut   (12.1.10)
• where ∆, known as the first difference operator, tells us to take successive differences of the variables in question. Thus, ∆Yt = (Yt − Yt−1), ∆Xt = (Xt − Xt−1), and ∆ut = (ut − ut−1). For empirical purposes, we write (12.1.10) as
• ∆Yt = β2∆Xt + vt   (12.1.11)
• where vt = ∆ut = (ut − ut−1).
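The autocorrelation that differencing induces can be derived directly: if ut is white noise with variance σ², then cov(vt, vt−1) = cov(ut − ut−1, ut−1 − ut−2) = −σ², while var(vt) = 2σ², so corr(vt, vt−1) = −0.5. A quick simulation check of that claim:

```python
import numpy as np

# Start from well-behaved (white noise) errors u_t ...
rng = np.random.default_rng(42)
u = rng.standard_normal(200_000)

# ... and difference them, as in (12.1.11): v_t = u_t - u_{t-1}.
v = np.diff(u)

# The differenced errors are autocorrelated by construction,
# with lag-1 correlation close to the theoretical value of -0.5.
r1 = np.corrcoef(v[1:], v[:-1])[0, 1]
print(round(r1, 3))   # near -0.5
```

So even when the level-form error satisfies all the classical assumptions, the difference-form error violates the no-autocorrelation assumption.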
• Equation (12.1.9) is known as the level form and Eq. (12.1.10) is known as the (first) difference form. Both forms are often used in empirical analysis. For example, if in (12.1.9) Y and X represent the logarithms of consumption expenditure and income, then in (12.1.10) ∆Y and ∆X will represent changes in the logs of consumption expenditure and income. But as we know, a change in the log of a variable is a relative change, or a percentage change if the former is multiplied by 100. So, instead of studying relationships between variables in the level form, we may be interested in their relationships in the growth form.
• Now if the error term in (12.1.8) satisfies the standard OLS assumptions, particularly the assumption of no autocorrelation, it can be shown that the error term vt in (12.1.11) is autocorrelated. It may be noted here that models like (12.1.11) are known as dynamic regression models, that is, models involving lagged regressands.
• The point of the preceding example is that sometimes autocorrelation may be induced as a result of transforming the original model.
• 8. Nonstationarity. We mentioned in Chapter 1 that, while dealing with time series data, we may have to find out whether a given time series is stationary.
• A time series is stationary if its characteristics (e.g., mean, variance, and covariance) are time invariant; that is, they do not change over time. If that is not the case, we have a nonstationary time series. In a regression model such as
• Yt = β1 + β2Xt + ut   (12.1.8)
• it is quite possible that both Y and X are nonstationary and therefore the error u is also nonstationary. In that case, the error term will exhibit autocorrelation.
• It should also be noted that autocorrelation can be positive (Figure 12.3a) as well as negative, although most economic time series generally exhibit positive autocorrelation, because most of them either move upward or downward over extended time periods and do not exhibit a constant up-and-down movement such as that shown in Figure 12.3b.
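A sketch of the nonstationarity point, with purely illustrative numbers: if Y and X in (12.1.8) are random walks, the regression error inherits the nonstationarity and the OLS residuals are heavily autocorrelated:

```python
import numpy as np

# Y and X are driven by random walks (nonstationary series); the error u
# is itself a random walk, so it is nonstationary too.
rng = np.random.default_rng(1)
n = 2_000
x = np.cumsum(rng.standard_normal(n))   # nonstationary regressor
u = np.cumsum(rng.standard_normal(n))   # nonstationary error
y = 1.0 + 2.0 * x + u

# OLS of y on a constant and x.
Z = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta

# The residuals track the nonstationary error, so their lag-1
# autocorrelation is strongly positive (near 1).
r1 = np.corrcoef(resid[1:], resid[:-1])[0, 1]
print(round(r1, 3))
```

This is the positive-autocorrelation case of Figure 12.3a: the residuals wander in long swings rather than flipping sign from one period to the next.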
• OLS ESTIMATION IN THE PRESENCE OF AUTOCORRELATION
• What happens to the OLS estimators and their variances if we introduce autocorrelation in the disturbances by assuming that E(utut+s) ≠ 0 (s ≠ 0) but retain all the other assumptions of the classical model? We revert once again to the two-variable regression model to explain the basic ideas involved, namely,
• Yt = β1 + β2Xt + ut
• To make any headway, we must assume the mechanism that generates ut, for E(utut+s) ≠ 0 (s ≠ 0) is too general an assumption to be of any practical use. As a starting point, or first approximation, one can assume that the disturbance, or error, terms are generated by the following mechanism:
• ut = ρut−1 + εt   −1 < ρ < 1   (12.2.1)
• where ρ (= rho) is known as the coefficient of autocovariance and εt is the stochastic disturbance term such that it satisfies the standard OLS assumptions, namely,
• E(εt) = 0
• var(εt) = σ²ε   (12.2.2)
• cov(εt, εt+s) = 0   s ≠ 0
• In the engineering literature, an error term with the preceding properties is often called a white noise error term. What (12.2.1) postulates is that the value of the disturbance term in period t is equal to rho times its value in the previous period plus a purely random error term.
• The scheme (12.2.1) is known as a Markov first-order autoregressive scheme, or simply a first-order autoregressive scheme, usually denoted AR(1). It is first order because ut and its immediate past value are involved; that is, the maximum lag is 1. If the model were ut = ρ1ut−1 + ρ2ut−2 + εt, it would be an AR(2), or second-order, autoregressive scheme, and so on.
• In passing, note that rho, the coefficient of autocovariance in (12.2.1), can also be interpreted as the first-order coefficient of autocorrelation, or more accurately, the coefficient of autocorrelation at lag 1.
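The AR(1) scheme (12.2.1) is easy to simulate, and simulation makes its two defining properties visible: the lag-1 autocorrelation of ut is close to ρ, and the variance settles at the homoscedastic value σ²ε/(1 − ρ²). A minimal generator (parameter values are arbitrary choices for the demonstration):

```python
import numpy as np

def ar1(rho, n, sigma_eps=1.0, burn=1_000, seed=0):
    """Simulate u_t = rho * u_{t-1} + eps_t with white-noise eps_t."""
    rng = np.random.default_rng(seed)
    eps = sigma_eps * rng.standard_normal(n + burn)
    u = np.empty(n + burn)
    u[0] = eps[0]
    for t in range(1, n + burn):
        u[t] = rho * u[t - 1] + eps[t]
    return u[burn:]   # drop the burn-in so the series is (approximately) stationary

u = ar1(rho=0.7, n=200_000)

# Coefficient of autocorrelation at lag 1: should be close to rho = 0.7.
r1 = np.corrcoef(u[1:], u[:-1])[0, 1]
# Variance: should be close to sigma_eps^2 / (1 - rho^2) = 1/0.51.
v = u.var()
print(round(r1, 2), round(v, 2))
```

With |ρ| < 1 and a burn-in to wash out the arbitrary starting value, the simulated series behaves like the stationary process the next slides describe.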
• Given the AR(1) scheme, it can be shown that (see Appendix 12A, Section 12A.2):
• var(ut) = σ²ε/(1 − ρ²)   (12.2.3)
• cov(ut, ut+s) = ρ^s σ²ε/(1 − ρ²)   (12.2.4)
• Since ρ is a constant between −1 and +1, (12.2.3) shows that under the AR(1) scheme the variance of ut is still homoscedastic, but ut is correlated not only with its immediate past value but with its values several periods in the past. It is critical to note that |ρ| < 1, that is, the absolute value of rho is less than one. If, for example, rho is one, the variances and covariances listed above are not defined.
• If |ρ| < 1, we say that the AR(1) process given in (12.2.1) is stationary; that is, the mean, variance, and covariance of ut do not change over time. If |ρ| is less than one, then it is clear from (12.2.4) that the value of the covariance will decline as we go into the distant past.
• One reason we use the AR(1) process is not only its simplicity compared to higher-order AR schemes, but also that in many applications it has proved to be quite useful. Additionally, a considerable amount of theoretical and empirical work has been done on the AR(1) scheme.
• Now return to our two-variable regression model: Yt = β1 + β2Xt + ut. We know from Chapter 3 that the OLS estimator of the slope coefficient is
• β̂2 = Σxtyt/Σx²t   (12.2.6)
• and its variance is given by
• var(β̂2) = σ²/Σx²t   (12.2.7)
• where the small letters as usual denote deviations from the mean values.
• Now under the AR(1) scheme, it can be shown that the variance of this estimator is:
• var(β̂2)AR1 = (σ²/Σx²t)[1 + 2ρ(Σxtxt+1/Σx²t) + 2ρ²(Σxtxt+2/Σx²t) + ··· + 2ρ^(n−1)(x1xn/Σx²t)]   (12.2.8)
• A comparison of (12.2.8) with (12.2.7) shows that the former is equal to the latter times a term that depends on ρ as well as the sample autocorrelations between the values taken by the regressor X at various lags. In general we cannot foretell whether var(β̂2) is less than or greater than var(β̂2)AR1 [but see Eq. (12.4.1) below]. Of course, if rho is zero, the two formulas will coincide, as they should (why?). Also, if the correlations among the successive values of the regressor are very small, the usual OLS variance of the slope estimator will not be seriously biased. But, as a general principle, the two variances will not be the same.
• To give some idea about the difference between the variances given in (12.2.7) and (12.2.8), assume that the regressor X also follows the first-order autoregressive scheme with a coefficient of autocorrelation of r. Then it can be shown that (12.2.8) reduces to:
• var(β̂2)AR(1) = (σ²/Σx²t)[(1 + rρ)/(1 − rρ)] = var(β̂2)OLS[(1 + rρ)/(1 − rρ)]   (12.2.9)
• If, for example, r = 0.6 and ρ = 0.8, using (12.2.9) we can check that var(β̂2)AR1 = 2.8461 var(β̂2)OLS. To put it another way, var(β̂2)OLS = (1/2.8461) var(β̂2)AR1 = 0.3513 var(β̂2)AR1. That is, the usual OLS formula [i.e., (12.2.7)] will underestimate the variance of (β̂2)AR1 by about 65 percent. As you will realize, this answer is specific to the given values of r and ρ. But the point of this exercise is to warn you that a blind application of the usual OLS formulas to compute the variances and standard errors of the OLS estimators could give seriously misleading results.
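The arithmetic of the slide's example can be verified in two lines. The inflation factor in (12.2.9) is (1 + rρ)/(1 − rρ); for r = 0.6 and ρ = 0.8:

```python
# Variance inflation factor from (12.2.9) for the slide's example values.
r, rho = 0.6, 0.8
factor = (1 + r * rho) / (1 - r * rho)

print(round(factor, 4))       # ~2.8462: var(b2)_AR1 / var(b2)_OLS
print(round(1 / factor, 4))   # ~0.3514: share of the true variance the OLS formula reports
print(round(1 - 1 / factor, 2))   # ~0.65: the roughly 65% understatement
```

Trying other (r, ρ) pairs in this snippet shows how quickly the understatement grows as rρ approaches 1.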
• What now are the properties of β̂2? It is still linear and unbiased. Is β̂2 still BLUE? Unfortunately, it is not; in the class of linear unbiased estimators, it does not have minimum variance.
• RELATIONSHIP BETWEEN WAGES AND PRODUCTIVITY IN THE BUSINESS SECTOR OF THE UNITED STATES, 1959–1998
• Now that we have discussed the consequences of autocorrelation, the obvious question is: How do we detect it, and how do we correct for it?
• Before we turn to these topics, it is useful to consider a concrete example. Table 12.4 gives data on indexes of real compensation per hour (Y) and output per hour (X) in the business sector of the U.S. economy for the period 1959–1998, the base of the indexes being 1992 = 100.
• First plotting the data on Y and X, we obtain Figure 12.7. Since the relationship between real compensation and labor productivity is expected to be positive, it is not surprising that the two variables are positively related. What is surprising is that the relationship between the two is almost linear, although there is some hint that at higher values of productivity the relationship may be slightly nonlinear.
• Therefore, we decided to estimate a linear as well as a log–linear model, with the following results:
• Ŷt = 29.5192 + 0.7136Xt
•   se = (1.9423) (0.0241)
•   t = (15.1977) (29.6066)   (12.5.1)
•   r² = 0.9584   d = 0.1229   σ̂ = 2.6755
• where d is the Durbin–Watson statistic, which will be discussed shortly.
• ln Yt = 1.5239 + 0.6716 ln Xt
•   se = (0.0762) (0.0175)
•   t = (19.9945) (38.2892)   (12.5.2)
•   r² = 0.9747   d = 0.1542   σ̂ = 0.0260
• Qualitatively, both models give similar results. In both cases the estimated coefficients are “highly” significant, as indicated by the high t values.
• In the linear model, if the index of productivity goes up by a unit, on average, the index of compensation goes up by about 0.71 units. In the log–linear model, the slope coefficient being the elasticity, we find that if the index of productivity goes up by 1 percent, on average, the index of real compensation goes up by about 0.67 percent.
• How reliable are the results given in (12.5.1) and (12.5.2) if there is autocorrelation? As stated previously, if there is autocorrelation, the estimated standard errors are biased, as a result of which the estimated t ratios are unreliable. We obviously need to find out whether our data suffer from autocorrelation. In the following section we discuss several methods of detecting autocorrelation.
• DETECTING AUTOCORRELATION
• I. Graphical Method
• Recall that the assumption of nonautocorrelation of the classical model relates to the population disturbances ut, which are not directly observable. What we have instead are their proxies, the residuals ût, which can be obtained by the usual OLS procedure. Although the ût are not the same thing as the ut, very often a visual examination of the û’s gives us some clues about the likely presence of autocorrelation in the u’s. A visual examination of ût or (û²t) can provide useful information not only about autocorrelation but also about model inadequacy or specification bias.
• There are various ways of examining the residuals. We can simply plot them against time, the time sequence plot, as we have done in Figure 12.8, which shows the residuals obtained from the wages–productivity regression (12.5.1). The values of these residuals are given in Table 12.5 along with some other data.
• To see this differently, we can plot ût against ût−1, that is, plot the residuals at time t against their value at time (t − 1), a kind of empirical test of the AR(1) scheme. If the residuals are nonrandom, we should obtain pictures similar to those shown in Figure 12.3. This plot for our wages–productivity regression is shown in Figure 12.9; the underlying data are given in Table 12.5.
• II. The Runs Test
• If we carefully examine Figure 12.8, we notice a peculiar feature: initially, we have several residuals that are negative, then there is a series of positive residuals, and then there are several residuals that are negative again. If these residuals were purely random, could we observe such a pattern? Intuitively, it seems unlikely. This intuition can be checked by the so-called runs test, sometimes also known as the Geary test, a nonparametric test.
• To explain the runs test, let us simply note down the signs (+ or −) of the residuals obtained from the wages–productivity regression, which are given in the first column of Table 12.5:
• (−−−−−−−−−)(+++++++++++++++++++++)(−−−−−−−−−−)   (12.6.1)
• Thus there are 9 negative residuals, followed by 21 positive residuals, followed by 10 negative residuals, for a total of 40 observations.
• We now define a run as an uninterrupted sequence of one symbol or attribute, such as + or −. We further define the length of a run as the number of elements in it. In the sequence shown in (12.6.1), there are 3 runs: a run of 9 minuses (i.e., of length 9), a run of 21 pluses (i.e., of length 21), and a run of 10 minuses (i.e., of length 10). For a better visual effect, we have presented the various runs in parentheses.
• By examining how runs behave in a strictly random sequence of observations, one can derive a test of the randomness of runs. We ask this question: Are the 3 runs observed in our illustrative example of 40 observations too many or too few compared with the number of runs expected in a strictly random sequence of 40 observations? If there are too many runs, it would mean that the residuals change sign frequently, thus indicating negative serial correlation (cf. Figure 12.3b). Similarly, if there are too few runs, they may suggest positive autocorrelation, as in Figure 12.3a. A priori, then, Figure 12.8 would indicate positive correlation in the residuals.
• Now let
• N = total number of observations = N1 + N2
• N1 = number of + symbols (i.e., + residuals)
• N2 = number of − symbols (i.e., − residuals)
• R = number of runs
• Under the null hypothesis that the successive signs are random, the number of runs is (asymptotically) normally distributed with
• E(R) = 2N1N2/N + 1
• σ²R = 2N1N2(2N1N2 − N)/[N²(N − 1)]   (12.6.2)
• If the null hypothesis of randomness is sustainable, following the properties of the normal distribution, we should expect that
• Prob [E(R) − 1.96σR ≤ R ≤ E(R) + 1.96σR] = 0.95   (12.6.3)
• Using the formulas given in (12.6.2), with N1 = 21, N2 = 19, and N = 40, we obtain E(R) = 20.95 and σR = 3.1134.
• The 95% confidence interval for R in our example is thus: [20.95 ± 1.96(3.1134)] = (14.8477, 27.0523). Since the observed number of runs, R = 3, falls well below this interval, we can reject the hypothesis that the sequence of residuals is random, reinforcing the suspicion of positive autocorrelation.
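The whole runs-test calculation fits in a short script. Applying the standard formulas in (12.6.2) to the sign pattern (12.6.1) of 9 minuses, 21 pluses, and 10 minuses:

```python
import math

# Sign pattern (12.6.1): 9 minuses, 21 pluses, 10 minuses.
signs = "-" * 9 + "+" * 21 + "-" * 10

R = 1 + sum(a != b for a, b in zip(signs, signs[1:]))   # number of runs
N1 = signs.count("+")
N2 = signs.count("-")
N = N1 + N2

# Runs-test mean and variance under the null of randomness, Eq. (12.6.2).
ER = 2 * N1 * N2 / N + 1
var_R = 2 * N1 * N2 * (2 * N1 * N2 - N) / (N ** 2 * (N - 1))
sd_R = math.sqrt(var_R)

lo, hi = ER - 1.96 * sd_R, ER + 1.96 * sd_R
print(R, round(ER, 2), round(sd_R, 4))
print(lo <= R <= hi)   # False: R = 3 is far too few runs, so randomness is rejected
```

Too few runs means long unbroken stretches of one sign, exactly the positive-autocorrelation pattern of Figure 12.3a.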
• III. Durbin–Watson d Test
• The most celebrated test for detecting serial correlation is that developed by statisticians Durbin and Watson. It is popularly known as the Durbin–Watson d statistic, which is defined as
• d = Σ(ût − ût−1)²/Σû²t   (12.6.5)
• where the sum in the numerator runs from t = 2 to n and the sum in the denominator from t = 1 to n; that is, d is the ratio of the sum of squared differences in successive residuals to the RSS. It is important to note the assumptions underlying the d statistic:
• 1. The regression model includes the intercept term. If it is not present, as in the case of regression through the origin, it is essential to rerun the regression including the intercept term to obtain the RSS.
• 2. The explanatory variables, the X’s, are nonstochastic, or fixed in repeated sampling.
• 3. The disturbances ut are generated by the first-order autoregressive scheme: ut = ρut−1 + εt. Therefore, the test cannot be used to detect higher-order autoregressive schemes.
• 4. The error term ut is assumed to be normally distributed.
• 5. The regression model does not include the lagged value(s) of the dependent variable as one of the explanatory variables. Thus, the test is inapplicable in models of the following type:
• Yt = β1 + β2X2t + β3X3t + ··· + βkXkt + γYt−1 + ut   (12.6.6)
• where Yt−1 is the one-period lagged value of Y.
• 6. There are no missing observations in the data. Thus, in our wages–productivity regression for the period 1959–1998, if observations for, say, 1978 and 1982 were missing for some reason, the d statistic makes no allowance for such missing observations.
• d ≈ 2(1 − ρ̂)   (12.6.10)
• But since −1 ≤ ρ̂ ≤ 1, (12.6.10) implies that
• 0 ≤ d ≤ 4   (12.6.11)
• These are the bounds of d; any estimated d value must lie within these limits.
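The approximation d ≈ 2(1 − ρ̂) and the bounds 0 ≤ d ≤ 4 can be checked numerically on any residual series; the smooth sine series below is an arbitrary stand-in for autocorrelated residuals, chosen only because it is deterministic:

```python
import numpy as np

# An arbitrary smooth "residual" series (stand-in for OLS residuals).
e = np.sin(0.3 * np.arange(200))

# d from its definition (12.6.5), and rho_hat as the lag-1 coefficient.
d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
rho_hat = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)

# The two quantities differ only by end-point terms of order 1/n.
print(round(d, 3), round(2 * (1 - rho_hat), 3))   # nearly identical
```

Because the series is smooth (ρ̂ near +1), d comes out close to 0, illustrating the rule of thumb that small d signals positive autocorrelation and d near 2 signals none.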
• The mechanics of the Durbin–Watson test are as follows, assuming that the assumptions underlying the test are fulfilled:
• 1. Run the OLS regression and obtain the residuals.
• 2. Compute d from (12.6.5). (Most computer programs now do this routinely.)
• 3. For the given sample size and given number of explanatory variables, find out the critical dL and dU values.
• 4. Now follow the decision rules given in Table 12.6. For ease of reference, these decision rules are also depicted in Figure 12.10.
• To illustrate the mechanics, let us return to our wages–productivity regression. From the data given in Table 12.5 the estimated d value can be shown to be 0.1229, suggesting that there is positive serial correlation in the residuals. From the Durbin–Watson tables, we find that for 40 observations and one explanatory variable, dL = 1.44 and dU = 1.54 at the 5 percent level. Since the computed d of 0.1229 lies below dL, we reject the null hypothesis of no positive serial correlation: there is strong evidence of positive serial correlation in the residuals.
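Steps 3 and 4 above amount to locating d among five regions. A small helper encoding the usual Durbin–Watson decision rules (the region boundaries follow the standard dL/dU scheme of Table 12.6), applied to the wages–productivity example:

```python
def dw_decision(d, dL, dU):
    """Classify a Durbin-Watson d value given critical bounds dL and dU."""
    if d < dL:
        return "reject H0: evidence of positive autocorrelation"
    if d < dU:
        return "zone of indecision"
    if d <= 4 - dU:
        return "do not reject H0: no evidence of autocorrelation"
    if d < 4 - dL:
        return "zone of indecision"
    return "reject H0: evidence of negative autocorrelation"

# Wages-productivity example: d = 0.1229, n = 40, one regressor, 5% level.
print(dw_decision(0.1229, dL=1.44, dU=1.54))
```

Since 0.1229 < dL = 1.44, the helper lands in the first region, matching the conclusion drawn in the text.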