Stochastic Processes - part 1
Let ξ denote the random outcome of an experiment. To every such outcome suppose a waveform X(t, ξ) is assigned. The collection of such waveforms forms a stochastic process. The set {ξk} and the time index t can each be continuous or discrete (countably infinite or finite). For fixed ξk ∈ S (the set of all experimental outcomes), X(t, ξk) is a specific time function. For fixed t = t1, X1 = X(t1, ξ) is a random variable.
The ensemble of all such realizations X(t, ξ) over time represents the stochastic process X(t). For example,

X(t) = a cos(ω0t + φ), φ ∼ U(0, 2π).

Stochastic processes are everywhere: Brownian motion, stock market fluctuations, and various queuing systems all represent stochastic phenomena. If X(t) is a stochastic process, then for fixed t, X(t) represents a random variable.
Its distribution function is given by

FX(x, t) = P{X(t) ≤ x}

Notice that FX(x, t) depends on t, since for a different t we obtain a different random variable. Further,

fX(x, t) = dFX(x, t)/dx
FX(x1, x2, t1, t2) = P{X(t1) ≤ x1, X(t2) ≤ x2}
fX(x1, x2, t1, t2) = ∂²FX(x1, x2, t1, t2)/∂x1∂x2

represents the 2nd-order density function of the process X(t).
Similarly fX(x1, x2, · · · , xn, t1, t2, · · · , tn) represents the nth-order density function of the process X(t). Complete specification of the stochastic process X(t) requires the knowledge of fX(x1, x2, · · · , xn, t1, t2, · · · , tn) for all ti, i = 1, 2, · · · , n, and for all n, an almost impossible task in reality.
µ(t) = E{X(t)} = ∫_{−∞}^{+∞} x fX(x, t) dx

In general, the mean of a process can depend on the time index t.

RXX(t1, t2) = E{X(t1)X*(t2)} = ∬ x1 x2* fX(x1, x2, t1, t2) dx1 dx2

It represents the interrelationship between the random variables X1 = X(t1) and X2 = X(t2) generated from the process X(t).
1. RXX(t1, t2) = RXX*(t2, t1) = [E{X(t2)X*(t1)}]*

2. RXX(t, t) = E{|X(t)|²} > 0.

3. Σ_{i=1}^{n} Σ_{j=1}^{n} ai aj* RXX(ti, tj) ≥ 0.

CXX(t1, t2) = RXX(t1, t2) − µX(t1)µX*(t2)
Example:

z = ∫_{−T}^{T} X(t) dt

E[|z|²] = ∫_{−T}^{T} ∫_{−T}^{T} E{X(t1)X*(t2)} dt1 dt2 = ∫_{−T}^{T} ∫_{−T}^{T} RXX(t1, t2) dt1 dt2
Example:

X(t) = a cos(ω0t + ϕ), ϕ ∼ U(0, 2π).

µX(t) = E{X(t)} = aE{cos(ω0t + ϕ)} = a cos ω0t E{cos ϕ} − a sin ω0t E{sin ϕ} = 0,

since E{cos ϕ} = (1/2π) ∫_{0}^{2π} cos ϕ dϕ = 0 = E{sin ϕ}.

RXX(t1, t2) = a² E{cos(ω0t1 + ϕ) cos(ω0t2 + ϕ)}
= (a²/2) E{cos ω0(t1 − t2) + cos(ω0(t1 + t2) + 2ϕ)}
= (a²/2) cos ω0(t1 − t2).
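A quick Monte Carlo sketch of this example in MATLAB (the constants a, ω0, t1, t2 and the ensemble size M are illustrative choices, not from the original notes): averaging over realizations of ϕ should reproduce the zero mean and the lag-only dependence of RXX.

rng(1);
a = 2;  w0 = 2*pi;  M = 1e5;            % illustrative constants
phi = 2*pi*rand(M,1);                   % phi ~ U(0, 2*pi)
t1 = 0.3;  t2 = 0.1;                    % two fixed time instants
x1 = a*cos(w0*t1 + phi);                % ensemble of X(t1) values
x2 = a*cos(w0*t2 + phi);                % ensemble of X(t2) values
fprintf('mean     : %8.4f  (theory 0)\n', mean(x1));
fprintf('R(t1,t2) : %8.4f  (theory %.4f)\n', ...
        mean(x1.*x2), a^2/2*cos(w0*(t1-t2)));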
Stationary Stochastic Processes
Stationary processes exhibit statistical properties that are
invariant to shift in the time index. Thus, for example,
second-order stationarity implies that the statistical proper-
ties of the pairs {X(t1), X(t2)} and {X(t1 +c), X(t2 +c)} are
the same for any c. Similarly first-order stationarity implies
that the statistical properties of X(ti) and X(ti + c) are the
same for any c.
Strict-Sense Stationary (S.S.S.)

A process X(t) is strict-sense stationary if

fX(x1, · · · , xn, t1, · · · , tn) ≡ fX(x1, · · · , xn, t1 + c, · · · , tn + c)   (1)

for any c, where the left side represents the joint density function of the random variables
X1 = X(t1), X2 = X(t2), · · · , Xn = X(tn) and the right side corresponds to the joint density function of the random variables X′1 = X(t1 + c), X′2 = X(t2 + c), · · · , X′n = X(tn + c). A process X(t) is said to be strict-sense stationary if (1) is true for all ti, i = 1, 2, · · · , n, for all n = 1, 2, · · · , and for any c.
For a first-order strict-sense stationary process,

fX(x, t) ≡ fX(x, t + c)

for any c; choosing c = −t gives

fX(x, t) = fX(x)   (2)

i.e., the first-order density of X(t) is independent of t. In that case

E[X(t)] = ∫_{−∞}^{+∞} x fX(x) dx = µ, a constant.
Similarly, for a 2nd-order strict-sense stationary process
fX(x1, x2, t1, t2) ≡ fX(x1, x2, t1 + c, t2 + c)
For c = −t2:
fX(x1, x2, t1, t2) ≡ fX(x1, x2, t1 − t2) (3)
That is, the second-order density function of a strict-sense stationary process depends only on the difference of the time indices, t1 − t2 = τ. In that case the autocorrelation function is given by

E{X(t1)X*(t2)} = ∬ x1 x2* fX(x1, x2, τ = t1 − t2) dx1 dx2 = RXX(t1 − t2) = RXX(τ) = RXX*(−τ),

i.e., the autocorrelation E{X(t1)X*(t2)} of a second-order SSS process depends only on τ.
The conditions for first- and second-order SSS are given in equations (2) and (3); these are usually hard to verify. For this reason we resort to wide-sense stationarity (WSS).
WSS:

1. E{X(t)} = µ

2. E{X(t1)X*(t2)} = RXX(|t2 − t1|)

SSS always implies WSS, but the converse is not true in general; Gaussian processes are the notable exception.
This follows since, if X(t) is a Gaussian process, then by definition

X1 = X(t1), X2 = X(t2), · · · , Xn = X(tn)

are jointly Gaussian random variables for any t1, t2, · · · , tn, whose joint characteristic function is given by
φX(ω1, ω2, · · · , ωn) = exp( j Σ_{k=1}^{n} µ(tk)ωk − (1/2) Σ_{l=1}^{n} Σ_{k=1}^{n} CXX(tl, tk) ωl ωk )

where CXX(tl, tk) is an element of the covariance matrix. For a WSS process CXX(tl, tk) = CXX(tl − tk), so

φX(ω1, ω2, · · · , ωn) = exp( j Σ_{k=1}^{n} µωk − (1/2) Σ_{l=1}^{n} Σ_{k=1}^{n} CXX(tl − tk) ωl ωk )   (4)
and hence, if the set of time indices is shifted by a constant c to generate a new set of jointly Gaussian random variables X′1 = X(t1 + c), X′2 = X(t2 + c), · · · , X′n = X(tn + c), then their joint characteristic function is identical to (4). Thus the sets of random variables {Xi}_{i=1}^{n} and {X′i}_{i=1}^{n} have the same joint PDF for all n and all c, establishing the strict-sense stationarity of a Gaussian process from its wide-sense stationarity.

For Gaussian processes: WSS ⇒ SSS
Why GPs?
1. Easy prior assumption
2. Regression
3. Classification
4. Optimization
5. Unimodal
N-dim Gaussian

fX(x) = exp(−0.5 (x − µ)^H C^{−1} (x − µ)) / √((2π)^N |C|)
N-dim Gaussian
C  = [0.5 -0.15; -0.15 0.2];           % covariance matrix
mu = [1 -1];                           % mean vector
[X1, X2] = meshgrid(linspace(-2,2,100), linspace(-2,2,100));
X  = [X1(:) X2(:)];                    % grid points as rows
p  = mvnpdf(X, mu, C);                 % evaluate the density on the grid
surf(X1, X2, reshape(p,100,100));      % surface plot (Figure 1)
figure;                                % new window so the surface survives
contour(X1, X2, reshape(p,100,100));   % contour plot (Figure 2)
Figure 1: Multivariate Gaussian PDF (surface plot; axes X1, X2, PDF).
Figure 2: Multivariate Gaussian PDF (contour plot; axes X1, X2).
C = V D V^H,   V = [V1 V2]

C = [0.5 −0.15; −0.15 0.2]

V = [0.31625 −0.9487; −0.9487 0.3162]

D = [0.05 0; 0 0.55]
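The decomposition can be reproduced in MATLAB as a quick sketch (note eig may order the eigenvalues differently, and fix the eigenvector signs differently, than listed above):

C = [0.5 -0.15; -0.15 0.2];
[V, D] = eig(C);            % C = V*D*V' for a symmetric C
disp(V); disp(diag(D).')    % principal axes and the variances along them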
Figure 3: Multivariate Gaussian PDF: contour plot with the principal axes σ1²V1 and σ2²V2 marked (axes X1, X2).
For LTI systems:

Y(t) = ∫_{−∞}^{+∞} h(t − τ)X(τ) dτ = ∫_{−∞}^{+∞} h(τ)X(t − τ) dτ

X(t) = ∫_{−∞}^{+∞} X(τ)δ(t − τ) dτ
Y(t) = L{X(t)} = L{ ∫_{−∞}^{+∞} X(τ)δ(t − τ) dτ }
= ∫_{−∞}^{+∞} L{X(τ)δ(t − τ)} dτ
= ∫_{−∞}^{+∞} X(τ) L{δ(t − τ)} dτ
= ∫_{−∞}^{+∞} X(τ)h(t − τ) dτ = ∫_{−∞}^{+∞} h(τ)X(t − τ) dτ
Output statistics:

µY(t) = E{Y(t)} = ∫_{−∞}^{+∞} E{X(τ)} h(t − τ) dτ = ∫_{−∞}^{+∞} µX(τ)h(t − τ) dτ = µX(t) ∗ h(t).
RXY(t1, t2) = E{X(t1)Y*(t2)}
= E{ X(t1) ∫_{−∞}^{+∞} X*(t2 − α)h*(α) dα }
= ∫_{−∞}^{+∞} E{X(t1)X*(t2 − α)} h*(α) dα
= ∫_{−∞}^{+∞} RXX(t1, t2 − α)h*(α) dα
= RXX(t1, t2) ∗ h*(t2)
RYY(t1, t2) = E{Y(t1)Y*(t2)}
= E{ ∫_{−∞}^{+∞} X(t1 − β)h(β) dβ · Y*(t2) }
= ∫_{−∞}^{+∞} E{X(t1 − β)Y*(t2)} h(β) dβ
= ∫_{−∞}^{+∞} RXY(t1 − β, t2)h(β) dβ
= RXY(t1, t2) ∗ h(t1),
RYY(t1, t2) = RXX(t1, t2) ∗ h*(t2) ∗ h(t1).
If X(t) is WSS:

µY(t) = µX ∫_{−∞}^{+∞} h(τ) dτ = µX c, a constant.

RXY(t1, t2) = ∫_{−∞}^{+∞} RXX(t1 − t2 + α)h*(α) dα = RXX(τ) ∗ h*(−τ) = RXY(τ), τ = t1 − t2
RYY(t1, t2) = ∫_{−∞}^{+∞} RXY(t1 − β − t2)h(β) dβ = RXY(τ) ∗ h(τ) = RYY(τ), τ = t1 − t2

RYY(τ) = RXX(τ) ∗ h*(−τ) ∗ h(τ).
Example: If X(t) is WSS and

z = ∫_{−T}^{T} X(t) dt,

then z can be viewed through Y(t) = X(t) ∗ h(t) with h(t) = u(t + T) − u(t − T), so that

RYY(τ) = RXX(τ) ∗ h(τ) ∗ h(−τ)

h(τ) ∗ h(−τ) = 2T − |τ| for |τ| ≤ 2T, and 0 otherwise.
Memoryless system Y(t) = g{X(t)}:

SSS input ⇒ SSS output.

WSS input ⇒ nothing can be said about the stationarity of the output, in any sense.

Gaussian input ⇒ output not Gaussian (in general).
If X(t) is a zero-mean stationary Gaussian process, and Y(t) = g{X(t)}, where g(·) represents a nonlinear memoryless device, then

RXY(τ) = ηRXX(τ), η = E{g′(X)}.

Proof:

RXY(τ) = E{X(t)Y(t − τ)} = E[X(t)g{X(t − τ)}] = ∬ x1 g(x2) fX1X2(x1, x2) dx1 dx2,   (5)
where X1 = X(t) and X2 = X(t − τ) are jointly Gaussian RVs with

fX1X2(x1, x2) = (1/(2π√|A|)) e^{−x*A^{−1}x/2}

X = (X1, X2)^T, x = (x1, x2)^T

A = E{XX*} = [RXX(0) RXX(τ); RXX(τ) RXX(0)] = LL*
where L is an upper triangular factor matrix with positive diagonal entries,

L = [ℓ11 ℓ12; 0 ℓ22].

Consider the transformation

Z = L^{−1}X = (Z1, Z2)^T, z = L^{−1}x = (z1, z2)^T

so that

E{ZZ*} = L^{−1}E{XX*}L^{*−1} = L^{−1}AL^{*−1} = I
and hence Z1, Z2 are zero-mean independent Gaussian random variables. Also

x = Lz ⇒ x1 = ℓ11z1 + ℓ12z2, x2 = ℓ22z2

and hence

x*A^{−1}x = z*L*A^{−1}Lz = z*z = z1² + z2²

The Jacobian of the transformation is

|J| = |L^{−1}| = |A|^{−1/2}.
RXY(τ) = ∬ (ℓ11z1 + ℓ12z2) g(ℓ22z2) (1/|J|) (1/(2π|A|^{1/2})) e^{−z1²/2} e^{−z2²/2} dz1 dz2
= ℓ11 ∬ z1 g(ℓ22z2) fz1(z1)fz2(z2) dz1 dz2 + ℓ12 ∬ z2 g(ℓ22z2) fz1(z1)fz2(z2) dz1 dz2
= ℓ11 ∫_{−∞}^{+∞} z1 fz1(z1) dz1 ∫_{−∞}^{+∞} g(ℓ22z2) fz2(z2) dz2
+ ℓ12 ∫_{−∞}^{+∞} z2 g(ℓ22z2) fz2(z2) dz2, where fz2(z2) = (1/√2π) e^{−z2²/2}.

The ℓ11 term drops out since ∫ z1 fz1(z1) dz1 = 0. With the substitution u = ℓ22z2 this becomes

= (ℓ12/ℓ22²) ∫_{−∞}^{+∞} u g(u) (1/√2π) e^{−u²/(2ℓ22²)} du
This gives

RXY(τ) = ℓ12ℓ22 ∫_{−∞}^{+∞} g(u) (u/ℓ22²) (1/√(2πℓ22²)) e^{−u²/(2ℓ22²)} du = −RXX(τ) ∫_{−∞}^{+∞} g(u) fu′(u) du,

where fu(u) = (1/√(2πℓ22²)) e^{−u²/(2ℓ22²)} and (u/ℓ22²) fu(u) = −dfu(u)/du = −fu′(u), and A = LL* gives ℓ12ℓ22 = RXX(τ). Hence,
RXY(τ) = RXX(τ){ −g(u)fu(u)|_{−∞}^{+∞} + ∫_{−∞}^{+∞} g′(u)fu(u) du } = RXX(τ)E{g′(X)} = ηRXX(τ),

which is the desired result, with η = E{g′(X)}. Thus if the input to a memoryless device is stationary Gaussian, the cross-correlation function between the input and the output is proportional to the input autocorrelation function.
LTI system h(t):

WSS input ⇒ WSS output.

SSS input ⇒ SSS output.

Gaussian input ⇒ Gaussian output.
White noise: W(t) is said to be a white noise process if

RWW(t1, t2) = q(t1)δ(t2 − t1)

W(t) is said to be WSS white noise if E{W(t)} = constant and

RWW(t1, t2) = qδ(t1 − t2) = qδ(τ).
If W(t) is also a Gaussian process (white Gaussian process), then all of its samples are independent random variables. (Why?)
LTI system h(t): white noise W(t) at the input produces colored noise N(t) = h(t) ∗ W(t) at the output.
E[N(t)] = µW ∫_{−∞}^{+∞} h(τ) dτ, a constant

RNN(τ) = qδ(τ) ∗ h*(−τ) ∗ h(τ) = qh*(−τ) ∗ h(τ) = qρ(τ)

ρ(τ) = h(τ) ∗ h*(−τ) = ∫_{−∞}^{+∞} h(α)h*(α + τ) dα.
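A brief empirical sketch of RNN(τ) = qρ(τ) in discrete time, for a hypothetical 3-tap FIR filter (the taps and sample size are illustrative):

rng(0);
q = 1;                                  % white-noise level: R_WW = q*delta
h = [1 0.6 0.2];                        % hypothetical FIR impulse response
w = sqrt(q)*randn(2e5, 1);              % white Gaussian input
nc = filter(h, 1, w);                   % colored noise N = h * W
[rnn, lags] = xcorr(nc, 4, 'unbiased'); % empirical R_NN at lags -4..4
rho = conv(h, flip(h));                 % h(tau) * h(-tau), lags -2..2
disp([lags(:).'; rnn(:).'])             % middle five values should match
disp(q*rho)                             % theoretical q*rho(tau)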
Thus passing a white noise process through an LTI system produces a (colored) noise process.

Note: white noise need not be Gaussian. “White” and “Gaussian” are two different concepts!
Figure 4: A realization X(t) versus t, with upcrossings (♦) and downcrossings (□) of the zero level marked.
Upcrossings and downcrossings of a stationary Gaussian process: Consider a zero-mean stationary Gaussian process X(t) with autocorrelation function RXX(τ). An upcrossing over the mean value occurs whenever the realization X(t) passes through zero with positive slope. Let ρ∆t represent the probability of such an upcrossing in the interval (t, t + ∆t). We wish to determine ρ.
Since X(t) is a stationary Gaussian process, its derivative process X′(t) is also zero-mean stationary Gaussian. From

RXX(τ) = ∫_{−∞}^{∞} SXX(ω) e^{jωτ} dω/2π

RXX″(τ) = ∫_{−∞}^{∞} (jω)² SXX(ω) e^{jωτ} dω/2π

SX′X′(ω) = |jω|² SXX(ω) ⇒ RX′X′(τ) = −RXX″(τ)
Further, X(t) and X′(t) are jointly Gaussian stationary processes, and since

RXX′(τ) = −dRXX(τ)/dτ,

RXX′(−τ) = −dRXX(−τ)/d(−τ) = dRXX(τ)/dτ = −RXX′(τ)

which for τ = 0 gives

RXX′(0) = 0 ⇒ E[X(t)X′(t)] = 0
Thus the jointly Gaussian zero-mean random variables

X1 = X(t), X2 = X′(t)

are uncorrelated, hence independent, with variances

σ1² = RXX(0) and σ2² = RX′X′(0) = −RXX″(0) > 0,

so that

fX1X2(x1, x2) = fX1(x1)fX2(x2) = (1/(2πσ1σ2)) exp( −( x1²/(2σ1²) + x2²/(2σ2²) ) ).

To determine the upcrossing rate ρ,
we argue as follows: in an interval (t, t + ∆t) the realization moves from X(t) = X1 to X(t + ∆t) = X(t) + X′(t)∆t = X1 + X2∆t, and hence the realization intersects the zero level somewhere in that interval if

X1 < 0, X2 > 0, and X(t + ∆t) = X1 + X2∆t > 0,

i.e., −X2∆t < X1 < 0. Hence the probability of an upcrossing in (t, t + ∆t) is given by
ρ∆t = ∫_{x2=0}^{∞} ∫_{x1=−x2∆t}^{0} fX1X2(x1, x2) dx1 dx2 = ∫_{0}^{∞} fX2(x2) dx2 ∫_{−x2∆t}^{0} fX1(x1) dx1.   (6)

Differentiating both sides of (6) with respect to ∆t we get

ρ = ∫_{0}^{∞} fX2(x2) x2 fX1(−x2∆t) dx2

and letting ∆t → 0, we have
ρ = ∫_{0}^{∞} x2 fX2(x2) fX1(0) dx2 = (1/√(2πRXX(0))) ∫_{0}^{∞} x2 fX2(x2) dx2
= (1/√(2πRXX(0))) · (1/2)(σ2 √(2/π)) = (1/2π) √(−RXX″(0)/RXX(0))

There is an equal probability of downcrossings, and hence the total probability of crossing the zero line in an interval (t, t + ∆t) equals ρ0∆t, where
ρ0 = (1/π) √(−RXX″(0)/RXX(0)) > 0.
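A Monte Carlo sketch of the upcrossing-rate formula, using a process synthesized from many random-phase sinusoids (approximately Gaussian by the central limit theorem; all constants are illustrative). For this synthesis RXX(0) = 1 and −RXX″(0) equals the average of the squared component frequencies.

rng(1);
K  = 500;                                    % number of sinusoidal components
w  = linspace(0.5, 5, K);                    % angular frequencies, flat band
ph = 2*pi*rand(K, 1);                        % independent uniform phases
t  = 0:1e-3:200;                             % fine time grid
X  = sqrt(2/K) * sum(cos(w(:)*t + ph), 1);   % R_XX(0) = 1 by construction
rho_theory = sqrt(mean(w.^2)) / (2*pi);      % (1/2pi)sqrt(-R''(0)/R(0))
ups = sum(X(1:end-1) < 0 & X(2:end) >= 0);   % count upcrossings of zero
rho_empirical = ups / (t(end) - t(1));
fprintf('theory %.3f  empirical %.3f upcrossings per unit time\n', ...
        rho_theory, rho_empirical);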
A discrete-time stochastic process Xn = X(nT) is a sequence of random variables. The mean, autocorrelation and autocovariance functions of a discrete-time process are given by

µn = E{X(nT)}

R(n1, n2) = E{X(n1T)X*(n2T)}

C(n1, n2) = R(n1, n2) − µn1 µn2*
WSS:

E{X(nT)} = µ, a constant

E[X{(k + n)T} X*{kT}] = R(n) = rn = r−n*
The positive-definite property of the autocorrelation sequence can be expressed in terms of certain Hermitian-Toeplitz matrices as follows:

Theorem: A sequence {rn}_{−∞}^{∞} forms an autocorrelation sequence of a wide-sense stationary stochastic process if and only if every Hermitian-Toeplitz matrix Tn given by
Tn = [ r0 r1 r2 · · · rn ; r1* r0 r1 · · · rn−1 ; · · · ; rn* rn−1* · · · r1* r0 ] = Tn*

is non-negative (positive) definite for every n = 0, 1, 2, · · ·
a*Tna = Σ_{i=0}^{n} Σ_{k=0}^{n} ai ak* rk−i, for every vector a;

a*Tna = Σ_{i=0}^{n} Σ_{k=0}^{n} ai ak* E{X(kT)X*(iT)} = E{ | Σ_{k=0}^{n} ak* X(kT) |² } ≥ 0.
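A numerical sketch of the theorem using the AR(1) autocorrelation rn = a^{|n|}/(1 − a²) derived later in these notes (the values of a and n are illustrative):

a = 0.8;
n = 6;                                 % check T_6, a 7x7 matrix
r = a.^(0:n) / (1 - a^2);              % r_0 ... r_n (real, so r_{-n} = r_n)
T = toeplitz(r);                       % Hermitian-Toeplitz matrix T_n
disp(min(eig(T)))                      % smallest eigenvalue is non-negative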
Systems and discrete WSS processes:

RXY(n) = RXX(n) ∗ h*(−n)

RYY(n) = RXY(n) ∗ h(n)

RYY(n) = RXX(n) ∗ h*(−n) ∗ h(n).
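A quick sketch of the first relation for white input, where RXY(n) should reduce to the time-reversed impulse response h*(−n) (the filter taps are illustrative):

rng(0);
h = [1 0.5 0.25];                      % hypothetical FIR impulse response
x = randn(2e5, 1);                     % white input: R_XX(n) = delta(n)
y = filter(h, 1, x);
[rxy, lags] = xcorr(x, y, 4, 'unbiased');
disp([lags(:).'; rxy(:).'])            % nonzero at lags 0, -1, -2: h(-n)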
ARMA processes: input W(n), output X(n):

X(n) = −Σ_{k=1}^{p} ak X(n − k) + Σ_{k=0}^{q} bk W(n − k),

X(z) Σ_{k=0}^{p} ak z^{−k} = W(z) Σ_{k=0}^{q} bk z^{−k}, a0 ≡ 1
H(z) = Σ_{k=0}^{∞} h(k)z^{−k} = X(z)/W(z) = (b0 + b1z^{−1} + b2z^{−2} + · · · + bq z^{−q}) / (1 + a1z^{−1} + a2z^{−2} + · · · + ap z^{−p}) = B(z)/A(z)

X(n) = Σ_{k=0}^{∞} h(n − k)W(k)
The transfer function H(z) is rational, with p poles and q zeros that determine the model order of the underlying system. X(n) undergoes regression over p of its previous values, and at the same time a moving average of the input values W(n), W(n − 1), · · · , W(n − q) over (q + 1) terms is added to it, thus generating an Auto-Regressive Moving Average (ARMA(p, q)) process X(n).
Generally the input {W(n)} represents a sequence of uncorrelated random variables with zero mean and constant variance σW², so that

RWW(n) = σW² δ(n).

If in addition {W(n)} is normally distributed, then the output {X(n)} also represents a strict-sense stationary normal process. If q = 0, we have an AR(p) process (an all-pole process); if p = 0, an MA(q) process (an all-zero process).
AR(1) process: An AR(1) process has the form

X(n) = aX(n − 1) + W(n)

H(z) = 1/(1 − az^{−1}) = Σ_{n=0}^{∞} a^n z^{−n}

h(n) = a^n, |a| < 1
RXX(n) = σW² δ(n) ∗ {a^{−n}} ∗ {a^n} = σW² Σ_{k=0}^{∞} a^{|n|+k} a^k = σW² a^{|n|} / (1 − a²)
The normalized autocorrelation sequence:

ρX(n) = RXX(n)/RXX(0) = a^{|n|}, |n| ≥ 0.   (7)
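A short simulation sketch of (7) using filter() (the value of a and the sample size are illustrative):

rng(2);
a = 0.7;  N = 1e5;
w = randn(N, 1);                           % unit-variance white input
x = filter(1, [1 -a], w);                  % X(n) = a X(n-1) + W(n)
[r, lags] = xcorr(x, 6, 'unbiased');
disp([lags(:).'; (r(:).')/r(lags == 0)])   % empirical rho_X(n)
disp(a.^abs(lags))                         % theoretical a^|n|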
It is instructive to compare the AR(1) model above with the same process observed through a superimposed random component, which may be an error term associated with observing a first-order AR process X(n). Thus

Y(n) = X(n) + V(n), X(n) ∼ AR(1), V(n) ∼ white noise   (8)
RYY(n) = RXX(n) + RVV(n) = RXX(n) + σV² δ(n) = σW² a^{|n|}/(1 − a²) + σV² δ(n)
ρY(n) = RYY(n)/RYY(0) = 1 for n = 0, and ρY(n) = c a^{|n|} for n = ±1, ±2, · · ·   (9)

c = σW² / (σW² + σV²(1 − a²)) < 1.
Eqs. (7) and (9) demonstrate the effect of superimposing an error sequence on an AR(1) model: for non-zero lags, the autocorrelation of the observed sequence {Y(n)} is reduced by a constant factor compared to the original process {X(n)}. From (8), the superimposed error sequence V(n) only affects the corresponding term in Y(n) (term by term). However, a particular term in the “input sequence” W(n) affects X(n) and Y(n) as well as all subsequent observations.
AR(2) process: An AR(2) process has the form

X(n) = a1X(n − 1) + a2X(n − 2) + W(n)

where W(n) is a zero-mean white process, and

H(z) = Σ_{n=0}^{∞} h(n)z^{−n} = 1/(1 − a1z^{−1} − a2z^{−2}) = b1/(1 − λ1z^{−1}) + b2/(1 − λ2z^{−1})

h(0) = 1, h(1) = a1, h(n) = a1h(n − 1) + a2h(n − 2), n ≥ 2
h(n) = b1λ1^n + b2λ2^n, n ≥ 0

b1 + b2 = 1, b1λ1 + b2λ2 = a1,

λ1 + λ2 = a1, λ1λ2 = −a2.

Stability condition: |λ1| < 1, |λ2| < 1.
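A short sketch tying the coefficients to the modes λ1, λ2 (the coefficients a1, a2 are hypothetical):

a1 = 1.2;  a2 = -0.5;                        % hypothetical AR(2) coefficients
lam = roots([1 -a1 -a2]);                    % poles lambda_1, lambda_2
disp(abs(lam))                               % both moduli < 1: stable
h = filter(1, [1 -a1 -a2], [1 zeros(1, 9)]); % impulse response h(0..9)
disp(h(1:4))                                 % h(0)=1, h(1)=a1, then recursion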
RXX(n) = E{X(n + m)X*(m)}
= E{[a1X(n + m − 1) + a2X(n + m − 2)]X*(m)} + E{W(n + m)X*(m)}
= a1RXX(n − 1) + a2RXX(n − 2),

because W(n + m) and X(m) are uncorrelated for n ≥ 1, so that E{W(n + m)X*(m)} = 0. Hence

ρX(n) = RXX(n)/RXX(0) = a1ρX(n − 1) + a2ρX(n − 2).
RXX(n) = RWW(n) ∗ h*(−n) ∗ h(n) = σW² h*(−n) ∗ h(n) = σW² Σ_{k=0}^{∞} h*(n + k)h(k)

= σW² [ |b1|²(λ1*)^n/(1 − |λ1|²) + b1*b2(λ1*)^n/(1 − λ1*λ2) + b1b2*(λ2*)^n/(1 − λ1λ2*) + |b2|²(λ2*)^n/(1 − |λ2|²) ]
The normalized autocorrelation function:

ρX(n) = RXX(n)/RXX(0) = c1(λ1*)^n + c2(λ2*)^n

where c1 and c2 are the appropriate constants.
An ARMA(p, q) system has only p + q + 1 independent coefficients, and hence its impulse response sequence {hk} must exhibit a similar dependence among its terms. An old result due to Kronecker (1881) states that the necessary and sufficient condition for H(z) = Σ_{k=0}^{∞} h(k)z^{−k} to represent a rational system (ARMA) is that
det Hn = 0, n ≥ N (i.e., for all sufficiently large n),

where

Hn = [ h0 h1 h2 · · · hn ; h1 h2 h3 · · · hn+1 ; · · · ; hn hn+1 hn+2 · · · h2n ].   (10)
In the case of rational systems, for all sufficiently large n the Hankel matrices Hn in (10) all have the same rank.
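A numerical sketch of this rank condition for a hypothetical ARMA(2,1) system (the coefficients are illustrative; rank() is used instead of exact determinants to sidestep round-off):

b = [1 0.4];  a = [1 -1.2 0.5];            % q = 1 zero, p = 2 poles
h = filter(b, a, [1 zeros(1, 20)]);        % impulse response h(0..20)
for n = 2:5
    Hn = hankel(h(1:n+1), h(n+1:2*n+1));   % (n+1)x(n+1) Hankel matrix
    fprintf('n = %d  rank(Hn) = %d\n', n, rank(Hn));   % stays at p = 2
end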
The necessity part easily follows from

Σ_{k=0}^{∞} h(k)z^{−k} = ( Σ_{k=0}^{q} bk z^{−k} ) / ( Σ_{k=0}^{p} ak z^{−k} )

by cross-multiplying and equating coefficients of like powers of z^{−n} for n = 0, 1, 2, · · · . We have
b0 = h0
b1 = h0a1 + h1
· · ·
bq = h0aq + h1aq−1 + · · · + hq
0 = h0aq+i + h1aq+i−1 + · · · + hq+i−1a1 + hq+i, i ≥ 1.

For systems with q ≤ p − 1, letting i = p − q, p − q + 1, · · · , 2p − q in the last equation we get
h0ap + h1ap−1 + · · · + hp−1a1 + hp = 0
· · ·
hpap + hp+1ap−1 + · · · + h2p−1a1 + h2p = 0

which gives det Hp = 0, and similarly for i = p − q + 1, · · · :
h0ap+1 + h1ap + · · · + hp+1 = 0
h1ap+1 + h2ap + · · · + hp+2 = 0
· · ·
hp+1ap+1 + hp+2ap + · · · + h2p+2 = 0.   (†)
In the ARMA(p, q) model, the input white noise process W(n) is uncorrelated with its own past sample values as well as with the past values of the system output. This gives

E{W(n)W*(n − k)} = 0, k ≥ 1

E{W(n)X*(n − k)} = 0, k ≥ 1.
ri = E{x(n)x*(n − i)}
= −Σ_{k=1}^{p} ak E{x(n − k)x*(n − i)} + Σ_{k=0}^{q} bk E{w(n − k)x*(n − i)}
= −Σ_{k=1}^{p} ak ri−k + Σ_{k=0}^{q} bk E{w(n − k)x*(n − i)}
Consequently,

Σ_{k=1}^{p} ak ri−k + ri ≠ 0, i ≤ q

Σ_{k=1}^{p} ak ri−k + ri = 0, i ≥ q + 1.   (‡)

We note that equation (‡) is the same as (†) with {hk} replaced by {rk}. Hence the Kronecker conditions for rational systems can be expressed in terms of the output autocorrelations as well.

If X(n) ∼ ARMA(p, q) represents a wide-sense stationary stochastic process, then its output autocorrelation sequence {rk} satisfies

rank Dp−1 = rank Dp+k = p, k ≥ 0,

where

Dk = [ r0 r1 r2 · · · rk ; r1 r2 r3 · · · rk+1 ; · · · ; rk rk+1 rk+2 · · · r2k ]

represents the (k + 1) × (k + 1) Hankel matrix generated from r0, r1, · · · , r2k. It follows that for ARMA(p, q) systems we have det Dn = 0 for all sufficiently large n.

Cyclostationarity: In the deterministic case, a signal is called periodic if it repeats after a period of time. In this section we look at the statistical equivalent of periodicity. A random process X(t) is said to be nth-order cyclostationary if the nth-order PDF is periodic:

fX(x; t) = fX(x; t + To^{(n)} 1), To^{(n)} ∈ R

where 1 denotes the all-ones vector, i.e., every time argument is shifted by the same amount. The periodicity of the statistics, To^{(n)}, is often referred to as the cyclostationarity parameter. As in the case of stationarity, n = 1 cyclostationarity implies that

µX(t) = µX(t + To^{(1)})

σX(t) = σX(t + To^{(1)})

Second-order cyclostationarity specifically implies that

RXX(t1, t2) = RXX(t1 + To^{(2)}, t2 + To^{(2)})

Since the ACF of the process depends on both t1 and t2, these processes are non-stationary. Furthermore, the ensemble ACF is periodic, and as a consequence it has a Fourier series expansion of the form

RXX(t, τ) = E{X(t)X*(t − τ)} = Σ_{n=−∞}^{∞} R̃XX^{(n)}(τ) exp(j2πnt/To^{(2)})

The Fourier series coefficients R̃XX^{(n)}(τ) are referred to as the cyclic autocorrelation function. The DC Fourier coefficient in particular is called the time-averaged ACF and is determined via

R̃XX^{(0)}(τ) = (1/To^{(2)}) ∫_{−To^{(2)}/2}^{To^{(2)}/2} RXX(t, t − τ) dt

The corresponding time-averaged power spectrum is defined via the Fourier transform

PXX^{(0)}(Ω) = ∫_{−∞}^{∞} R̃XX^{(0)}(τ) exp(−jΩτ) dτ

Using the cyclic autocorrelation function we can define a corresponding cyclic power spectrum PXX^{(n)}(Ω) via

PXX^{(n)}(Ω) = ∫_{−∞}^{∞} R̃XX^{(n)}(τ) exp(−jΩτ) dτ

R̃XX^{(n)}(τ) = (1/2π) ∫_{−∞}^{∞} PXX^{(n)}(Ω) exp(jΩτ) dΩ

The motivation behind the definition of the time-averaged ACF is to average out the dependence of the ACF on the t variable, so that we do not have to deal with two-dimensional Fourier transforms.
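A small simulation sketch of second-order cyclostationarity (the modulating frequency and ensemble size are illustrative): the modulated white sequence X(n) = W(n)cos(ω0n) has variance E{X²(n)} = cos²(ω0n), periodic in n with period π/ω0.

rng(3);
w0 = 2*pi/16;  N = 64;  M = 5000;      % illustrative constants
n  = 0:N-1;
W  = randn(M, N);                      % M white Gaussian rows, unit variance
X  = W .* cos(w0*n);                   % one modulated realization per row
varX = mean(X.^2, 1);                  % ensemble estimate of E{X^2(n)}
plot(n, varX, 'o', n, cos(w0*n).^2, '-');
legend('estimated E\{X^2(n)\}', 'cos^2(\omega_0 n)');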