Leif Mejlbro
Stochastic Processes 1
Probability Examples c­8
Probability Examples c-8 – Stochastic Processes 1
© 2009 Leif Mejlbro & Ventus Publishing ApS
ISBN 978-87-7681-524-0
Contents
Introduction
1 Stochastic processes; theoretical background
1.1 General about stochastic processes
1.2 Random walk
1.3 The ruin problem
1.4 Markov chains
2 Random walk
3 Markov chains
Index
Introduction
This is the eighth book of examples from the Theory of Probability. The topic Stochastic Processes
is so huge that I have chosen to split the material into two books. In the present first book we shall
deal with examples of Random Walk and Markov chains, where the latter topic is very large. In the
next book we give examples of Poisson processes, birth and death processes, queueing theory and other
types of stochastic processes.
The prerequisites for the topics can e.g. be found in the Ventus: Calculus 2 series and the Ventus:
Complex Function Theory series, and all the previous Ventus: Probability c1-c7.
Unfortunately errors cannot be avoided in a first edition of a work of this type. However, the author has tried to keep them to a minimum and hopes that the reader will bear with the errors which do occur in the text.
Leif Mejlbro
27th October 2009
1 Stochastic processes; theoretical background
1.1 General about stochastic processes
A stochastic process is a family {X(t) | t ∈ T} of random variables X(t), all defined on the same
sample space Ω, where the domain T of the parameter is a subset of R (usually N, N0, Z, [0, +∞[ or
R itself), and where the parameter t ∈ T is interpreted as the time.
We note that for every fixed ω in the sample space Ω we in this way obtain a so-called sample function
X(·, ω) : T → R on the domain T of the parameter.
In the description of such a stochastic process we must know the distribution function of the stochastic
process, i.e.
P {X (t1) ≤ x1 ∧ X (t2) ≤ x2 ∧ · · · ∧ X (tn) ≤ xn}
for every t1, . . . , tn ∈ T, and every x1, . . . , xn ∈ R, for every n ∈ N.
This is of course not always possible, so one tries instead to find less complicated expressions connected
with the stochastic process, like e.g. means, which to some extent can be used to characterize the
distribution.
A very important special case occurs when the random variables X(t) are all discrete with values in N0.
If in this case X(t) = k, then we say that the process at time t is in state Ek. This can now be further
specialized.
A Markov process is a discrete stochastic process with values in N0, for which also
P {X (tn+1) = kn+1 | X (tn) = kn ∧ · · · ∧ X (t1) = k1} = P {X (tn+1) = kn+1 | X (tn) = kn}
for any k1,. . . , kn+1 in the range, for any t1 < t2 < · · · < tn+1 from T, and for any n ∈ N.
We say that when a Markov process is going to be described at time tn+1, then we have just as much
information, if we know the process at time tn, as if we even know the process at the times t1, . . . ,
tn, provided that these times are all smaller than tn+1. One may coin this in the following way: If
the present is given, then the future is independent of the past.
1.2 Random walk
Consider a sequence (Xk) of mutually independent identically distributed random variables, where
the distribution is given by
P {Xk = 1} = p and P {Xk = −1} = q, where p, q > 0, p + q = 1, and k ∈ N.
We define another sequence of random variables (Sn) by
S0 = 0   and   Sn = S0 + \sum_{k=1}^{n} Xk,   for n ∈ N.
In this special construction the new sequence (Sn)_{n=0}^{+∞} is called a random walk. In the special case of p = q = 1/2, we call it a symmetric random walk.
An outcome of X1, X2, . . . , Xn is a sequence x1, x2, . . . , xn, where each xk is either 1 or −1.
A random walk may be interpreted in several ways, of which we give the following two:
1) A person walks on a road, where he per time unit with probability p takes one step to the right
and with probability q takes one step to the left. At time 0 the person is at state E0. His position
at time n is given by the random variable Sn. If, in particular, p = q = 1/2, this process is also called
the “drunkard’s walk”.
2) Two persons, Peter and Paul, are playing a series of games. In one particular game, Peter wins
with probability p, and Paul wins with probability q. After each game the winner receives 1 $
from the loser. We assume at time 0 that they both have won 0 $. Then the random variable Sn
describes Peter’s gain (positive or negative) after n games, i.e. at time n.
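As a small numerical illustration (not part of the original text), the following Python sketch simulates the sequence (Sn); the function name random_walk and the chosen parameters are my own.

import random

def random_walk(n, p=0.5):
    """Return S_0, S_1, ..., S_n for steps +1 (probability p) and -1 (probability 1 - p)."""
    s, path = 0, [0]
    for _ in range(n):
        s += 1 if random.random() < p else -1
        path.append(s)
    return path

# Peter's gain after 100 fair games (the symmetric "drunkard's walk").
print(random_walk(100)[-1])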
We mention
Theorem 1.1 (The ballot theorem). At an election a candidate A obtains in total a votes, while
another candidate B obtains b votes, where b < a. The probability that A is leading during the whole
of the counting is equal to
(a − b) / (a + b).
Let Peter and Paul be the two gamblers mentioned above. Assuming that Peter at time 0 has 0 $, the probability that Peter at some (later) time possesses the sum of 1 $ is given by
α = min{ 1, p/q },
hence the probability that Peter at some (later) time possesses the sum of N $, where N > 0, is given by
αN = min{ 1, (p/q)^N }.
The corresponding probability that Paul at some time possesses the sum of 1 $ is
β = min{ 1, q/p },
and the probability that he at some later time possesses a positive sum of N $ is
βN = min{ 1, (q/p)^N }.
Based on this analysis we introduce
pn := P{return to the initial position at time n}, n ∈ N,
fn := P{the first return to the initial position at time n}, n ∈ N,
f := P{return to the initial position at some later time} = \sum_{n=1}^{+∞} fn.
Notice that pn = fn = 0, if n is an odd number.
We shall now demonstrate how the corresponding generating functions can profitably be applied in such situations. Thus we put
P(s) = \sum_{n=0}^{+∞} pn s^n   and   F(s) = \sum_{n=0}^{+∞} fn s^n,
where we have put p0 = 1 and f0 = 0. It is easily seen that the relationship between these two
generating functions is
F(s) = 1 − \frac{1}{P(s)}.
Then by the binomial series
P(s) = \frac{1}{\sqrt{1 − 4pq s^2}},
so we conclude that
F(s) = \sum_{k=1}^{+∞} \frac{1}{2k − 1} \binom{2k}{k} (pq)^k s^{2k},
which by the definition of F(s) implies that
f2k = \frac{1}{2k − 1} \binom{2k}{k} (pq)^k = \frac{p2k}{2k − 1}.
Furthermore,
f = \lim_{s→1−} F(s) = 1 − \sqrt{1 − 4pq} = 1 − |1 − 2p| =
\begin{cases} 2p, & \text{for } p < 1/2, \\ 1, & \text{for } p = 1/2, \\ 2q, & \text{for } p > 1/2. \end{cases}
In the symmetric case, where p = 1/2, we define a random variable T by
T = n, if the first return occurs at time n.
Then it follows from the above that T has the distribution
P{T = 2k} = f2k and P{T = 2k − 1} = 0, for k ∈ N.
The generating function is
F(s) = 1 − \sqrt{1 − s^2},
hence
E{T} = \lim_{s→1−} F′(s) = +∞,
which we formulate as: the expected time of return to the initial position is +∞.
1.3 The ruin problem
The initial position is almost the same as earlier. The two gamblers, Peter and Paul, play a series of
games, where Peter has the probability p of winning 1 $ from Paul, while the probability is q that
he loses 1 $ to Paul. At the beginning Peter owns k $, and Paul owns N − k $, where 0 < k < N.
The games continue, until one of them is ruined. The task here is to find the probability that Peter
is ruined.
Let ak be the probability that Peter is ruined, if he at the beginning has k $, where we allow that
k = 0, 1, . . . , N. If k = 0, then a0 = 1, and if k = N, then aN = 0. Then consider 0 < k < N, in
which case
ak = p ak+1 + q ak−1.
We rewrite this as the homogeneous, linear difference equation of second order,
p ak+1 − ak + q ak−1 = 0, k = 1, 2, . . . , N − 1.
Concerning the solution of such difference equations, the reader is referred to e.g. the Ventus: Calculus 3 series. We have two possibilities:
1) If p ≠ 1/2, then the probability that Peter is ruined, if he starts with k $, is given by
ak = \frac{(q/p)^k − (q/p)^N}{1 − (q/p)^N},   for k = 0, 1, 2, . . . , N.
2) If instead p = q = 1/2, then
ak = \frac{N − k}{N},   for k = 0, 1, 2, . . . , N.
We now change the problem to finding the expected number of games μk, which must be played before
one of the two gamblers is ruined, when Peter starts with the sum of k $. In this case,
μk = p μk+1 + q μk−1 + 1, for k = 1, 2, . . . , N − 1.
We rewrite this equation as an inhomogeneous linear difference equation of second order. With the boundary conditions μ0 = μN = 0, its solution is
1) For p ≠ 1/2 we get
μk = \frac{k}{q − p} − \frac{N}{q − p} · \frac{1 − (q/p)^k}{1 − (q/p)^N},   for k = 0, 1, 2, . . . , N.
2) For p = q = 1/2 we get instead
μk = k(N − k),   for k = 0, 1, 2, . . . , N.
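Both closed forms are easy to check on a computer. The sketch below is my own illustration (the function names and the chosen values of k, N and p are arbitrary); it evaluates ak and μk and compares the ruin probability with a crude Monte Carlo estimate.

import random

def ruin_prob(k, N, p):
    """a_k: probability that Peter, starting with k $, is ruined before owning N $."""
    q = 1.0 - p
    if abs(p - 0.5) < 1e-12:
        return (N - k) / N
    r = q / p
    return (r ** k - r ** N) / (1 - r ** N)

def expected_duration(k, N, p):
    """mu_k: expected number of games until one of the two players is ruined."""
    q = 1.0 - p
    if abs(p - 0.5) < 1e-12:
        return k * (N - k)
    r = q / p
    return k / (q - p) - (N / (q - p)) * (1 - r ** k) / (1 - r ** N)

def simulate_ruin(k, N, p, trials=20000):
    ruined = 0
    for _ in range(trials):
        x = k
        while 0 < x < N:
            x += 1 if random.random() < p else -1
        ruined += (x == 0)
    return ruined / trials

print(ruin_prob(3, 10, 0.45), simulate_ruin(3, 10, 0.45), expected_duration(3, 10, 0.45))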
In the special case where we consider a symmetric random walk, i.e. p = 1/2, we sum up the results:
Let (Xk) be a sequence of mutually independent identically distributed random variables with distribution given by
P {Xk = 1} = P {Xk = −1} = 1/2,   for k ∈ N.
In this case, the random variables S0 = 0 and S2n = S0 + \sum_{k=1}^{2n} Xk have the distribution given by
p_{2n,2r} := P {S2n = 2r} = \binom{2n}{n + r} 2^{−2n},   for r = −n, −n + 1, . . . , n,   n ∈ N.
In particular,
u2n := p_{2n,0} = \binom{2n}{n} 2^{−2n}   ( ∼ \frac{1}{\sqrt{πn}} for large n ).
Then we define a random variable T by
T = n, if the first return to E0 occurs at time n.
This random variable has the values 2, 4, 6, . . . , with the probabilities
f2n := P{T = 2n} = \frac{u2n}{2n − 1} = \frac{1}{2n − 1} \binom{2n}{n} 2^{−2n},   for n ∈ N,
where
E{T} = +∞.
For every N ∈ Z the process reaches state EN at some later time with probability 1.
Finally, if the process is in state Ek, 0 < k < N, at time 0, then the process reaches state E0 before
EN with probability 1 − k/N, and it reaches state EN before E0 with probability k/N. The
expected time for the process to reach either E0 or EN from Ek is k(N − k).
Theorem 1.2 (The arc sine law for the last visit). The probability that the process up to time 2n pays its last visit to state E0 at time 2k is given by
α_{2k,2n} = u2k · u_{2n−2k},
where we have put u2n = P {S2n = 0}.
The distribution which has the probability α_{2k,2n} at the point 2k, where 0 ≤ k ≤ n, is also called the
discrete arc sine distribution of order n. The reason for this name is the following: If β and γ are
given numbers, where 0 < β < γ < 1, then
P {last visit to E0 is between 2βn and 2γn} = \sum_{βn ≤ k ≤ γn} u2k · u_{2n−2k}
∼ \sum_{β ≤ k/n ≤ γ} \frac{1}{n} · \frac{1}{π \sqrt{\frac{k}{n}\left(1 − \frac{k}{n}\right)}}
∼ \int_β^γ \frac{1}{π \sqrt{x(1 − x)}}\, dx = \frac{2}{π} \Arcsin \sqrt{γ} − \frac{2}{π} \Arcsin \sqrt{β},
where we recognize the sum as a mean sum of the integral of the arc sine density. This implies that
P {last visit to E0 before time 2nx} ∼ \frac{2}{π} \Arcsin \sqrt{x},   for x ∈ ]0, 1[.
One interpretation of this result is that if Peter and Paul play many games, then there is a large
probability that one of them is almost all the time on the winning side, and the other one is almost
all the time on the losing side.
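The quality of the arc sine approximation can be checked directly. The following sketch is my own, using only the formulas above with arbitrarily chosen n, β and γ; it compares the exact sum of the discrete arc sine probabilities with the limit expression.

from math import comb, asin, sqrt, pi

def u(n):
    """u_{2n} = C(2n, n) 2^{-2n}."""
    return comb(2 * n, n) * 0.25 ** n

n, beta, gamma = 200, 0.2, 0.6
exact = sum(u(k) * u(n - k) for k in range(n + 1) if beta * n <= k <= gamma * n)
limit = 2 / pi * (asin(sqrt(gamma)) - asin(sqrt(beta)))
print(exact, limit)          # the two values are close for large n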
1.4 Markov chains
A Markov chain is a (discrete) stochastic process {X(t) | t ∈ N0}, which has a finite number of states,
e.g. denoted by E1, E2, . . . , Em, and such that for any 1 ≤ k0, k1, . . . , kn ≤ m and every n ∈ N,
P {X(n) = kn | X(n − 1) = kn−1 ∧ · · · ∧ X(0) = k0} = P {X(n) = kn | X(n − 1) = kn−1} .
If furthermore the Markov chain satisfies the condition that the conditional probabilities
pij := P{X(n) = j | X(n − 1) = i}
do not depend on n, we call the process a stationary Markov chain.
We shall in the following only consider stationary Markov chains, and we just write Markov chains,
tacitly assuming that they are stationary.
A Markov chain models the situation, where a particle moves between the m states E1, E2, . . . , Em,
where each move happens at discrete times t ∈ N. Then pij represents the probability that the particle
in one step moves from state Ei to state Ej. In particular, pii is the probability that the particle stays
at state Ei.
We call the pij the transition probabilities. They are usually lined up in a stochastic matrix:
P = \begin{pmatrix}
p11 & p12 & \cdots & p1m \\
p21 & p22 & \cdots & p2m \\
\vdots & \vdots & \ddots & \vdots \\
pm1 & pm2 & \cdots & pmm
\end{pmatrix}.
In this matrix the element pij in the i-th row and the j-th column represents the probability for the
transition from state Ei to state Ej.
For every stochastic matrix we obviously have
pij ≥ 0 for every i and j,   and   \sum_{j} pij = 1 for every i,
thus all row sums are 1.
The probabilities of state p_i^{(n)} are defined by
p_i^{(n)} := P{X(n) = i},   for i = 1, 2, . . . , m and n ∈ N0.
The corresponding vector of state is
p^{(n)} := ( p_1^{(n)}, p_2^{(n)}, . . . , p_m^{(n)} ),   for n ∈ N0.
In particular, the initial distribution is given by
p^{(0)} = ( p_1^{(0)}, p_2^{(0)}, . . . , p_m^{(0)} ).
Then by ordinary matrix computation (note the order of the matrices),
p^{(n)} = p^{(n−1)} P,   hence by iteration   p^{(n)} = p^{(0)} P^n,
proving that the probabilities of state at time t = n are determined solely by the initial distribution p^{(0)}
and the stochastic matrix P, iterated n times. The elements p_{ij}^{(n)} of P^n are called the transition
probabilities in n steps, and they are given by
p_{ij}^{(n)} = P{X(k + n) = j | X(k) = i}.
We define a probability vector α = (α1, α2, . . . , αm) as a vector for which
αi ≥ 0 for i = 1, 2, . . . , m,   and   \sum_{i=1}^{m} αi = 1.
A probability vector α is called invariant with respect to the stochastic matrix P, or a stationary
distribution of the Markov chain, if
α P = α.
The latter name is due to the fact that if X(k) has its distribution given by α, then every later
X(n + k), n ∈ N has also its distribution given by α.
In order to ease the computations in practice we introduce the following new concepts:
1) We say that a Markov chain is irreducible, if to any pair of indices (i, j) we can find an n = nij ∈ N,
such that p_{ij}^{(n)} > 0. This means that the state Ej can be reached from state Ei, no matter the
choice of (i, j). (However, the nij ∈ N do not all have to be identical.)
2) If we even can choose n ∈ N independently of the pair of indices (i, j), we say that the Markov
chain is regular.
Remark 1.1 Notice that stochastic regularity has nothing to do with the concept of a regular matrix
known from Linear Algebra. We must not confuse the two definitions. ♦
Theorem 1.3 Let P be an m × m regular stochastic matrix.
1) The sequence (P^n) converges towards a stochastic limit matrix G for n → +∞.
2) Every row of G is the same probability vector
g = (g1, g2, . . . , gm), where gi > 0 for every i = 1, 2, . . . , m.
3) If p is any probability vector, then
p P^n → g for n → +∞.
4) The regular matrix P has precisely one invariant probability vector, g.
The theorem shows that for a regular stochastic matrix P the limit distribution is uniquely determined
by the invariant probability vector.
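Numerically, Theorem 1.3 can be observed by raising a regular stochastic matrix to a high power; all rows then agree with the invariant vector g. The sketch below is my own illustration, using numpy and an arbitrary regular 3 × 3 matrix of my own choosing.

import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])          # an arbitrary regular stochastic matrix

G = np.linalg.matrix_power(P, 60)
print(G)                                  # all rows are (numerically) identical
g = G[0]
print(g, g @ P)                           # the common row g satisfies g P = g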
It may occur for an irreducible Markov chain that P^n diverges for n → +∞. We have instead
Theorem 1.4 Let P be an m × m irreducible stochastic matrix.
1) The sequence ( \frac{1}{n} \sum_{i=1}^{n} P^i ) converges towards a stochastic limit matrix G for n → +∞.
2) Every row of G is the same probability vector
g = (g1, g2, . . . , gm), where gi > 0 for every i = 1, 2, . . . , m.
3) Given any probability vector p, then
p ( \frac{1}{n} \sum_{i=1}^{n} P^i ) → g for n → +∞.
4) The irreducible matrix P has precisely one invariant probability vector, namely g.
Given a Markov chain of the m states E1, E2, . . . , Em with the corresponding stochastic matrix P.
A subset C of the states E1, E2, . . . , Em is called closed, if no state outside C can be reached from
any state in C. This can also be expressed in the following way: A subset C of the m states is closed,
if for every Ei ∈ C and every Ej ∉ C we have pij = 0.
If a closed set only contains one state, C = {Ei}, we call Ei an absorbing state. This is equivalent to
pii = 1, so we can immediately find the absorbing states from the numbers 1 in the diagonal of the
stochastic matrix P.
The importance of a closed set is described by the following theorem, which is fairly easy to apply in
practice.
Theorem 1.5 A Markov chain is irreducible, if and only if it does not contain any proper closed
subset of states.
A necessary condition of irreducibility of a Markov chain is given in the following theorem:
Theorem 1.6 Assume that a Markov chain of the m states E1, E2, . . . , Em is irreducible. Then to
every pair of indices (i, j), 1 ≤ i, j ≤ m, there exists an n = nij, such that 1 ≤ n ≤ m and p_{ij}^{(n)} > 0.
Concerning the proof of regularity we may use the following method, if the matrix P is not too
complicated: Compute successively the matrices P^i, until one at last (hopefully) reaches a number
i = n, where all elements of P^n are different from zero. Since we already know that all elements are
≥ 0, we can ease the computations by just writing ∗ for the elements of the matrices which are ≠ 0.
We do not have to compute their exact values, but we must be very careful with the zeros. This
method is of course somewhat laborious, and one may often apply the following theorem instead.
Theorem 1.7 If a stochastic matrix P is irreducible, and there exists a positive element in the diagonal, pii > 0, then P is regular.
It is usually easy to prove that P is irreducible. The difficult part is to prove that it is also regular.
We give here another result:
We introduce for every m × m irreducible stochastic matrix P the following numbers
di := largest common divisor of all n ∈ N, for which p_{ii}^{(n)} > 0.
It can be proved that d1 = d2 = · · · = dm =: d, so we only have to find a single one of the di. (For
convenience, always choose the di which gives the easiest computations.)
Theorem 1.8 An irreducible Markov chain is regular, if and only if d = 1.
If d > 1, then the Markov chain is periodic with period d.
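In practice both the period d and regularity are easy to determine on a computer. The sketch below is my own; the helper name and the test matrix are arbitrary. It collects the return times n with p_{ii}^{(n)} > 0 for one state, takes their greatest common divisor, and at the same time checks whether some power of P has only positive entries.

import numpy as np
from math import gcd
from functools import reduce

def period_and_regular(P, max_power=100):
    """For an irreducible stochastic matrix P, return (d, regular):
    d = gcd of the n with (P^n)_{00} > 0 (one state suffices, since all d_i agree),
    and regular = True if some P^n up to max_power has only positive entries."""
    A = np.eye(len(P))
    returns, regular = [], False
    for n in range(1, max_power + 1):
        A = A @ P
        if A[0, 0] > 0:
            returns.append(n)
        if (A > 0).all():
            regular = True
    return (reduce(gcd, returns) if returns else 0), regular

P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
print(period_and_regular(P))              # (1, True): d = 1, hence regular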
One may consider a random walk on the set {1, 2, 3, . . . , N} as a Markov chain with the transition
probabilities
pi,i−1 = q and pi,i+1 = p, for i = 2, 3, . . . , N − 1, where p, q > 0 and p + q = 1.
1) If p11 = pNN = 1, then we have a random walk with two absorbing barriers.
2) If p12 = pN,N−1 = 1, then we have a random walk with two reflecting barriers. In this case the
corresponding Markov chain is irreducible.
2 Random walk
Example 2.1 Consider a ruin problem of total capital N $, where p < 1/2.
In every game the loss/gain is only 50 cents. Is this game more advantageous for Peter than if the
stake was 1 $ (i.e. is there a smaller probability that Peter is ruined)?
We have 2N + 1 states E0, E1, . . . , E2N, where state Ei means that Peter has i/2 $. If Peter initially has k $, he starts in state E2k, and the boundary conditions are a2N = 0 and a0 = 1.
We get for the values in between,
ak = p ak+1 + q ak−1,   0 < k < 2N,
which we rewrite as
p (ak+1 − ak) = q (ak − ak−1) .
Hence by recursion,
ak − ak−1 =
q
p
(ak−1 − ak−2) = · · · =

q
p
k−1
(a1 − a0) ,
and we get
ak = (ak − ak−1) + (ak−1 − ak−2) + · · · + (a1 − a0) + a0
   = \left\{ \left(\frac{q}{p}\right)^{k−1} + \left(\frac{q}{p}\right)^{k−2} + · · · + \frac{q}{p} + 1 \right\} (a1 − a0) + a0
   = \frac{1 − (q/p)^k}{1 − q/p} (a1 − a0) + a0 = \frac{(q/p)^k − 1}{(q/p) − 1} (a1 − a0) + a0.
Now, a0 = 1, so we get for k = 2N that
0 = a2N = \frac{(q/p)^{2N} − 1}{(q/p) − 1} (a1 − 1) + 1 = a1 \frac{(q/p)^{2N} − 1}{(q/p) − 1} + 1 − \frac{(q/p)^{2N} − 1}{(q/p) − 1},
hence by a rearrangement,
a1 = \frac{(q/p) − 1}{(q/p)^{2N} − 1} \left\{ \frac{(q/p)^{2N} − 1}{(q/p) − 1} − 1 \right\} = \frac{(q/p) \left\{ (q/p)^{2N−1} − 1 \right\}}{(q/p)^{2N} − 1}.
Then by insertion,
a2k = \frac{(q/p)^{2k} − 1}{(q/p) − 1} \left\{ − \frac{(q/p) − 1}{(q/p)^{2N} − 1} \right\} + 1 = \frac{(q/p)^{2N} − (q/p)^{2k}}{(q/p)^{2N} − 1} = \frac{(q/p)^N − (q/p)^k}{(q/p)^N − 1} · \frac{(q/p)^N + (q/p)^k}{(q/p)^N + 1}.
These expressions should be compared with
ãk = \frac{(q/p)^N − (q/p)^k}{(q/p)^N − 1}
from the ruin problem with 1 $ at stake in each game. Notice the indices ãk and a2k, because 1 $
= 2 · 50 cents. It follows clearly from q > 1/2 > p that
a2k > ãk.
Since the a indicate the probability that Peter is ruined, it follows that there is a larger probability that
he is ruined if the stake is 50 cents than if the stake is 1 $.
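The inequality a2k > ãk is also easy to check numerically; the following small sketch is my own illustration, with arbitrarily chosen p, N and k.

def ruin(k, n, p):
    """Ruin probability with barriers 0 and n, start k, win probability p (p != 1/2)."""
    r = (1 - p) / p
    return (r ** k - r ** n) / (1 - r ** n)

p, N, k = 0.45, 10, 4
a_tilde = ruin(k, N, p)            # stake 1 $
a_half = ruin(2 * k, 2 * N, p)     # stake 50 cents: start in E_{2k}, 2N + 1 states
print(a_tilde, a_half)             # a_half > a_tilde when p < 1/2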
Example 2.2 Peter and Paul play in total N games. In each game Peter has probability 1/2 of
winning (in which case he receives 1 $ from Paul) and probability 1/2 of losing (in which case he
pays 1 $ to Paul). The games are mutually independent. Find the probability that
Peter's total gain is never 0 $ after the start of the games.
The probability that Peter's gain is never 0 $ is
1 − P{return to the initial position at some time} = 1 − \sum_{n=1}^{N} fn,
where
fn = P{the first return is at time n}.
The parity assures that f2k−1 = 0, because we can only return to the initial position after an even
number of steps. It follows from p = q = 1/2 that
f2k = \frac{1}{2k − 1} \binom{2k}{k} (pq)^k = \frac{1}{2k − 1} \binom{2k}{k} \left(\frac{1}{4}\right)^k.
By insertion we get the probability
1 − \sum_{n=1}^{N} fn = 1 − \sum_{k=1}^{[N/2]} \frac{1}{2k − 1} \binom{2k}{k} \left(\frac{1}{4}\right)^k = \sum_{k=[N/2]+1}^{∞} \frac{1}{2k − 1} \binom{2k}{k} \left(\frac{1}{4}\right)^k.
Alternatively, we may use the following considerations which somewhat simplify the task.
1) If N = 2n is even, then the wanted probability is
P {S1 ≠ 0 ∧ S2 ≠ 0 ∧ · · · ∧ S2n ≠ 0}.
This expression is equal to u2n, which again is equal to
u2n = \binom{2n}{n} 2^{−2n}   ( ∼ \frac{1}{\sqrt{πn}} ).
2) If N = 2n + 1 is odd, then S2n+1 is always ≠ 0. Hence, the probability is
P {S1 ≠ 0 ∧ S2 ≠ 0 ∧ · · · ∧ S2n ≠ 0 ∧ S2n+1 ≠ 0} = P {S1 ≠ 0 ∧ S2 ≠ 0 ∧ · · · ∧ S2n ≠ 0} = \binom{2n}{n} 2^{−2n}   ( ∼ \frac{1}{\sqrt{πn}} ),
according to the first question.
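The closed expression u2n and its asymptotic form are easily tabulated; the sketch below is my own illustration of the result just derived.

from math import comb, sqrt, pi

def no_return(N):
    """P{S_1 != 0, ..., S_N != 0} for the symmetric walk; equals u_{2n} with n = N // 2."""
    n = N // 2
    return comb(2 * n, n) * 0.25 ** n if n > 0 else 1.0

for N in (10, 11, 100, 1000):
    n = N // 2
    print(N, no_return(N), 1 / sqrt(pi * n))     # exact value vs. asymptotic 1/sqrt(pi n)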
3 Markov chains
Example 3.1 Let P be a stochastic matrix for a Markov chain of the states E1, E2, . . . , Em.
1) Prove that
p_{ij}^{(n1+n2+n3)} ≥ p_{ik}^{(n1)} · p_{kk}^{(n2)} · p_{kj}^{(n3)},
for
1 ≤ i, j, k ≤ m,   n1, n2, n3 ∈ N0.
2) Prove that if the Markov chain is irreducible and pii > 0 for some i, then the Markov chain is
regular.
1) Since P^{n1+n2+n3} = P^{n1} P^{n2} P^{n3}, and since all matrix elements are ≥ 0, it follows that
p_{ij}^{(n1+n2+n3)} = \sum_{r=1}^{m} \sum_{s=1}^{m} p_{ir}^{(n1)} p_{rs}^{(n2)} p_{sj}^{(n3)} ≥ p_{ik}^{(n1)} · p_{kk}^{(n2)} · p_{kj}^{(n3)},
where we have kept only the term with both summation indices equal to k.
2) Assume that the Markov chain is irreducible and that there is an i, such that pii > 0.
Since p_{ii}^{(n)} ≥ (pii)^n > 0, we must have p_{ii}^{(n)} > 0 for all n ∈ N0.
Now, P is irreducible, so to every j there exists an n1, such that
p_{ji}^{(n1)} > 0,   [index “i” as above],
and to every k there exists an n2, such that also
p_{ik}^{(n2)} > 0.
Then follow this procedure on all pairs of indices (j, k).
If we choose N1 as the largest of the possible n1, and N2 as the largest of the possible n2, then it
follows from 1) that
p_{jk}^{(N1+N2)} ≥ p_{ji}^{(n1)} p_{ii}^{(N1−n1+N2−n2)} p_{ik}^{(n2)} > 0,
where n1 = n1(j) and n2 = n2(k) depend on j and k, respectively.
Hence all elements of P^{N1+N2} are > 0, so the stochastic matrix P is regular.
Example 3.2 Let P be an irreducible stochastic matrix. We introduce for every i the number di by
di = largest common divisor of all n, for which p_{ii}^{(n)} > 0.
1) Prove that di does not depend on i. We denote the common value by d.
2) Prove that if P is regular, then d = 1.
1) If we use Example 3.1 with j = i, we get
p_{ii}^{(n1+n2+n3)} ≥ p_{ik}^{(n1)} p_{kk}^{(n2)} p_{ki}^{(n3)},
and analogously
p_{kk}^{(n1+n2+n3)} ≥ p_{ik}^{(n1)} p_{ii}^{(n2)} p_{ki}^{(n3)}.
Using that P is irreducible, we can find n1 and n3, such that p_{ik}^{(n1)} > 0 and p_{ki}^{(n3)} > 0.
Let n1 and n3 be as small as possible. Then
p_{ii}^{(n1+n3)} ≥ p_{ik}^{(n1)} p_{ki}^{(n3)} > 0   and   p_{kk}^{(n1+n3)} ≥ p_{ik}^{(n1)} p_{ki}^{(n3)} > 0.
Hence, di | n1 + n3 and dk | n1 + n3, where “a | b” means that a is a divisor of b.
By choosing n2 = m dk, such that p_{kk}^{(n2)} > 0, we also get
p_{ii}^{(n1+n2+n3)} ≥ p_{ik}^{(n1)} p_{kk}^{(n2)} p_{ki}^{(n3)} > 0,   thus di | n1 + n2 + n3.
We conclude that di|n2 = m · dk.
If n2 = n di is chosen, such that p_{ii}^{(n2)} > 0, then analogously
dk | n1 + n2 + n3,   thus dk | n · di.
It follows that di is a divisor of all numbers n2 = m · dk for which p_{kk}^{(n2)} > 0. Since dk is the largest
common divisor of these numbers, we must have di | dk.
Analogously, dk | di, hence dk = di.
Since i and k are chosen arbitrarily, we have proved 1).
2) If P is regular, there exists an n ∈ N, such that all p_{ij}^{(n)} > 0. Then also p_{ij}^{(n+m)} > 0 for m ∈ N0, and
the largest common divisor is clearly 1, hence d = 1.
The proof that conversely d = 1 implies that P is regular is given in Example 3.3.
Example 3.3 Let P be a stochastic matrix of an irreducible Markov chain E1, E2, . . . , En, and
assume that d = 1 (cf. Example 3.2).
Prove that there exists an N ∈ N, such that we for all n ≥ N and all i and j have p_{ij}^{(n)} > 0 (which
means that the Markov chain is regular).
Hint: One may in the proof use without separate proof the following result from Number Theory: Let
a1, a2, . . . , ak ∈ N have the largest common divisor 1. Then there exists an N ∈ N, such that for all
n ≥ N there are integers c1(n), c2(n), . . . , ck(n) ∈ N, such that
n = \sum_{j=1}^{k} cj(n) aj.
Since P is irreducible, we can to every pair of indices (i, j) find nij ∈ N, such that p_{ij}^{(nij)} > 0.
Since d = 1, we have for every index i a finite sequence ai1, ai2, . . . , aini ∈ N, such that the largest
common divisor of these numbers is 1 and p_{ii}^{(aij)} > 0.
Then by the result from Number Theory mentioned in the hint there exists an Ni, such that one to
every n ≥ Ni can find ci1(n), . . . , cini(n) ∈ N, such that
n = \sum_{j=1}^{ni} cij(n) aij.
Then let N ≥ max {nij} + max {Ni}. If n ≥ N, then
p_{ij}^{(n)} ≥ p_{ij}^{(nij)} p_{jj}^{(n−nij)}.
Since n − nij ≥ Nj, it follows that
n − nij = \sum_{k=1}^{nj} c̃jk(n) ajk,
and we conclude that
p_{ij}^{(n)} ≥ p_{ij}^{(nij)} p_{jj}^{(n−nij)} ≥ p_{ij}^{(nij)} \prod_{k=1}^{nj} \left( p_{jj}^{(ajk)} \right)^{c̃jk(n)} > 0.
This is true for every pair of indices (i, j), thus we conclude that P is regular.
Example 3.4 Let P be an m × m stochastic matrix.
1) Prove that if P is regular, then P² is also regular.
2) Assuming instead that P is irreducible, can one conclude that P² is also irreducible?
1) If P is regular, then there is an N ∈ N, such that p_{ij}^{(n)} > 0 for all n ≥ N and all i, j. In particular,
p_{ij}^{(2N)} > 0 for all (i, j). Now, the p_{ij}^{(2N)} are the matrix elements of P^{2N} = (P²)^N, thus P² is also
regular.
2) The answer is “no”! In fact,
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
is irreducible, while
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
is not.
Example 3.5 Let P be an m × m stochastic matrix. Assume that P is irreducible, and that there is
an i, such that
p_{ii}^{(3)} > 0 and p_{ii}^{(5)} > 0.
Prove that P is regular.
The result follows immediately from Example 3.3, because the largest common divisor for 3 and 5 is
di = 1.
Notice that
p_{ii}^{(8)} ≥ p_{ii}^{(3)} · p_{ii}^{(5)} > 0,   p_{ii}^{(9)} ≥ ( p_{ii}^{(3)} )^3 > 0,   p_{ii}^{(10)} ≥ ( p_{ii}^{(5)} )^2 > 0,   p_{ii}^{(11)} ≥ p_{ii}^{(5)} ( p_{ii}^{(3)} )^2 > 0,
hence also the succeeding p_{ii}^{(n)} > 0 for n ≥ 12, because one just multiplies this sequence successively by P³.
Example 3.6 Let P be an m × m irreducible matrix. Prove for every pair (i, j) (where 1 ≤ i, j ≤ m)
that there exists an n, depending on i and j, such that 1 ≤ n ≤ m and p_{ij}^{(n)} > 0.
When P is irreducible, we can get from every state Ei to any other state. When we sketch the graph,
we see that it must contain a cycle,
Ei1 → Ei2 → · · · → Eim → Ei1,
where (i1, i2, . . . , im) is a permutation of (1, 2, . . . , m). It follows that we can get from every Ei
to any other Ej in n steps, where 1 ≤ n ≤ m. This means that the corresponding matrix P^n has
p_{ij}^{(n)} > 0.
Example 3.7 Let P be a stochastic matrix for an irreducible Markov chain of the states E1, E2, . . . ,
Em. Given that P has the invariant probability vector
g = ( 1/m, 1/m, . . . , 1/m ).
Prove that P is doubly stochastic.
The condition g P = g is written
\frac{1}{m} \sum_{i=1}^{m} pij = \frac{1}{m} for every j,   thus   \sum_{i=1}^{m} pij = 1.
This proves that the sum of every column is 1, thus the matrix is doubly stochastic.
Example 3.8 Given a regular Markov chain of the states E1, E2, . . . , Em and with the stochastic
matrix P. Then
\lim_{n→∞} p_{ij}^{(n)} = gj,
where g = (g1, g2, . . . , gm) is the uniquely determined invariant probability vector of P. Prove that
there exist a positive constant K and a constant a ∈ ]0, 1[, such that
| p_{ij}^{(n)} − gj | ≤ K a^n   for i, j = 1, 2, . . . , m and n ∈ N.
If P is regular, then there is an n0, such that p_{ij}^{(n)} > 0 for all n ≥ n0 and all i, j = 1, 2, . . . , m. Let
ej be the column vector which has 1 in row number j and 0 otherwise. Then
P^n ej = ( p_{1j}^{(n)}, . . . , p_{mj}^{(n)} )^T.
If
0 < ε := \min_{i,j} p_{ij}^{(n0)}   ( clearly ε < 1/2 )
and
m_n^j = \min_i p_{ij}^{(n)}   and   M_n^j = \max_i p_{ij}^{(n)},
then
M_n^j − m_n^j ≤ (1 − 2ε)^n = a^n   for all j,
and
m_n^j ≤ gj ≤ M_n^j   and   m_n^j ≤ p_{ij}^{(n)} ≤ M_n^j   for n ≥ n0.
Thus
| p_{ij}^{(n)} − gj | ≤ M_n^j − m_n^j ≤ a^n   for all i, j and all n ≥ n0.
Now, | p_{ij}^{(n)} − gj | ≤ 1 for all n ∈ N. We therefore get the general inequality if we put
K = (1 − 2ε)^{−n0} = (1/a)^{n0}.
Notice that since 0 < ε < 1/2, we have a = 1 − 2ε ∈ ]0, 1[.
Example 3.9 Given a stochastic matrix
P = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1/3 & 2/3 & 0 \end{pmatrix}.
Prove that P is irreducible, but not regular.
Find the limit matrix G.
Compute P² and find all invariant probability vectors for P².
We conclude from the matrix that
E1 ↔ E3 ↔ E2,
thus P is irreducible. We conclude from
P² = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1/3 & 2/3 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1/3 & 2/3 & 0 \end{pmatrix} = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/3 & 2/3 & 0 \\ 0 & 0 & 1 \end{pmatrix}
and
P³ = P² P = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/3 & 2/3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1/3 & 2/3 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1/3 & 2/3 & 0 \end{pmatrix} = P,
that
P^{2n+1} = P   and   P^{2n} = P²   for all n ∈ N.
Since every P^n contains zeros, we conclude that P is not regular.
The solution of g P = g, g1 + g2 + g3 = 1, is found from
(1/3) g3 = g1,   (2/3) g3 = g2,   g1 + g2 = g3,   g1 + g2 + g3 = 1,
thus
g = ( 1/6, 1/3, 1/2 ),
and the limit matrix is
G = \begin{pmatrix} 1/6 & 1/3 & 1/2 \\ 1/6 & 1/3 & 1/2 \\ 1/6 & 1/3 & 1/2 \end{pmatrix}.
Alternatively,
G = \lim_{n→∞} \frac{1}{n} \sum_{i=1}^{n} P^i = \lim_{n→∞} \frac{1}{2n} \left\{ \sum_{i=1}^{n} P^{2i−1} + \sum_{i=1}^{n} P^{2i} \right\} = \frac{1}{2} P + \frac{1}{2} P²
  = \frac{1}{2} \begin{pmatrix} 1/3 & 2/3 & 1 \\ 1/3 & 2/3 & 1 \\ 1/3 & 2/3 & 1 \end{pmatrix} = \begin{pmatrix} 1/6 & 1/3 & 1/2 \\ 1/6 & 1/3 & 1/2 \\ 1/6 & 1/3 & 1/2 \end{pmatrix}.
Clearly, P² is not irreducible ({E3} only communicates with itself).
The probability vectors for P² are the solutions of
g P² = g,   g1 + g2 + g3 = 1,   gi ≥ 0,
thus
g1 = (1/3) g1 + (1/3) g2,   g2 = (2/3) g1 + (2/3) g2,   g3 = g3,   g1 + g2 + g3 = 1,
and hence
g = (x, 2x, 1 − 3x),   x ∈ [0, 1/3].
Remark 3.1 We see that in P² we have the closed set {E1, E2}, and {E3} is another closed set. ♦
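Exact rational arithmetic makes it painless to verify the computations of this example. The sketch below is my own, using Python's fractions module and a small hand-written matrix product; it checks that P³ = P and that the Cesàro limit (P + P²)/2 has the row (1/6, 1/3, 1/2).

from fractions import Fraction as F

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

P = [[F(0), F(0), F(1)],
     [F(0), F(0), F(1)],
     [F(1, 3), F(2, 3), F(0)]]

P2 = matmul(P, P)
P3 = matmul(P2, P)
print(P3 == P)                              # True: the powers oscillate, so P is not regular
G = [[(a + b) / 2 for a, b in zip(r1, r2)] for r1, r2 in zip(P, P2)]
print(G)                                    # every row is [1/6, 1/3, 1/2]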
Example 3.10 A Markov chain of three states E1, E2 and E3 has the stochastic matrix
P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & 1/2 \\ 1/3 & 2/3 & 0 \end{pmatrix}.
Prove that the state E1 is absorbing, and prove that the only invariant probability vector for P is
g = (1, 0, 0). Prove for any probability vector p that p P^n → g for n → ∞.
Obviously E1 is an absorbing state.
We get the equations for the probability vectors from g = g P, i.e.
g1 = g1 + (1/3) g3,   g2 = (1/2) g2 + (2/3) g3,   g3 = (1/2) g2.
It follows from the first equation that g3 = 0, which by insertion into the last equation implies that
g2 = 0, so g = (1, 0, 0) is the only invariant probability vector.
Let p be any probability vector. We put
p P^n = ( p_1^{(n)}, p_2^{(n)}, p_3^{(n)} ).
Then
( p_1^{(n+1)}, p_2^{(n+1)}, p_3^{(n+1)} ) = ( p_1^{(n)}, p_2^{(n)}, p_3^{(n)} ) P
implies that
p_1^{(n+1)} = p_1^{(n)} + (1/3) p_3^{(n)},   p_2^{(n+1)} = (1/2) p_2^{(n)} + (2/3) p_3^{(n)},   p_3^{(n+1)} = (1/2) p_2^{(n)}.
In particular, ( p_1^{(n)} ) is (weakly) increasing, and since the sequence is bounded by 1, we get
\lim_{n→∞} p_1^{(n)} = p1 ≤ 1.
By taking the limit in the first coordinate equation it follows that ( p_3^{(n)} ) is also convergent, with
\lim_{n→∞} p_3^{(n)} = 3 \lim_{n→∞} ( p_1^{(n+1)} − p_1^{(n)} ) = 3 (p1 − p1) = 0.
Finally we get from the third coordinate that ( p_2^{(n)} ) is convergent,
\lim_{n→∞} p_2^{(n)} = 2 \lim_{n→∞} p_3^{(n+1)} = 0.
Then p1 = 1, and we have proved that
p P^n → (1, 0, 0) = g   for n → ∞.
Example 3.11 1) Find for every a ∈ [0, 1] the invariant probability vector(s) (g1, g2, g3) of the
stochastic matrix
P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & a & 1 − a \\ 1/3 & 0 & 2/3 \end{pmatrix}.
2) Prove for every a ∈ ]0, 1[ that
Q = \begin{pmatrix} 1/2 & 0 & 1/2 & 0 \\ 1/3 & 0 & 2/3 & 0 \\ 0 & a & 0 & 1 − a \\ 0 & 3/4 & 0 & 1/4 \end{pmatrix}
is a regular stochastic matrix, and find the invariant probability vector (g1, g2, g3, g4) of Q.
1) We write the equation g P = g as the following system of equations,
g1 = (1/3) g3,   g2 = g1 + a g2,   g3 = (1 − a) g2 + (2/3) g3.
Thus g3 = 3 g1 and g1 = (1 − a) g2, so
1 = g1 + g2 + g3 = g2 { (1 − a) + 1 + 3(1 − a) } = g2 (5 − 4a).
Hence, for a ∈ [0, 1],
g2 = \frac{1}{5 − 4a} ∈ [1/5, 1],
and the probability vector is
g = \left( \frac{1 − a}{5 − 4a}, \frac{1}{5 − 4a}, \frac{3 − 3a}{5 − 4a} \right).
2) When 0 < a < 1, it follows from
Q² = \begin{pmatrix} 1/4 & a/2 & 1/4 & (1 − a)/2 \\ 1/6 & 2a/3 & 1/6 & 2(1 − a)/3 \\ a/3 & 3(1 − a)/4 & 2a/3 & (1 − a)/4 \\ 1/4 & 3/16 & 1/2 & 1/16 \end{pmatrix}
that all elements of Q² are > 0, hence Q is regular.
Then g = g Q implies that
g1 = (1/2) g1 + (1/3) g2,   g2 = a g3 + (3/4) g4,   g3 = (1/2) g1 + (2/3) g2,   g4 = (1 − a) g3 + (1/4) g4.
We get from the first equation that (1/2) g1 = (1/3) g2, hence g2 = (3/2) g1.
By adding the first and the third equation we get g1 + g3 = g1 + g2, so g2 = g3.
Finally, we conclude from the fourth equation that (3/4) g4 = (1 − a) g3, thus
g4 = \frac{4}{3} (1 − a) g3 = \frac{4}{3} (1 − a) · \frac{3}{2} g1 = 2(1 − a) g1,
and
1 = g1 + g2 + g3 + g4 = g1 { 1 + 3/2 + 3/2 + 2(1 − a) } = g1 { 4 + 2(1 − a) } = g1 (6 − 2a),
hence
g1 = \frac{1}{6 − 2a},
and the invariant probability vector is
g = \left( \frac{1}{6 − 2a}, \frac{3}{12 − 4a}, \frac{3}{12 − 4a}, \frac{1 − a}{3 − a} \right).
Example 3.12 Given a Markov chain of four states E1, E2, E3 and E4 and with the stochastic
matrix
P = \begin{pmatrix} 1/4 & 1/4 & 1/4 & 1/4 \\ 0 & 1/2 & 1/4 & 1/4 \\ 0 & 0 & 3/4 & 1/4 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
1. Find for P its invariant probability vector(s).
2. Prove for a randomly chosen initial distribution
p^{(0)} = (α0, β0, γ0, δ0)
that the distribution p^{(n)} = (αn, βn, γn, δn) satisfies
αn + βn + γn = \left(\frac{3}{4}\right)^n (α0 + β0 + γ0).
3. Let p^{(0)} = (1, 0, 0, 0). Find p^{(1)}, p^{(2)} and p^{(3)}.
Define a sequence of random variables (Yn)_{n=0}^{∞} by the following: the possible values of Yn are 1, 2, 3,
4, and the corresponding probabilities are αn, βn, γn and δn, resp. (as introduced above).
Prove for any initial distribution p^{(0)} = (α0, β0, γ0, δ0) that the sequence (Yn) converges in probability
towards a random variable Y. Find the distribution of Y.
1) The last coordinate of the matrix equation g = g P with g = (α, β, γ, δ) is given by
δ = \frac{1}{4} (α + β + γ) + δ,   thus   α + β + γ = 0.
Now α, β, γ ≥ 0, so α = β = γ = 0, and hence δ = 1. The only invariant probability vector is
(0, 0, 0, 1).
2) Consider again the last coordinate,
δn = \frac{1}{4} (αn−1 + βn−1 + γn−1) + δn−1.
We have in general δn = 1 − (αn + βn + γn), so
1 − (αn + βn + γn) = \frac{1}{4} (αn−1 + βn−1 + γn−1) + 1 − (αn−1 + βn−1 + γn−1),
thus
αn + βn + γn = \frac{3}{4} (αn−1 + βn−1 + γn−1),
and hence by recursion,
αn + βn + γn = \left(\frac{3}{4}\right)^n (α0 + β0 + γ0).
3) In general,
αn = \frac{1}{4} αn−1,
βn = \frac{1}{4} αn−1 + \frac{1}{2} βn−1,
γn = \frac{1}{4} αn−1 + \frac{1}{4} βn−1 + \frac{3}{4} γn−1,
δn = \frac{1}{4} (αn−1 + βn−1 + γn−1) + δn−1.
Put p^{(0)} = (α0, β0, γ0, δ0) = (1, 0, 0, 0). Then
p^{(1)} = ( 1/4, 1/4, 1/4, 1/4 ),
p^{(2)} = ( 1/16, 1/16 + 1/8, 1/16 + 1/16 + 3/16, 3/16 + 1/4 ) = ( 1/16, 3/16, 5/16, 7/16 ),
p^{(3)} = ( 1/64, 1/64 + 3/32, 1/64 + 3/64 + 15/64, 9/64 + 7/16 ) = ( 1/64, 7/64, 19/64, 37/64 ).
4) A qualified guess is that Yn → Y in probability, where
P{Y = 4} = 1 and P{Y = j} = 0 for j = 1, 2, 3.
We shall prove that
P {|Yn − Y| ≥ ε} → 0 for n → ∞ for every fixed ε > 0.
If ε ∈ ]0, 1[, then
P {|Yn − Y| ≥ ε} = 1 − P {|Yn − Y| < ε} = 1 − P {Yn − Y = 0}
= 1 − δn = αn + βn + γn = \left(\frac{3}{4}\right)^n (α0 + β0 + γ0) = \left(\frac{3}{4}\right)^n (1 − δ0) → 0 for n → ∞,
and the claim is proved.
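The distributions p^{(1)}, p^{(2)}, p^{(3)} and the geometric decay of αn + βn + γn can be reproduced with a few lines of numpy; the sketch below is my own check of the numbers above.

import numpy as np

P = np.array([[1/4, 1/4, 1/4, 1/4],
              [0,   1/2, 1/4, 1/4],
              [0,   0,   3/4, 1/4],
              [0,   0,   0,   1  ]])

p = np.array([1.0, 0.0, 0.0, 0.0])
for n in range(1, 4):
    p = p @ P
    print(n, p, p[:3].sum(), (3/4) ** n)     # alpha_n + beta_n + gamma_n = (3/4)^n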
Example 3.13 Five balls, 2 white ones and 3 black ones, are distributed in two boxes A and B, such
that A contains 2, and B contains 3 balls. At time n (where n = 0, 1, 2, . . . ) we choose at random
from each of the two boxes one ball and let the two chosen balls change boxes. In this way we get a
Markov chain with 3 states: E0, E1 and E2, according to whether A contains 0, 1 or 2 black balls.
1. Find the corresponding stochastic matrix P.
2. Prove that it is regular, and find its invariant probability vector.
We let in the following p^{(n)} = (αn, βn, γn) denote the distribution immediately before the interchange
at time t = n.
3. Given the initial distribution p^{(0)} = (1, 0, 0), find the probabilities of state,
p^{(3)} = (α3, β3, γ3) and p^{(4)} = (α4, β4, γ4),
and prove that
\sqrt{ (α3 − α4)^2 + (β3 − β4)^2 + (γ3 − γ4)^2 } < 0.07.
Figure 1: The two boxes with two black balls in A to the left and 1 black ball in B to the right.
1) Since pij = P{X(n) = j | X(n − 1) = i}, the stochastic matrix is, with i as the row number and j
as the column number,
P = \begin{pmatrix} 0 & 1 & 0 \\ (1/2)·(1/3) & 1/2 & (1/2)·(2/3) \\ 0 & 2/3 & 1/3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1/6 & 1/2 & 1/3 \\ 0 & 2/3 & 1/3 \end{pmatrix}.
2) All elements of
P² = \begin{pmatrix} 0 & 1 & 0 \\ 1/6 & 1/2 & 1/3 \\ 0 & 2/3 & 1/3 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1/6 & 1/2 & 1/3 \\ 0 & 2/3 & 1/3 \end{pmatrix} = \begin{pmatrix} 1/6 & 1/2 & 1/3 \\ 1/12 & 23/36 & 5/18 \\ 1/9 & 5/9 & 1/3 \end{pmatrix}
are > 0, thus P is regular.
We conclude from g = g P, i.e.
g1 = \frac{1}{6} g2,   g2 = g1 + \frac{1}{2} g2 + \frac{2}{3} g3,   g3 = \frac{1}{3} g2 + \frac{1}{3} g3,
that
(1)   g2 = 6 g1,   g2 = 2 g1 + \frac{4}{3} g3,   2 g3 = g2,
hence g3 = (1/2) g2 = 3 g1, and thus by insertion,
g1 + g2 + g3 = g1 + 6 g1 + 3 g1 = 10 g1 = 1.
The probability vector is
g = \left( \frac{1}{10}, \frac{6}{10}, \frac{3}{10} \right).
3) From the stochastic matrix P we get analogously the recursion formulas
αn = \frac{1}{6} βn−1,   βn = αn−1 + \frac{1}{2} βn−1 + \frac{2}{3} γn−1,   γn = \frac{1}{3} βn−1 + \frac{1}{3} γn−1.
Put p^{(0)} = (α0, β0, γ0) = (1, 0, 0). Then
p^{(1)} = (α1, β1, γ1) = (0, 1, 0),
p^{(2)} = (α2, β2, γ2) = ( 1/6, 1/2, 1/3 ),
p^{(3)} = (α3, β3, γ3) = ( 1/12, 23/36, 5/18 ),
p^{(4)} = (α4, β4, γ4) = ( 23/216, 127/216, 11/36 ).
Then by insertion,
\sqrt{ (α3 − α4)^2 + (β3 − β4)^2 + (γ3 − γ4)^2 }
= \sqrt{ \left( \frac{1}{12} − \frac{23}{216} \right)^2 + \left( \frac{23}{36} − \frac{127}{216} \right)^2 + \left( \frac{5}{18} − \frac{11}{36} \right)^2 }
= \sqrt{ \left( \frac{18 − 23}{216} \right)^2 + \left( \frac{138 − 127}{216} \right)^2 + \left( \frac{60 − 66}{216} \right)^2 }
= \frac{1}{216} \sqrt{5^2 + 11^2 + 6^2} = \frac{1}{216} \sqrt{25 + 121 + 36} = \frac{\sqrt{182}}{216} < \frac{14}{216} = \frac{7}{108} < 0.07.
Example 3.14 Consider a Markov chain of the states E0, E1, . . . , Em and transition probabilities
pi,i+1 = 1 − i/m,   i = 0, 1, 2, . . . , m − 1;
pi,i−1 = i/m,   i = 1, 2, . . . , m;
pij = 0 otherwise.
Prove that the Markov chain is irreducible, and find its invariant probability vector.
(This Markov chain is called Ehrenfest's model: there are in total m balls in two boxes A and B; at
time n we choose at random one ball and move it to the other box, where Ei denotes the state that
there are i balls in the box A.)
The stochastic matrix is
P = \begin{pmatrix}
0 & 1 & 0 & 0 & \cdots & 0 & 0 \\
1/m & 0 & 1 − 1/m & 0 & \cdots & 0 & 0 \\
0 & 2/m & 0 & 1 − 2/m & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0 & 1/m \\
0 & 0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}.
We get from the second oblique diagonal the diagram
E0 → E1 → · · · → Em−1 → Em,
and similarly from the first oblique diagonal,
Em → Em−1 → · · · → E1 → E0.
Hence P is irreducible.
The first and last coordinate of g = g P give
g0 = \frac{1}{m} g1   and   gm = \frac{1}{m} gm−1.
For the coordinates in between, i.e. for i = 1, . . . , m − 1, we get
gi = gi−1 pi−1,i + gi+1 pi+1,i = \left( 1 − \frac{i − 1}{m} \right) gi−1 + \frac{i + 1}{m} gi+1.
Hence, g1 = m g0. If i = 1, then
g1 = g0 + \frac{2}{m} g2,   thus   g2 = \frac{m(m − 1)}{2} g0 = \binom{m}{2} g0.
A qualified guess is that
gi = \binom{m}{i} g0.
This is obviously true for i = 0 and i = 1. Then a check gives
\left( 1 − \frac{i − 1}{m} \right) \binom{m}{i − 1} + \frac{i + 1}{m} \binom{m}{i + 1}
= \frac{m + 1 − i}{m} · \frac{m!}{(m + 1 − i)! (i − 1)!} + \frac{i + 1}{m} · \frac{m!}{(i + 1)! (m − 1 − i)!}
= \frac{m!}{(m − i)! i!} \left( \frac{i}{m} + \frac{m − i}{m} \right) = \binom{m}{i},
and we have verified the claim. The rest then follows from the fact that the solution is unique and that
gi = \binom{m}{i} g0 solves the problem. Therefore,
gi = \binom{m}{i} g0,   i = 0, 1, 2, . . . , m.
We conclude that
\sum_{i=0}^{m} gi = g0 \sum_{i=0}^{m} \binom{m}{i} 1^i 1^{m−i} = g0 · 2^m = 1,
thus g0 = 2^{−m}, and
gi = \frac{1}{2^m} \binom{m}{i},   i = 0, 1, 2, . . . , m,
corresponding to the probability vector
g = \frac{1}{2^m} \left( 1, \binom{m}{1}, \binom{m}{2}, \cdots, \binom{m}{m − 1}, 1 \right).
Example 3.15 (Continuation of Example 3.14).
Let Y (n) = 2X(n) − m, and let en = E{Y (n)}, n ∈ N0.
Find en+1 expressed by en and m.
Find en, assuming that the process at time t = 0 is in state Ej.
If we put p_i^{(n)} = P{X(n) = i}, then (cf. Example 3.14)
p_0^{(n+1)} = p_1^{(n)} p_{1,0} = \frac{1}{m} p_1^{(n)}   and   p_m^{(n+1)} = p_{m−1}^{(n)} p_{m−1,m} = \frac{1}{m} p_{m−1}^{(n)},
and
p_i^{(n+1)} = \left( 1 − \frac{i − 1}{m} \right) p_{i−1}^{(n)} + \frac{i + 1}{m} p_{i+1}^{(n)},   i = 1, . . . , m − 1.
Furthermore,
en = E{Y(n)} = 2 E{X(n)} − m = 2 \sum_{i=1}^{m} i p_i^{(n)} − m.
Hence, writing p_{−1}^{(n)} = p_{m+1}^{(n)} = 0, so that the middle recursion formula above holds for all i = 0, 1, . . . , m, and reindexing the sums,
en+1 = E{Y(n + 1)} = 2 \sum_{i=0}^{m} i p_i^{(n+1)} − m
     = 2 \sum_{i=0}^{m} i \left\{ \left( 1 − \frac{i − 1}{m} \right) p_{i−1}^{(n)} + \frac{i + 1}{m} p_{i+1}^{(n)} \right\} − m
     = 2 \sum_{j=0}^{m} \left\{ (j + 1)\left( 1 − \frac{j}{m} \right) + (j − 1) \frac{j}{m} \right\} p_j^{(n)} − m
     = 2 \sum_{j=0}^{m} \left\{ j + 1 − \frac{2j}{m} \right\} p_j^{(n)} − m,
and thus
en+1 = 2 \left( 1 − \frac{2}{m} \right) \sum_{j=0}^{m} j p_j^{(n)} + 2 − m = \left( 1 − \frac{2}{m} \right) \left\{ 2 \sum_{j=0}^{m} j p_j^{(n)} − m \right\} = \left( 1 − \frac{2}{m} \right) en,
because (1 − 2/m)(−m) = 2 − m.
Then by recursion,
en = \left( 1 − \frac{2}{m} \right)^n e0,
where
e0 = 2 \left\{ (j − 1) · \frac{j}{m} + (j + 1) \left( 1 − \frac{j}{m} \right) \right\} − m = 2 \left\{ \frac{j}{m} (j − 1 − j − 1) + j + 1 \right\} − m
   = 2j + 2 − \frac{4j}{m} − m = 2j \left( 1 − \frac{2}{m} \right) − m \left( 1 − \frac{2}{m} \right) = (2j − m) \left( 1 − \frac{2}{m} \right),
hence
en = \left( 1 − \frac{2}{m} \right)^{n+1} (2j − m).
Example 3.16 Consider a Markov chain of the states E0, E1, . . . , Em and transition probabilities
pi,i+1 = p,   i = 1, 2, . . . , m − 1;
pi,i−1 = q,   i = 1, 2, . . . , m − 1;
p0,1 = 1,   pm,m−1 = 1;
pij = 0 otherwise.
Here p > 0, q > 0, p + q = 1.
1) Prove that the Markov chain is irreducible, and find its invariant probability vector.
2) Is the given Markov chain regular?
1) The stochastic matrix is
P = \begin{pmatrix}
0 & 1 & 0 & 0 & \cdots & 0 & 0 \\
q & 0 & p & 0 & \cdots & 0 & 0 \\
0 & q & 0 & p & \cdots & 0 & 0 \\
\vdots & & & & \ddots & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0 & p \\
0 & 0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}.
The first oblique diagonal gives
Em → Em−1 → · · · → E1 → E0,
and the last oblique diagonal gives the diagram
E0 → E1 → · · · → Em−1 → Em.
It is therefore possible to come from every state to any other by using these diagrams. Hence, the
Markov chain is irreducible.
The equations of the invariant probability vector are
g0 = q g1, g1 = g0 + q g2, gm−1 = p gm−2 + gm, gm = p gm−1,
and for the coordinates in between,
gj = p gj−1 + q gj+1, j = 2, 3, 4, . . . , m − 2.
It follows from the first equation that g1 = \frac{1}{q} g0.
Similarly, from the second equation,
q g2 = g1 − g0 = \frac{1}{q} g0 − g0 = \frac{1 − q}{q} g0 = \frac{p}{q} g0,
thus
g2 = \frac{p}{q^2} g0.
A qualified guess is
(2)   gj = \frac{1}{q} \left( \frac{p}{q} \right)^{j−1} g0,   for 1 ≤ j ≤ m − 1.
It follows from the above that this formula is true for j = 1 and j = 2.
Then we check for j = 2, 3, 4, . . . , m − 2, i.e.
p gj−1 + q gj+1 = \frac{p}{q} \left( \frac{p}{q} \right)^{j−2} g0 + \frac{q}{q} \left( \frac{p}{q} \right)^{j} g0 = \frac{1}{q} \left( \frac{p}{q} \right)^{j−1} \left\{ q + q · \frac{p}{q} \right\} g0 = \frac{1}{q} \left( \frac{p}{q} \right)^{j−1} g0 = gj,
since q + p = 1.
If j = m − 1, then
gm−1 = p gm−2 + gm = p gm−2 + p gm−1,   i.e.   gm−1 = \frac{p}{q} gm−2,
proving that (2) also holds for j = m − 1. Finally,
gm = p gm−1 = \frac{p}{q} \left( \frac{p}{q} \right)^{m−2} g0 = \left( \frac{p}{q} \right)^{m−1} g0.
Summing up we get
(3)   g = g0 \left( 1, \frac{1}{q}, \frac{p}{q^2}, \ldots, \frac{1}{q} \left( \frac{p}{q} \right)^{m−2}, \left( \frac{p}{q} \right)^{m−1} \right).
If p ≠ q, i.e. p ≠ 1/2, then we get the condition
1 = \sum_{j=0}^{m} gj = \left\{ 1 + \frac{1}{q} \left( 1 + \frac{p}{q} + \left( \frac{p}{q} \right)^2 + \cdots + \left( \frac{p}{q} \right)^{m−2} \right) + \left( \frac{p}{q} \right)^{m−1} \right\} g0
= \left\{ 1 + \frac{1}{q} · \frac{1 − (p/q)^{m−1}}{1 − p/q} + \left( \frac{p}{q} \right)^{m−1} \right\} g0 = \left\{ 1 + \frac{1 − (p/q)^{m−1}}{q − p} + \left( \frac{p}{q} \right)^{m−1} \right\} g0
= \frac{1}{q − p} \left\{ q − p + 1 + (q − p − 1) \left( \frac{p}{q} \right)^{m−1} \right\} g0 = \frac{1}{q − p} \left\{ 2q − 2p \left( \frac{p}{q} \right)^{m−1} \right\} g0
= \frac{2q}{q − p} \left\{ 1 − \left( \frac{p}{q} \right)^{m} \right\} g0,
hence
g0 = \frac{q − p}{2q \left\{ 1 − (p/q)^m \right\}},
which is inserted into (3).
When p = q = 1/2, formula (3) is reduced to
g = g0 (1, 2, 2, · · · , 2, 1),   where 1 = g0 {1 + 2(m − 1) + 1} = 2m g0,
so g0 = \frac{1}{2m}, and
g = \left( \frac{1}{2m}, \frac{1}{m}, \frac{1}{m}, \cdots, \frac{1}{m}, \frac{1}{2m} \right).
2) The Markov chain is not regular. In fact, all P^n contain zeros. This is easily seen by an example.
Let m = 3, and let ∗ denote any number > 0. Then
P = \begin{pmatrix} 0 & ∗ & 0 & 0 \\ ∗ & 0 & ∗ & 0 \\ 0 & ∗ & 0 & ∗ \\ 0 & 0 & ∗ & 0 \end{pmatrix},   P² = \begin{pmatrix} ∗ & 0 & ∗ & 0 \\ 0 & ∗ & 0 & ∗ \\ ∗ & 0 & ∗ & 0 \\ 0 & ∗ & 0 & ∗ \end{pmatrix},   P³ = \begin{pmatrix} 0 & ∗ & 0 & ∗ \\ ∗ & 0 & ∗ & 0 \\ 0 & ∗ & 0 & ∗ \\ ∗ & 0 & ∗ & 0 \end{pmatrix},
and we see that P² has zeros at all places where i + j is odd, and P³ has zeros at all places where
i + j is even. This pattern is repeated, so P^{2n} has the same structure as P², and P^{2n+1} has the same
structure as P³, concerning zeros.
Example 3.17 A circle is divided into m arcs E1, E2, . . . , Em, where m ≥ 3.
A particle moves in the following way between the states E1, E2, . . . , Em:
There is every minute the probability p ∈ ]0, 1[ that it moves from a state to the neighbouring state in
the positive sense of the plane, and the probability q = 1 − p that it moves to the neighbouring state in
the negative sense of the plane, i.e.
pi,i+1 = p, i = 1, 2, . . . , m − 1;
pi,i−1 = q, i = 2, 3, . . . , m;
pm,1 = p; p1,m = q;
pij = 0 otherwise.
1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Prove that the Markov chain is doubly stochastic, and find the invariant probability vector.
4) Prove that if the particle at time t = 0 is in state Ei, then there is a positive probability that it is
in the same state to all of the times t = 2, 4, 6, 8, . . . .
5) Prove that if the particle at time t = 0 is in state Ei, then there is a positive probability that the
particle is in the same state at time t = m.
6) Find the values of m, for which the Markov chain is regular.
1) The stochastic matrix is
P = \begin{pmatrix}
0 & p & 0 & 0 & \cdots & 0 & 0 & q \\
q & 0 & p & 0 & \cdots & 0 & 0 & 0 \\
0 & q & 0 & p & \cdots & 0 & 0 & 0 \\
\vdots & & & & \ddots & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & q & 0 & p \\
p & 0 & 0 & 0 & \cdots & 0 & q & 0
\end{pmatrix}.
2) It follows from
E1 → E2 → · · · → Em → E1,
that P is irreducible.
3) The sum of each column is p + q = 1, hence the Markov chain is doubly stochastic. Since P is
irreducible,
( 1/m, 1/m, . . . , 1/m )
is the only invariant probability vector.
4) This is obvious from the parity. (The same number of steps forward as backward, hence in total
an even number).
5) This is also obvious, because the probability is ≥ p^m + q^m, where p^m is the probability of m
steps forward, and q^m is the probability of m steps backward.
6) It follows from 4. and 5. that if m is odd, then for every sufficiently large t there is a positive
probability of being in any given state, thus the Markov chain is regular when m is odd.
If m is even, then the Markov chain is not regular, because after n steps only states whose index
differs from the starting index by a number of the same parity as n can be reached. We shall
therefore have zeros in every matrix P^n.
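The dependence on the parity of m in 6) is easy to observe numerically; the sketch below is my own, with an arbitrary p and the two sample values m = 5 and m = 6, raising the circular chain to a high power and testing whether all entries are positive.

import numpy as np

def circle_chain(m, p=0.4):
    q = 1 - p
    P = np.zeros((m, m))
    for i in range(m):
        P[i, (i + 1) % m] = p        # one step in the positive sense
        P[i, (i - 1) % m] = q        # one step in the negative sense
    return P

for m in (5, 6):
    A = np.linalg.matrix_power(circle_chain(m), 4 * m)
    print(m, (A > 0).all())          # True for odd m (regular), False for even m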
Example 3.18 Given a Markov chain of the states E0, E1, E2, . . . , Em (where m ≥ 2) and the
transition probabilities
p0,i = ai,   i = 1, 2, . . . , m,
pi,i−1 = 1,   i = 1, 2, . . . , m,
pij = 0 otherwise,
where
ai ≥ 0, i = 1, 2, . . . , m − 1,   am > 0,   \sum_{i=1}^{m} ai = 1.
1) Prove that the Markov chain is irreducible.
2) Assume that the process at time t = 0 is at state E0; let T1 denote the random variable, which
indicates the time of the first return to E0.
Find the distribution of T1.
3) Compute the mean of T1.
4) Let T1, T2, . . . , Tk denote the times of the first, second, . . . , k-th return to E0. Prove for every
ε > 0 that
P \left\{ \left| \frac{Tk}{k} − μ \right| > ε \right\} → 0   for k → ∞,
where μ = E{T1}.
Hint: Apply that
Tk = T1 + (T2 − T1) + · · · + (Tk − Tk−1).
1) The corresponding stochastic matrix is
P = \begin{pmatrix}
0 & a1 & a2 & \cdots & a_{m−1} & am \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}.
From this we immediately get the diagram (cf. the oblique diagonal in the matrix consisting of
only ones)
Em → Em−1 → · · · → E1 → E0.
Now am > 0, hence also
E0 → Em,
and we conclude that we can get from any state to any other state, so P is irreducible.
2) The probability is aj for getting from E0 to Ej. We shall use j steps in order to move from Ej
back to E0, so
P {T1 = j + 1} = aj, j = 1, 2, . . . , m.
3) The mean is
μ = E {T1} = \sum_{j=1}^{m} (j + 1) aj = 1 + \sum_{j=1}^{m} j aj.
4) Clearly,
Tk = T1 + (T2 − T1) + · · · + (Tk − Tk−1) .
Furthermore, Xj = Tj − Tj−1 has the same distribution as T1, and T1 and the Xj are mutually
independent.
Using V {T1} = σ^2 < ∞, it follows from Chebyshev's inequality that
P \left\{ \left| \frac{Tk}{k} − μ \right| ≥ ε \right\} ≤ \frac{σ^2}{k ε^2} → 0   for k → ∞.
Example 3.19 Given a Markov chain of the states E0, E1, . . . , Em (where m ≥ 2) and transition
probabilities
pi,i+1 = p, i = 0, 1, 2, . . . , m − 1,
pi,0 = q, i = 0, 1, 2, . . . , m − 1,
pm,0 = 1,
pi,j = 0 otherwise,
(where p  0, q  0, p + q = 1).
The above can be considered as a model of the following situation:
A person participates in a series of games. He has in each game the probability p of winning and
probability q of losing; if he loses in a game, he starts from the beginning in the next game. He does
the same after m won games in a row. The state Ei corresponds for i = 1, 2, . . . , m to the situation
that he has won the latest i games.
1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Prove that the Markov chain is regular.
4) Find the stationary distribution.
5) Assume that the process at time t = 0 is in state E0; let T1 denote the random variable, which
indicagtes the time of the first return to E0. Find
P {Tk = k + 1} , k = 0, 1, 2, . . . , m.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎝
q p 0 0 · · · 0 0
q 0 p 0 · · · 0 0
q 0 0 p · · · 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
q 0 0 0 · · · 0 p
1 0 0 0 · · · 0 0
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎠
.
2) The Markov chain is irreducible, because we have the transitions
E0 → E1 → E2 → · · · → Em → E0.
3) Since the Markov chain is irreducible, and p00 = q  0, it is also regular.
4) The matrix equation g P = g, where g = (g0, g1, . . . , gm), is written
g0 = (g0 + · · · + gm−1) q + gm,
g1 = g0 · p,
g2 = g1 · p,
.
.
.
.
.
.
gm = gm−1 · p,
Download free ebooks at bookboon.com
Stochastic Processes 1
46
3. Markov chains
hence
gk = g0 · pk
, k = 0, 1, . . . , m.
It follows from
1 =
m

k=0
gk = g0
m

k=0
pk
= g0 ·
1 − pm+1
1 − p
= g0 ·
1 − pm+1
q
,
that
gk = q ·
pk
1 − pm+1
, k = 0, 1, . . . , m.
5) If k ≤ m − 1, then
P {T1 = k + 1}
= P{k games are won successively, and the (k + 1)-th game is lost}
= pk
q.
If k = m, then
P {T1 = m + 1} = P{m games are won successively} = pm
.
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
47
3. Markov chains
Example 3.20 Given a Markov chain of states E1, E2, E3, E4 and E5 and of its stochastic matrix
P given by
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
1
3 0 2
3 0 0
0 1
3 0 2
3 0
0 0 1
3 0 2
3
0 0 0 1 0
⎞
⎟
⎟
⎟
⎟
⎠
.
1. Prove that the Markov chain is irreducible, and find its invariant probability vector.
A Markov chain of the states E1, E2, E3, E4 and E5 has its stochastic matrix Q given by
Q =
⎛
⎜
⎜
⎜
⎜
⎝
0 1
5 0 0 4
5
4
5 0 0 0 1
5
0 0 1 0 0
0 0 0 1 0
1
5
4
5 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
.
2. Find all probability vectors which are invariant with respect to Q.
3. Assume that an initial distribution is given by
q(0)
= (1, 0, 0, 0, 0).
Prove that limn→∞ q(n)
exists and find the limit vector.
1) The two oblique diagonals next to the main diagonal give
E5 → E4 → E3 → E2 → E1
and
E1 → E2 → E3 → E4 → E5,
so we can get from any state to any other one, hence P is irreducible.
The equations of the invariant probability vector are
p1 =
1
3
p2,
p2 = p1 +
1
3
p3,
p3 =
2
3
p2 +
1
3
p4,
p4 =
2
3
p3 + p5,
p5 =
2
3
p4,
Download free ebooks at bookboon.com
Stochastic Processes 1
48
3. Markov chains
thus
p2 = 3p1,
p3 = 3p2 − 3p1 = 9p1 − 3p2 = 6p1,
p4 = 2p3 = 12p1,
p5 = 8p1.
We get
1 = p1 + p2 + p3 + p4 + p5 = p1{1 + 3 + 6 + 12 + 8} = 30p1,
thus
p1 =
1
30
,
and hence
p =

1
30
,
1
10
,
1
5
,
2
5
,
4
15

.
2) The equations of the probability vectors are
q1 =
4
5
q2 +
1
5
q5,
q2 =
1
5
q1 +
4
5
q5,
q3 = q3,
q4 = q4,
q5 =
4
5
q1 +
1
5
q2,
from which it is seen that q3 and q4 can be chosen arbitrarily, if only
q3 ≥ 0, q4 ≥ 0 and q3 + q4 ≤ 1.
The remaining equations are

5q1 − 4q2 = q5,
−q1 + 5q2 = 4q5.
thus q1 = q2 = q5.
The invariant probability vectors of Q are given by
q = (x, x, y, z, x), x, y, z ≥ 0 and 3x + y + z = 1.
3) We have for every n that q
(n)
3 = q
(n)
4 = 0, so it suffices to consider
Q(1)
=
⎛
⎝
0 1
5
4
5
4
5 0 1
5
1
5
4
5 0
⎞
⎠ .
Download free ebooks at bookboon.com
Stochastic Processes 1
49
3. Markov chains
Now Q(1)
Q(1)
contains only positive elements, so Q(1)
is regular.
The equations of the corresponding simplified invariant probability vector are
q1 =
4
5
q2 +
1
5
q5,
q2 =
1
5
q1 +
4
5
q5,
q5 =
4
5
q1 +
1
5
q2.
There are the same equations as in 2., and since q1 + q2 + q5 = 1, we have q1 = q2 = q5 =
1
3
. It
follows that
lim
n→∞
q(n)
=

1
3
,
1
3
, 0, 0,
1
3

.
Example 3.21 Given a Markov chain of the 5 states E1, E2, E3, E4 and E5 and the transition
probabilities
pi,i+1 =
i
i + 2
, pi,1 =
2
i + 2
, i = 1, 2, 3, 4;
p5,1 = 1; pi,j = 0 otherwise.
1) Find the stochastic matrix.
2) Prove the Markov chain is irreducible.
3) Prove that the Markov chain is regular.
4) Find the stationary distribution (the invariant probability vector).
5) Assume that the process at time t = 0 is in state E1. Denote by T the random variable, which
indicates the time of the first return to E1.
Find P{T = k}, k = 1, 2, 3, 4, 5, and compute the mean and variance of T.
1) The matrix P is given by
P =
⎛
⎜
⎜
⎜
⎜
⎝
2
3
1
3 0 0 0
1
2 0 1
2 0 0
2
5 0 0 3
5 0
1
3 0 0 0 2
3
1 0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
.
2) It follows from the diagram
E1 → E2 → E3 → E4 → E5 → E1,
that P is irreducible.
Download free ebooks at bookboon.com
Stochastic Processes 1
50
3. Markov chains
3) Since p1,1 =
2
3
 0, and P is irreducible, we conclude that P is regular.
4) The equations of the invariant probability vector are
p1 =
2
3
p1 +
1
2
p2 +
2
5
p3 +
1
3
p4 + p5,
p2 =
1
3
p1, p3 =
1
2
p2, p4 =
3
5
p3, p5 =
2
3
p4.
We get successively,
p4 =
3
2
p5,
p3 =
5
3
p4 =
5
2
p5,
p2 = 2p3 = 5p5,
p1 = 3p2 = 15p5,
so
1 = p1 + p2 + p3 + p4 + p5
= p5

1 +
3
2
+
5
2
+ 5 + 15

= 25p5.
your chance
to change
the world
Here at Ericsson we have a deep rooted belief that
the innovations we make on a daily basis can have a
profound effect on making the world a better place
for people, business and society. Join us.
In Germany we are especially looking for graduates
as Integration Engineers for
• Radio Access and IP Networks
• IMS and IPTV
We are looking forward to getting your application!
To apply and for all current job openings please visit
our web page: www.ericsson.com/careers
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
51
3. Markov chains
Thus
p5 =
1
25
, p4 =
3
50
, p3 =
1
10
, p2 =
1
5
and p1 =
3
5
,
and hence
p =

3
5
,
1
5
,
1
10
,
3
50
,
1
25

.
5) We immediately get,
P{T = 1} =
2
3
and P{T  1} =
1
3
.
We are in the latter case in state E2, so
P{T = 2} =
1
3
·
1
2
=
1
6
and P{T  2} =
1
6
.
In the latter case we are in state E3, so
P{T = 3} =
1
6
·
2
15
=
1
15
and P{T  3} =
1
6
·
3
5
=
1
10
.
In the latter case we are in state E4, so
P{T = 4} =
1
3
·
1
10
=
1
30
and P{Y = 5} =
2
3
·
1
10
=
1
15
.
Summing up,
P{T = 1} =
2
3
, P{T = 2} =
1
6
, P{T = 3} =
1
15
,
P{T = 4} =
1
30
, P{T = 5} =
1
15
.
The mean is
E{T} =
2
3
+ 2 ·
1
6
+ 3 ·
1
15
+ 4 ·
1
30
+ 5 ·
1
15
= 1 +
1
3
+
1
3
=
5
3
.
Then we get
E
%
T2

=
2
3
+ 4 ·
1
6
+ 9 ·
1
15
+ 16 ·
1
30
+ 25 ·
1
15
=
2
3
+
2
3
+
3
5
+
8
15
+
5
3
= 3 +
17
15
=
62
15
,
hence
V {T} = E
%
T2

− (E{T})2
=
62
15
−
25
9
=
1
45
(186 − 125) =
61
45
.
Download free ebooks at bookboon.com
Stochastic Processes 1
52
3. Markov chains
Example 3.22 Consider a Markov chain of the m states E1, E2, . . . , Em (where m ≥ 3) and the
transition probabilities
pi,i+1 =
i
i + i + 2
, pi,1 =
1
i + 2
, i = 1, 2, . . . , m − 1,
pm,1 = 1, pi,j = 0 otherwise.
1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Prove that the Markov chain is regular.
4) Find the stationary distribution.
5) Assume that the process at time t = 0 is in state E1. Let T denote the random variable, which
indicates the time of the first return to E1. find
P{T = k}, k = 1, 2, . . . , m.
6) Find the mean E{T}.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎝
2
3
1
3 0 0 · · · 0 0
2
4 0 2
4 0 · · · 0 0
2
5 0 0 3
5 · · · 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
2
m+1 0 0 0 · · · 0 m−1
m+1
1 0 0 0 · · · 0 0
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎠
.
2) We get immediately the diagram
E1 → E2 → E3 → · · · → Em.
Since also Em → E1, we conclude that the Markov chain is irreducible.
3) We have proved that P is irreducible, and since p1,1 = 2
3  0, it follows that P is regular.
4) The equations of the stationary distribution are
g1 =
2
3
g1 +
2
4
g2 +
2
5
g3 + · · · +
2
m + 1
gm−1 + gm,
g2 =
1
3
g1,
g3 =
2
4
g2,
.
.
.
.
.
.
gm−1 =
m − 2
m
gm−2,
gm =
m − 1
m + 1
gm−1,
Download free ebooks at bookboon.com
Stochastic Processes 1
53
3. Markov chains
thus
g1 =
3
2
g2,
g2 =
4
2
g3,
.
.
.
.
.
.
gm−2 =
m
m − 2
gm−1,
gm−1 =
m + 1
m − 1
gm.
Since
gj =
j + 2
j
gj+1,
it follows by recursion for j ≤ m − 2 that
gj =
j + 2
j
gj+1 =
j + 2
j
·
j + 3
j + 1
·
j + 4
j + 2
· · ·
m − 1
m − 3
·
m
m − 2
·
m + 1
m − 1
gm =
m(m + 1)
j(j + 1)
gm.
A check shows that this is also true for j = m − 1 and j = m, so in general,
gj =
m(m + 1)
j(j + 1)
gm, j = 1, 2, . . . , m.
Then
1 =
m

j=1
gj = m(m + 1)gm
m

j=1
1
j(j + 1)
= m(m + 1)

1 −
1
m + 1

gm = m2
gm,
i.e.
gm =
1
m2
, and gj =
m + 1
m
·
1
j(j + 1)
, j = 1, . . . , m.
5) Clearly,
P{T = 1} =
2
3
,
so by inspecting the matrix we get
P{T = 2} =
1
3
·
2
4
=
1
6
,
and
P{T = 3} =
1
3
·
2
4
·
2
5
=
1
15
.
In order to compute P{T = k} we must in step number m − 1 be on the oblique diagonal, so
P{T = k} =
1
3
·
2
4
·
3
5
· · ·
k − 1
k + 1
·
2
k + 2
=
1 · 2 · 2
k(k + 1)(k + 2)
=
4
k(k + 1)(k + 2)
.
Download free ebooks at bookboon.com
Stochastic Processes 1
54
3. Markov chains
A small check shows that this result is correct for k = 1, 2, 3.
Finally,
P{T = n} =
1
3
·
2
4
· · ·
m − 1
m + 1
=
2
m(m + 1)
.
6) The mean is
E{T} =
m

k=1
k · P{T = k} =
m−1

k=1
4
(k + 1)(k + 2)
+
2
m + 1
= 4
m−1

k=1

1
k + 1
−
1
k + 2

+
2
m + 1
= 4

1
2
−
1
m + 1

+
2
m + 1
= 2

1 −
1
m + 1

=
2m
m + 1
.
what‘s missing in this equation?
MAERSK INTERNATIONAL TECHNOLOGY  SCIENCE PROGRAMME
You could be one of our future talents
Are you about to graduate as an engineer or geoscientist? Or have you already graduated?
If so, there may be an exciting future for you with A.P. Moller - Maersk.
www.maersk.com/mitas
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
55
3. Markov chains
Example 3.23 Given a Markov chain of the m states E1, E2, . . . , Em (where m ≥ 3) and the
transition probabilities
pi,i = 1 − 2p, i = 1, 2, . . . , m,
pi,i−1 = pi,i+1 = p, i = 2, . . . , m − 1,
p1,2 = pm,m−1 = 2p,
pi,j = 0 otherwise,
where p ∈
#
0,
1
2
#
.
1) Find then stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Find the invariant probability vector.
4) Prove that the Markov chain is regular for p ∈
#
0,
1
2

, and not regular for p =
1
2
.
5) Compute p
(2)
1,1.
6) In the case of m = 5 and p =
1
2
one shall compute
lim
n→∞
p
(2n−1)
1,1 and lim
n→∞
p
(2n)
1,1 .
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎝
1 − 2p 2p 0 0 · · · 0 0 0
p 1 − 2p p 0 · · · 0 0 0
0 p 1 − 2p p · · · 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 · · · p 1 − 2p p
0 0 0 0 · · · 0 2p 1 − 2p
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎠
.
2) It follows from the diagram
E1 → E2 → · · · → Em → Em−1 → · · · → E1,
that P is irreducible.
3) The equations are
p1 = (1 − 2p)p1 + p · p2,
p2 = 2p · p1 + (1 − 2p)p2 + p · p3,
pi = p · pi−1 + (1 − 2p)pi + p · pi+1, i = 3, 4, . . . , m − 2,
pm−1 = p · pm−2 + (1 − 2p)pm−1 + 2p · pm,
pm = p · pm−1 + (1 − 2p)pm.
Download free ebooks at bookboon.com
Stochastic Processes 1
56
3. Markov chains
They are reduced to
p2 = 2p1,
p3 = 2p2 − 2p1,
pi+1 = 2pi − pi−1, i = 3, 4, . . . , m − 2,
pm−2 = 2pm−1 − pm,
pm−1 = 2pm.
Hence
p2 = 2p1, p3 = 2p2 − 2p1 = 4p1 − 2p1 = 2p1,
and
pi+1 − pi = pi − pi−1 = · · · = p3 − p2 = 2p1 − 2p1 = 0,
thus
pm−1 = pm−2 = · · · = p3 = p2.
Finally, pm−1 = 2pm. Summing up we get
pm = p1 and p2 = · · · = pm−1 = 2p1,
whence
1 =
m

n=1
pn = p1{1 + 2(m − 2) + 1} = 2(m − 1)p1,
i.e.
p1 =
1
2(m − 1)
,
and the invariant probability vector is
p =

1
2(m − 1)
,
1
m − 1
,
1
m − 1
, · · · ,
1
m − 1
,
1
2(m − 1)

.
4) If p =
1
2
, then all pi,i = 0. Since P according to 2. is irreducible, P is regular.
If p =
1
2
, then
p =
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎝
0 1 0 0 · · · 0 0 0
1
2 0 1
2 0 · · · 0 0 0
0 1
2 0 1
2 · · · 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 · · · 1
2 0 1
2
0 0 0 0 · · · 0 1 0
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎠
.
It follows that if p
(2)
i,j = 0 then i − j is an even integer, and if p
(3)
i,j = 0, then i − j is an odd integer,
etc.. It therefore follows that Pn
will always contain zeros, and we conclude that P is not regular
for p = 1
2 .
Download free ebooks at bookboon.com
Stochastic Processes 1
57
3. Markov chains
5) The element p
(2)
1,1 of P2
is
p
(2)
1,1 = (1 − 2p)(1 − 2p) + 2p · p = 1 − 4p + 6p2
.
6) If we put m = 5 and p =
1
2
, then we get the stochastic matrix
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
1
2 0 1
2 0 0
0 1
2 0 1
2 0
0 0 1
2 0 1
2
0 0 0 1 0
⎞
⎟
⎟
⎟
⎟
⎠
.
Since i = j = 1, clearly i − j = 0 is an even integer, so it follows immediately from 4. that
p
(2n−1)
1,1 = 0 for all n, and hence
lim
n→∞
p
(2n−1)
1,1 = 0.
In the computation of p
(2)
1,1 we consider instead q
(n)
1,1 i Qn
=

P2 n
, where
Q = P2
=
⎛
⎜
⎜
⎜
⎜
⎝
1
2 0 1
2 0 0
0 3
4 0 1
4 0
1
4 0 1
2 01
4
0 1
4 03
4 0
0 0 1
2 0 1
2
⎞
⎟
⎟
⎟
⎟
⎠
.
By 3. the invariant probability vector is given by

1
8
,
1
4
,
1
4
,
1
4
,
1
8

,
and
1
n
n

i=1
P → G, for n → ∞,
where each row in G is
1
8 , 1
4 , 1
4 , 1
4 , 1
8 . From p
(2i−1)
1,1 = 0 follows that
1
2n
2n

i=1
p
(i)
1,1 =
1
2n
n

j=1
p
(2j)
1,1 → g1 =
1
8
for n → ∞.
If limn→∞ p
(2n)
1,1 exists, then this will imply that
1
2
lim
n→∞
p
(2n)
1,1 =
1
8
, hence lim
n→∞
p
(2n)
1,1 =
1
4
.
Now, since
q
(n+1)
1,1 = p
(2n+2)
1,1 =
1
2
p
(2n)
1,1 +
1
4
p
(2n)
1,3  p
(2n)
1,1 = q
(n)
1,1 ,
the sequence

p
(n)
1,1

is decreasing and bounded from below, hence convergent. We therefore con-
clude that
lim
n→∞
p
(2n)
1,1 =
1
4
.
Download free ebooks at bookboon.com
Stochastic Processes 1
58
3. Markov chains
Example 3.24 A Markov chain has its stochastic matrix Q given by
Q =
⎛
⎝
0 1
2
1
2
1
3 0 2
3
1
3
2
3 0
⎞
⎠ .
1. Prove that the Markov chain is regular, and find the invariant probability vector.
Another Markov chain with 5 states has its stochastic matrix P given by
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1
2
1
2 0 0
1
3 0 2
3 0 0
1
3
2
3 0 0 0
1
3
1
3 0 1
6
1
6
1
2 0 0 3
8
1
8
⎞
⎟
⎟
⎟
⎟
⎠
.
2. Prove that this Markov chain is not irreducible.
3. Prove for any initial distribution
p(0)
=

p
(0)
1 , p
(0)
2 , p
(0)
3 , p
(0)
4 , p
(0)
5

that
p
(n)
4 + p
(n)
5 ≤
1
2
'
p
(n−1)
4 + p
(n−1)
5
(
, n ∈ N,
and then prove that
p
(n)
4 + p
(n)
5 ≤

1
2
n
, n ∈ N.
4. Prove that the limit limn→∞ p(n)
exists and find the limit vector.
1) It follows from
Q =
⎛
⎝
0 1
2
1
2
1
3 0 2
3
1
3
2
3 0
⎞
⎠ that Q2
=
⎛
⎝
1
3
1
3
1
3
2
9
11
18
1
6
2
9
1
6
11
18
⎞
⎠ .
All elements of Q2
are positive, thus Q is regular.
The invariant probability vector g = (g1, g2, g3) satisfies
i) g1, g2, g3 ≥ 0, ii) g1 + g2 + g3 = 1, iii) g Q = g.
The latter condition iii) is written
⎧
⎨
⎩
g1 = 1
3 g2 + 1
3 g3, (1)
g2 = 1
2 g1 + 2
3 g3, (2)
g3 = 1
2 g1 + 2
3 g2. (3)
Download free ebooks at bookboon.com
Stochastic Processes 1
59
3. Markov chains
When we insert (1) into (2), we get
g2 =

1
6
g2 +
1
6
g3

+
2
3
g3, thus g2 = g3.
Then it follows from (1) that g1 = 2
3 g2. Furthermore, ii) implies that
2
3
g2 + g2 + g2 =
8
3
g2 = 1,
thus
g2 =
3
8
= g3 and g1 =
2
3
·
3
8
=
2
8
,
so
g =

2
8
,
3
8
,
3
8

=

1
4
,
3
8
,
3
8

.
2) We notice that Q is included in P as the upper (3 × 3) sub-matrix
P =
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎝
0 1
2
1
2 | 0 0
1
3 0 2
3 | 0 0
1
3
2
3 0 | 0 0
− − − −
1
3
1
3 0 1
6
1
6
1
2 0 0 3
8
1
8
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎠
,
hence {E1, E2, E3} is a proper closed subset. Then P is not irreducible.
It all starts at Boot Camp. It’s 48 hours
that will stimulate your mind and
enhance your career prospects. You’ll
spend time with other students, top
Accenture Consultants and special
guests. An inspirational two days
packed with intellectual challenges
and activities designed to let you
discover what it really means to be a
high performer in business. We can’t
tell you everything about Boot Camp,
but expect a fast-paced, exhilarating
and intense learning experience.
It could be your toughest test yet,
which is exactly what will make it
your biggest opportunity.
Find out more and apply online.
Choose Accenture for a career where the variety of opportunities and challenges allows you to make a
difference every day. A place where you can develop your potential and grow professionally, working
alongside talented colleagues. The only place where you can learn from our unrivalled experience, while
helping our global clients achieve high performance. If this is your idea of a typical working day, then
Accenture is the place to be.
Turning a challenge into a learning curve.
Just another day at the office for a high performer.
Accenture Boot Camp – your toughest test yet
Visit accenture.com/bootcamp
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
60
3. Markov chains
3) It follows from p(n)
= p(n−1)
P that
p
(n)
4 =
1
6
p
(n−1)
4 +
3
8
p
(n−1)
5 ,
p
(n)
5 =
1
6
p
(n−1)
4 +
1
8
p
(n−1)
5 ,
hence by addition,
p
(n)
4 + p
(n)
5 =
1
3
p
(n−1)
4 +
1
2
p
(n−1)
5 ≤
1
2
'
p
(n−1)
4 + p
(n−1)
5
(
.
When this inequality is iterated, we get
p
(n)
4 + p
(n)
5 ≤
1
2
'
p
(n−1)
4 + p
(n−1)
5
(
≤ · · · ≤

1
2
n '
p
(0)
4 + p
(0)
5
(
≤

1
2
n
.
4) It follows from 3. that
p
(n)
4 → 0 and p
(n)
5 → 0 for n → ∞,
so we end in the closed subset {E1, E2, E3}. Inside this closed subset the behaviour is governed
by the stochastic matrix Q, the probability vector of which was found in 1.. Hence it also follows
for P that
p(n)
→

1
4
,
3
8
,
3
8
, 0, 0

for n → ∞.
Download free ebooks at bookboon.com
Stochastic Processes 1
61
3. Markov chains
Example 3.25 Given a Markov chain of 5 states E1, E2, E3, E4 and E5 and transition probabilities
p11 = p55 = 1,
p23 = p34 = p45 =
2
3
,
p21 = p32 = p43 =
1
3
,
pij = 0 otherwise.
1) Find the stochastic matrix P.
2) Find all invariant probability vectors of P.
3) Compute the matrix P2
.
4) Prove for any initial distribution
p(0)
=

p
(0)
1 , p
(0)
2 , p
(0)
3 , p
(0)
4 , p
(0)
5

that
p
(n+2)
2 + p
(n+2)
3 + p
(n+2)
4 ≤
2
3
'
p
(n)
2 + p
(n)
3 + p
(n)
4
(
, n ∈ N0,
and then prove that
lim
n→∞
p
(n)
2 = lim
n→∞
p
(n)
4 = lim
n→∞
p
(n)
4 = 0.
5) Let the initial distribution be given by
q(0)
= (0, 1, 0, 0, 0).
Find limn→∞ q(n)
.
6) We assume that the process at time t = 0 is in state E2. Let T denote the random variable, which
indicates the time when the process for the first time gets to either the state E1 or the state E5.
Find the mean E{T}.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
1 0 0 0 0
1
3 0 2
3 0 0
0 1
3 0 2
3 0
0 0 1
3 0 2
3
0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎠
.
Remark 3.2 We notice that the situation can be interpreted as a random walk on the set
{1, 2, 3, 4, 5} with two absorbing barriers. ♦
Download free ebooks at bookboon.com
Stochastic Processes 1
62
3. Markov chains
2) The equations of the invariant probability vectors are
g1 = g1 +
1
3
g2,
g2 =
1
3
g3,
g3 =
2
3
g2 +
1
3
g4,
g4 =
2
3
g3
g5 =
2
3
g4 + g5,
from which it is immediately seen that
g2 = 0, g3 = 0, g4 = 0,
so the only constraint on g1 ≥ 0 and g5 ≥ 0 is that g1 + g5 = 1. Putting g1 = x ∈ [0, 1], it follows
that all invariant probability vectors are given by
gx = (x, 0, 0, 0, 1 − x), x ∈ [0, 1].
Remark 3.3 We note that we have a single infinity of invariant probability vectors. ♦
3) By a simple computation,
P2
=
⎛
⎜
⎜
⎜
⎜
⎝
1 0 0 0 0
1
3 0 2
3 0 0
0 1
3 0 2
3 0
0 0 1
3 0 2
3
0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎠
⎛
⎜
⎜
⎜
⎜
⎝
1 0 0 0 0
1
3 0 2
3 0 0
0 1
3 0 2
3 0
0 0 1
3 0 2
3
0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
1 0 0 0 0
1
3
2
9 0 4
9 0
1
9 0 4
9 0 4
9
0 1
9 0 2
9
2
3
0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎠
.
4) It follows from p(n+2)
= p(n)
P2
that
p(n+2)
=
⎛
⎜
⎜
⎜
⎜
⎜
⎝
p
(n)
1 + 1
3 p
(n)
2 + 1
9 p
(n)
3
2
9 p
(n)
2 + 1
9 p
(n)
4
4
9 p
(n)
3
4
9 p
(n)
2 + 2
9 p
(n)
4
4
9 p
(n)
3 + 2
3 p
(n)
4 + p
(n)
5
⎞
⎟
⎟
⎟
⎟
⎟
⎠
,
hence
p
(n+2)
2 + p
(n+2)
3 + p
(n+2)
4 =

2
9
p
(n)
2 +
1
9
p
(n)
4

+
4
9
p
(n)
3 +

4
9
p
(n)
2 +
2
9
p
(n)
4

=

2
9
+
4
9

p
(n)
2 +
4
9
p
(n)
3 +

1
9
+
2
9

p
(n)
4
=
2
3
p
(n)
2 +
4
9
p
(n)
3 +
1
3
p
(n)
4 ≤
2
3
'
p
(n)
2 + p
(n)
3 + p
(n)
4
(
.
Download free ebooks at bookboon.com
Stochastic Processes 1
63
3. Markov chains
Since p
(n)
i ≥ 0, this implies that
0 ≤ p
(2n)
2 + p
(2n)
3 + p
(2n)
4 ≤

2
3
n '
p
(0)
2 + p
(0)
3 + p
(0)
4
(
→ 0 for n → ∞,
and
0 ≤ p
(2n+1)
2 + p
(2n+1)
3 + p
(2n+1)
4 ≤

2
3
n '
p
(1)
2 + p
(1)
3 + p
(1)
4
(
→ 0 for n → ∞,
thus
lim
n→∞
'
p
(n)
2 + p
(n)
3 + p
(n)
4
(
= 0.
Now, p
(n)
i ≥ 0, so we conclude that
lim
n→∞
p
(n)
2 = lim
n→∞
p
(n)
3 = lim
n→∞
p
(n)
4 = 0.
5) Let us rename the states to E
0 , E
1 , E
2 , E
3 , E
4 . Let us first compute the probability of starting
at E
1 and then ending again in E
1 . The parameters are here
N = 4, k = 1, q =
1
3
, p =
2
3
,
so this probability is given by some known formula,

1
2
1
−

1
2
4
1 −

1
4
4 =
7
15
.
It follows from the first results of the example that the structure of the limit vector is (x, 0, 0, 0, 1−
x). Hence
q(n)
→

7
15
, 0, 0, 0,
8
15

.
6) Here it is again advantageous to use the theory of the ruin problem. We get
μ = E{T} =
1
−
1
3
−
4
−
1
3
·
1 −

1
2
1
1 −

1
2
4 = −3 + 12 ·
1
2
15
16
= −3 +
32
5
=
17
5
so the mean is
17
5
.
Remark 3.4 The questions 5 and 6 can of course also be solved without using the theory of the ruin
problem. ♦
Download free ebooks at bookboon.com
Stochastic Processes 1
64
3. Markov chains
Example 3.26 Given a Markov chain of the states E1, E2, . . . , Em, where m ≥ 3, and of the
transition probabilities
pi,i = 1 − 2p, i = 2, 3, . . . , m;
pi,i+1 = p, i = 1, 2, . . . , m − 1;
pi,i−1 = p, i = 2, 3, . . . , m − 1;
p1,1 = 1 − p;
pm,m−1 = 2p;
pi,j = 0 otherwise,
(here p is a number in the interval
)
0, 1
2
)
).
1. Find the stochastic matrix.
2. Prove that the Markov chain is irreducible.
3. Prove that the Markov chain is regular.
4. Find the invariant probability vector of P.
If the process to time t = 0 is in state Ek, then the process will with probability 1 på reach either state
E1 or Em at some time.
Let ak denote the probability of getting to E1 before Em, when we start at Ek, for k = 1, 2, . . . , m.
In particular, a1 = 1 and am = 0.
5. Prove that
(4) ak = p ak+1 + (1 − 2p)ak + p ak−1, k = 2, 3, . . . , m − 1.
6. Apply (4) to find the probabilities ak, k = 2, 3, . . . , m − 1.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎝
1 − p p 0 0 · · · 0 0 0
p 1 − 2p p 0 · · · 0 0 0
0 p 1 − 2p p · · · 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 · · · 1 − 2p p 0
0 0 0 0 · · · p 1 − 2p p
0 0 0 0 · · · 0 2p 1 − 2p
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎠
.
2) Since p = 0, it follows by the two oblique diagonals that we have the following transitions
E1 → E2 → · · · → Em−1 → Em → Em−1 · · · → E2 → E1,
proving that P is irreducible.
3) It follows from P being irreducible, and p1,1 = 1 − p  0 for p ∈
)
0, 1
2
)
, that P is regular.
Download free ebooks at bookboon.com
Stochastic Processes 1
65
3. Markov chains
4) The equations of the invariant probability vector are
g1 = (1 − p)g1 + p g2,
gj = p gj−1 + (1 − 2p)gj + p gj+1, for j = 2, 3, . . . , m − 2,
gm−1 = p gm−2 + (1 − 2p)gm−1 + 2p gm,
gm = p gm−1 + (1 − 2p)gm,
thus
g2 = g1,
2gj = gj−1 + gj+1, for j = 2, 3, . . . , m − 2,
2gm−1 = gm−2 + 2gm,
2gm = gm−1.


       
In Paris or Online
International programs taught by professors and professionals from all over the world
BBA in Global Business
MBA in International Management / International Marketing
DBA in International Business / International Management
MA in International Education
MA in Cross-Cultural Communication
MA in Foreign Languages
Innovative – Practical – Flexible – Affordable
Visit: www.HorizonsUniversity.org
Write: Admissions@horizonsuniversity.org
Call: 01.42.77.20.66 www.HorizonsUniversity.org
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
66
3. Markov chains
We get by a backwards recursion that
2gmgm−1 = gm−2 = · · · = g2 = g1,
so
1 =
m

k=1
gk = 2(m − 1)gm + gm = (2m − 1)gm,
and the invariant probability vector is
g =

2
2m − 1
,
2
2m − 1
, · · · ,
2
2m − 1
,
1
2m − 1

.
5) If the process starts at state Ek, k = 2, 3, . . . , m − 1, we end after 1 step with probability p in
state Ek−1, with probability 1 − 2p in Ek, and with probability p in Ek+1. If ak is defined as
above, this gives precisely (4), so
ak = p ak+1 + (1 − 2p)ak + p ak−1, k = 2, 3, . . . , m − 1.
6) A reduction of (4) gives
2ak = ak+1 + ak−1, k = 2, 3, . . . , m − 1,
or more convenient
ak−1 − ak = ak − ak+1, k = 2, 3, . . . , m − 1.
Hence
1 − a2 = a1 − a2 = a2 − a3 = · · · = am−1 − am = am−1 − 0 = am−1,
so
1 = a2 + am−1.
On the other hand,
a2 = (a2 − a3) + (a3 − a4) + · · · + (am−1 − am) = (m − 2)am−1,
hence by insertion
1 = a2 + am−1 = (m − 1)am−1, thus am−1 =
1
m − 1
.
In general,
ak = (ak − ak+1) + (ak+1 − ak+2) + · · · + (am−1 − am)
= (m − k)am−1 =
m − k
m − 1
for k = 2, . . . , m − 1.
A simple check shows that this is also true for k = 1 and k = m, so
ak =
m − k
m − 1
, k = 1, 2, . . . , m.
Download free ebooks at bookboon.com
Stochastic Processes 1
67
3. Markov chains
E_5
E_4
E_3
E_2
E_1
Example 3.27 A circle is divided into 5 arcs E1, E2, E3, E4 and E5.
A particle moves in the following way between the 5 states:
At every time unit there is the probability p ∈ ]0, 1[ of walking two steps forward in the positive direction,
and the probability q = 1 − p of walking one step backwards, so the transition probabilities are
p1,3 = p2,4 = p3,5 = p4,1 = p5,2 = p,
p1,5 = p2,1 = p3,2 = p4,3 = p5,4 = q,
pi,j = 0 otherwise.
1) Find the stochastic matrix P of the Markov chain.
2) Prove that the Markov chain is irreducible.
3) Find the invariant probability vector.
4) Assume that the particle at t = 0 is in state E1.
Find the probability that the particle is in state E1 for t = 3, and find the probability that the
particle is in state E1 for t = 4.
5) Check if the Markov chain is regular.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 0 p 0 q
q 0 0 p 0
0 q 0 0 p
p 0 q 0 0
0 p 0 q 0
⎞
⎟
⎟
⎟
⎟
⎠
.
2) The oblique diagonal in the stochastic matrix below the main diagonal gives the transitions
E5 → E4 → E3 → E2 → E1.
Since also E1 → E5, we conclude that P is irreducible.
Download free ebooks at bookboon.com
Stochastic Processes 1
68
3. Markov chains
3) Since the matrix is double stochastic, the invariant probability vector is

1
5
,
1
5
,
1
5
,
1
5
,
1
5

.
4) Let the particle start at E1. Then we have the tree
E1
p
→ E3
p
→ E5
p
→ E2
p
→ E4
q ↓ q ↓ q ↓ q ↓
E5
p
→ E2
p
→ E4
p
→ E1
q ↓ q ↓ q ↓
E4
p
→ E1
p
→ E3
q ↓ q ↓
E3
p
→ E5
q ↓
E2
We see that we can back to E1 in three steps by
E1
p
→ E3
q
→ E2
q
→ E1, probability p · q2
,
E1
q
→ E5
p
→ E2
q
→ E1, probability p · q2
,
E1
q
→ E5
q
→ E4
p
→ E1, probability p · q2
,
so
P{T = 3} = 3pq2
.
By 2020, wind could provide one-tenth of our planet’s
electricity needs. Already today, SKF’s innovative know-
how is crucial to running a large proportion of the
world’s wind turbines.
Up to 25 % of the generating costs relate to mainte-
nance. These can be reduced dramatically thanks to our
systems for on-line condition monitoring and automatic
lubrication. We help make it more economical to create
cleaner, cheaper energy out of thin air.
By sharing our experience, expertise, and creativity,
industries can boost performance beyond expectations.
Therefore we need the best employees who can
meet this challenge!
The Power of Knowledge Engineering
Brain power
Plug into The Power of Knowledge Engineering.
Visit us at www.skf.com/knowledge
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
69
3. Markov chains
Analogously we reach E1 in four steps along four paths, all of probability p3
q, so
P{T = 4} = 4p3
q.
5) It follows from the tree that we from E1 with positive probability can reach any other state in 4
steps. It follows by the symmetry that this also holds for any other initial state Ek, so P4
has
only positive elements, so we have that P is regular.
Example 3.28 Given a Markov chain with 5 states E0, E1, E2, E3 and E4 and the transition prob-
abilities
p0,2 = p4,2 = 1,
p1,2 = p2,3 = p3,4 =
1
4
,
p1,0 = p2,1 = p3,2 =
3
4
,
pi,j = 0 otherwise.
This can be considered as a model of the following situation:
Two gamblers, Peter and Poul, participate in a series of games, in each of which Peter wins with the
probability
1
4
and loses with the probability
3
4
; if Peter wins, he receives 1 $ from Paul, and if he loses,
he gives 1 $ to Paul.
The two gamblers have in total 4 $, and the state Ei corresponds to that Peter has i $ (and Paul has
4 − i $). To avoid that the game stops, because one of the gamblers has 0 $, they agree that in that
case the gambler with 0 $ then receives 2 $ from the gambler with 4 $.
1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Find the invariant probability vector.
4) Compute p
(2)
22 and p
(3)
22 .
5) Check if the Markov chain is regular.
6) At time t = 0 the process is in state E0. Find the probability that the process returns to E0, before
it arrives at E4.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 0 1 0 0
3
4 0 1
4 0 0
0 3
4 0 1
4 0
0 0 3
4 0 1
4
0 0 1 0 0
⎞
⎟
⎟
⎟
⎟
⎠
.
2) An analysis of P provides us with the diagram
E0 ← E1 ↔ E2 ↔ E3 → E4
↓  ↓
E2 = E2 = E2
Download free ebooks at bookboon.com
Stochastic Processes 1
70
3. Markov chains
It follows from this diagram that we can come from any state Ei to any other state Ej, so the
Markov chain is irreducible.
3) The equations of the invariant probability vector are
g0 =
3
4
g1,
g1 =
3
4
g2,
g2 = g0 +
1
4
g1 +
3
4
g3 + g4,
g3 =
1
4
g2,
g4 =
1
4
g3,
thus
g3 = 4g4, g2 = 16g4, g1 = 12g4, g0 = 0g4,
and
1 = g0 + g1 + g2 + g3 + g4 = (9 + 12 + 16 + 4 + 1)g4 = 42g4,
hence
g = (g0, g1, g2, g3, g4) =

3
14
,
2
7
,
8
21
,
2
21
,
1
42

.
4) It follows from
P2
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0 1 0 0
3
4 0 1
4 0 0
0 3
4 01
4 0
0 0 3
4 0 1
4
0 0 1 0 0
⎞
⎟
⎟
⎟
⎟
⎠
⎛
⎜
⎜
⎜
⎜
⎝
0 0 1 0 0
3
4 0 1
4 0 0
0 3
4 01
4 0
0 0 3
4 0 1
4
0 0 1 0 0
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
0 3
4 0 1
4 0
0 3
16
3
4
1
16 0
9
16 0 3
8 0 1
16
0 9
16
1
4
3
16 0
0 3
4 0 1
4 0
⎞
⎟
⎟
⎟
⎟
⎠
,
that p
(2)
22 =
3
8
. (Notice that the indices start at 0). Hence
p
(3)
22 =

0,
3
4
, 0,
1
4
, 0

·

0,
3
4
,
3
8
,
1
4
, 0

=
9
16
+
1
16
=
5
8
.
5) From P2
we get
E0 → E1 → E2 and E2 → E0,
thus
E0 ↔ E1 ↔ E2.
Download free ebooks at bookboon.com
Stochastic Processes 1
71
3. Markov chains
In this case we get the diagram
E0
 
E3 → E1 ← E4,
 
E2
and P2
is seen to be irreducible.
Since p
(2)
22  0, it follows that P2
, and hence also P, is regular.
6) When t = 0 we are in state E0.
When t = 1 we are with probability 1 in state E2
If t ≥ 1 we denote by ak the probability that we can get from Ek to E0 before E4. Since t  0, it
follows that a0 = 1 and a4 = 0, and
a2 =
3
4
a1 +
1
4
a3,
a1 =
3
4
a0 +
1
4
a2 =
3
4
+
1
4
a2,
a3 =
3
4
a2 +
1
4
a4 =
3
4
a2.
www.simcorp.com
MITIGATE RISK REDUCE COST ENABLE GROWTH
The financial industry needs a strong software platform
That’s why we need you
SimCorp is a leading provider of software solutions for the financial industry. We work together to reach a common goal: to help our clients
succeed by providing a strong, scalable IT platform that enables growth, while mitigating risk and reducing cost. At SimCorp, we value
commitment and enable you to make the most of your ambitions and potential.
Are you among the best qualified in finance, economics, IT or mathematics?
Find your next challenge at
www.simcorp.com/careers
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
72
3. Markov chains
When the latter two equations are inserted into the first one, we get
a2 =
3
4

3
4
+
1
4
a2

+
1
4
·
3
4
a2 =
9
16
+
3
16
a2 +
3
16
a2,
thus
16a2 = 9 + 6a2 or a2 =
9
10
.
The wanted probability is a2 =
9
10
, because we might as well start from E2 as from E0.
Example 3.29 Given a Markov chain of 5 states E1, E2, E3, E4 and E5 and the transition proba-
bilities
p1,2 = p1,3 = p1,4 = p1,5 =
1
4
,
p2,3 = p2,4 = p2,5 =
1
3
,
p3,4 = p3,5 =
1
2
, p4,5 = 1,
p5,1 = 1, pi,j = 0 otherwise.
1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Find the invariant probability vector.
4) Check if the Markov chain is regular.
5) To time t = 0 the process is in state E1. Denote by T1 the random variable, which indicates the
time of the first return to E1.
Compute
P {T1 = k} , k = 2, 3, 4, 5,
and find the mean of T1.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1
4
1
4
1
4
1
4
0 0 1
3
1
3
1
3
0 0 0 1
2
1
2
0 0 0 0 1
1 0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
.
2) It follows from the diagram
E5 → E1 → E2 → E3 → E4 → E5,
that the Markov chain is irreducible.
Download free ebooks at bookboon.com
Stochastic Processes 1
73
3. Markov chains
3) The equations of the invariant probability vector are
g1 = g5,
g2 =
1
4
g1,
g3 =
1
4
g1 +
1
3
g2 =
1
3
g1,
g4 =
1
4
g1 +
1
3
g2 +
1
2
g3 =
1
6
g1,
where
1 = g1 + g2 + g3 + g4 + g5 = g1

1 +
1
4
+
1
3
+
1
2
+ 1

=
37
12
g1,
thus g1 =
12
37
and
g =
1
37
(12, 3, 4, 6, 12).
4) It follows from the computation
P2
=
⎛
⎜
⎜
⎜
⎜
⎝
0 1
4
1
4
1
4
1
4
0 0 1
3
1
3
1
3
0 0 0 1
2
1
2
0 0 0 0 1
1 0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
⎛
⎜
⎜
⎜
⎜
⎝
0 1
4
1
4
1
4
1
4
0 0 1
3
1
3
1
3
0 0 0 1
2
1
2
0 0 0 0 1
1 0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
1
4 0 1
12
5
24
11
24
1
3 0 0 1
6
1
2
1
2 0 0 0 1
2
1 0 0 0 0
0 1
4
1
4
1
4
1
4
⎞
⎟
⎟
⎟
⎟
⎠
,
that we have in P2
,
E2 E3
 
E3 → E1 → E4
 
E4 E5,
i.e.
E1 ↔ E3 ↔ E4 and E2 → E1 → E5 → E2,
and we can get from any state Ei to any other state Ej, so P2
is irreducible.
Now, p
(2)
1,1 =
1
4
 0, so P2
is regular, which implies that also P is regular, because there is an n,
such that P2n
has only positive elements.
Download free ebooks at bookboon.com
Stochastic Processes 1
74
3. Markov chains
5) In this case we have the tree
E5
1
→ E1
1
3 
E2
1
3
→ E3
1
2
→ E4
1
→ E5
1
→ E1
1
4  1
3  1
2 
E1
1
4
→ E3
1
2
→ E4
1
→ E5
1
→ E1
1
4
 1
2 
E4
1
→ E5
1
→ E1
1
4
E5
1
→ E1.
We conclude from an analysis of this tree that P {T1 = 1} = 0, and
P {T1 = 2} =
1
4
· 1 =
1
4
,
P {T1 = 3} =
1
4

1
3
+
1
2
+ 1

· 1 =
11
24
,
P {T1 = 4} =
1
4

1
3
·
1
2
+
1
3
· 1 +
1
2
· 1

· 1 =
1
4

1
6
+
1
3
+
1
2

=
1
4
,
P {T1 = 5} =
1
4
·
1
3
·
1
2
· 1 · 1 =
1
24
.
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
75
3. Markov chains
The mean is
E {T1} = 2 ·
1
4
+ 3 ·
11
24
+ 4 ·
1
4
+ 5 ·
1
24
=
1
24
(12 + 33 + 24 + 5) =
74
24
=
37
12
.
Example 3.30 Given a Markov chain of 4 states and the transition probabilities
p11 = 1 − a, p12 = a,
p23 = p34 = p21 = p32 =
1
2
,
p43 = 1, pij = 0 otherwise.
(Here a is a constant in the interval [0, 1]).
1. Find the stochastic matrix P.
2. Prove that the Markov chain is irreducible for a ∈ ]0, 1], though not irreducible for a = 0.
Find for a ∈ [0, 1] the invariant probability vector.
Find all values of a for which the Markov chain is regular.
Assume in the following that a = 0.
5. Prove that p
(3)
i1 ≥ 1
4 , i = 1, 2, 3, 4.
6. Let
p(0)
=

p
(0)
1 , p
(0)
2 , p
(0)
3 , p
(0)
4

be any initial distribution.
Prove that
p
(n+3)
2 + p
(n+3)
3 + p
(n+3)
4 ≤
3
4

p
(n)
2 + p
(n)
3 + p
(n)
4

, for alle n ∈ N,
and then find
lim
n→∞
p(0)
Pn
.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎝
1 − a a 0 0
1
2 0 1
2 0
0 1
2 01
2
0 0 1 0
⎞
⎟
⎟
⎠ .
2) If a  0, then
E1 → E2 → E3 → E4 → E3 → E2 → E1,
so the Markov chain is irreducible.
If a = 0, then E1 is an absorbing state, so the Markov chain is not irreducible for a = 0.
Download free ebooks at bookboon.com
Stochastic Processes 1
76
3. Markov chains
3) The equations of the invariant probability vector are
g1 = (1 − a)g1 + 1
2 g2, i.e. g2 = 2ag1
g2 = a g1 + 1
2 g3, i.e. g3 = 2ag1,
g3 = 1
2 g2 + g4,
g4 = 1
2 g3, i.e. g4 = 4ag1.
It follows from
1 = g1 + g2 + g3 + g4 = g(1 + 2a + 2a + 4a) = (1 + 8a)g1,
that g1 =
1
1 + 8a
, and
g =
1
1 + 8a
(1, 2a, 2a, 4a).
4) Since the Markov chain is irreducible, we must at least require that a ∈ ]0, 1].
If a  1, then p11 = 1 − a  0, so the Markov chain is regular for a ∈ ]0, 1[.
If a = 1, then
P =
⎛
⎜
⎜
⎝
0 1 0 0
1
2 0 1
2 0
0 1
2 0 1
2
0 0 1 0
⎞
⎟
⎟
⎠ ,
and it is obvious that Pn
contains zeros for every n ∈ N. Hence the Markov chain is not regular
for a = 1.
Summing up, the Markov chain is regular, if and only if a ∈ ]0, 1[.
5) If a = 0, then
P =
⎛
⎜
⎜
⎝
1 0 0 0
1
2 0 1
2 0
0 1
2 0 1
2
0 0 1 0
⎞
⎟
⎟
⎠ , P2
=
⎛
⎜
⎜
⎝
1 0 0 0
1
2
1
4 0 1
4
1
4 0 3
4 0
0 1
2 01
4
⎞
⎟
⎟
⎠ , P3
=
⎛
⎜
⎜
⎝
1 0 0 0
5
8 0 3
8 0
1
4
3
8 0 3
8
1
4 0 3
4 0
⎞
⎟
⎟
⎠ .
(It is not necessary to compute the full matrix P3
; however, the alternative proof is just as long
as the above).
It follows that p
(3)
i1 ≥ 1
4 for i = 1, 2, 3, 4.
6) By 5.,
p
(n+3)
2 + p
(n+3)
3 + p
(n+3)
4 = 1 − p
(n+3)
1 ≤

1 −
1
4
 
p
(n)
2 + p
(n)
3 + p
(n)
4

.
Hence by recursion,
0 ≤ p
(n+3p)
2 + p
(n+3p)
3 + p
(n+3p)
4 ≤

3
4
p 
p
(n)
2 + p
(n)
3 + p
(n)
4

→ 0 for p → ∞,
so
0 ≤
⎧
⎪
⎨
⎪
⎩
limn→∞ p
(n)
2
limn→∞ p
(n)
3
limn→∞ p
(n)
4
⎫
⎪
⎬
⎪
⎭
≤ lim
n→∞

p
(n)
2 + p
(n)
3 + p
(n)
4

= 0,
Download free ebooks at bookboon.com
Stochastic Processes 1
77
3. Markov chains
and
lim
n→∞
p
(n)
1 = 1 − lim
n→∞

p
(n)
2 + p
(n)
3 + p
(n)
4

= 1 − 0 = 1.
Therefore,
lim
n→∞
p(0)
Pn
= (1, 0, 0, 0).
Example 3.31 Given a Markov chain of 5 states E0, E1, E2, E3 and E4, and transition probabilities
p0,1 = p0,1 = p0,3 = p0,4 =
1
4
,
p1,1 = p2,2 = p3,3 = p4,4 =
3
4
,
p1,0 = p2,1 = p3,2 = p4,3 =
1
4
,
pi,j = 0 otherwise.
1. Find the stochastic matrix.
2. Prove that the Markov chain is irreducible.
3. Prove that the Markov chain is regular.
4. Find the invariant probability vector.
For t = 0 the process is at state E1. We denote by T1 the stochastic variable, which indicates the time,
when process for the first time is in state E0.
5. Find P {T1 = k}, k ∈ N, and the mean of T1 (i.e. the expected time of getting from E1 to E0).
6. Find for i = 2, 3, 4, the expected time for getting from Ei to E0.
When t = 0, the process is in state E0. Let T denote the time of the first return to E0.
7. Find the mean of T.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1
4
1
4
1
4
1
4
1
4
3
4 0 0 0
0 1
4
3
4 0 0
0 0 1
4
3
4 0
0 0 0 1
4
3
4
⎞
⎟
⎟
⎟
⎟
⎠
.
2) It follows from the diagram
E0 → E4 → E3 → E2 → E1 → E0,
that the Markov chain is irreducible.
Download free ebooks at bookboon.com
Stochastic Processes 1
78
3. Markov chains
3) Since e.g. p2,2 = 3
4  0, and the Markov chain is irreducible, it is also regular.
4) The equations of the invariant probability vector are
g0 = 1
4 g1, thus g1 = 4g0,
g1 = 1
4 g0 + 3
4 g1 + 1
4 g2, thus g2 = 4g1 − g0 − 3g1 = g1 − g0 = 3g0,
g2 = 1
4 g0 + 3
4 g2 + 1
4 g3, thus g3 = 4g2 − g0 − 3g2 = g2 − g0 = 2g0,
g3 = 1
4 g0 + 3
4 g3 + 1
4 g4,
g4 = 1
4 g0 + 3
4 g4, thus g4 = g0,
so
1 = g0 + g1 + g2 + g3 + g4 = g0(1 + 4 + 3 + 2 + 1) = 11g0,
and
g = (g0, g1, g2, g3, g4) =
1
11
(1, 4, 3, 2, 1).
5) It follows from the matrix that
P {T1 = 1} =
1
4
and P {T1 = 2} =
1
4
·

3
4
1
,
Challenging? Not challenging? Try more
Try this...
www.alloptions.nl/life
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
79
3. Markov chains
and in general,
P {T1 = k} =
1
4

3
4
k−1
with E {T1} =
1
4
·
1

1 −
3
4
2 = 4.
6) Let T̃1 denote the random variable, which gives the time when the process is at state Ei−1 for the
first time when we start at Ei. Then
P
'
T̃i = k
(
=
1
4

3
4
k−1
with E
'
T̃i
(
= 4.
Let Ti denote the time, when the process for the first time is in state E0, when we start at Ei.
Then
Ti = T̃i + T̃i−1 + · · · + T̃1,
hence
E {Ti} = 4i, i = 1, 2, 3, 4.
Download free ebooks at bookboon.com
Stochastic Processes 1
80
3. Markov chains
7) In the first step we get to one of the states E1, E2, E3, E4, each of the probability 1
4 , hence
E{T} = 1 +
1
4
· 4{1 + 2 + 3 + 4} = 11.
Example 3.32 Given a Markov chain of 7 states E0, E1, E2, E3, E4, E5, E6, and the transition
probabilities
p0,i =
1
6
, i = 1, 2, 3, 4, 5, 6,
pi,0 = r, i = 1, 2, 3, 4, 5, 6,
pi,i+1 = p, i = 1, 2, 3, 4, 5,
pi,i−1 = p, i = 2, 3, 4, 5, 6,
p1,6 = p6,1 = p, pi,j = 0 otherwise,
where p ≥ 0, r  0 and 2p + r = 1.
1. Find the stochastic matrix P.
2. Prove that the Markov chain is irreducible.
3. Prove that the Markov chain is regular, if p ∈
)
0, 1
2
*
, but not regular for p = 0.
4. Find the value of r, for which the invariant probability vector g is given by
g =

1
7
,
1
7
,
1
7
,
1
7
,
1
7
,
1
7
,
1
7

.
At time t = 0 the process is in state E0. Let T0 denote the time of the first return to E0.
5. Find for every value of p ∈
*
0, 1
2
*
4,
P {T0 = k} , k = 2, 3, 4, . . . .
6. Find the mean of T0.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎝
0 1
6
1
6
1
6
1
6
1
6
1
6
r 0 p 0 0 0 p
r p 0 p 0 0 0
r 0 p 0 p 0 0
r 0 0 p 0 p 0
r 0 0 0 p 0 p
r p 0 0 0 p 0
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎠
.
2) It follows from the first row that E0 → Ei for all i = 1, . . . , 6. It follows from the first column
that Ei → E0 for all i = 1, . . . , 6. Thus E0 ↔ Ei for all i = 1, . . . 6, and we can via E0
always get from any Ei to any other Ej, and the Markov chain is irreducible.
Download free ebooks at bookboon.com
Stochastic Processes 1
81
3. Markov chains
3) If 0  p  1
2 , then clearly p
(2)
ij  0 for all (i, j), and the Markov chain is regular.
If p = 0, then p
(2)
ij = 0 for i, j = 1, . . . , 6, and hence also for p
(n)
ij , which means that every Pn
contains zeros, and the Markov chain is not regular for p = 0.
4) If the vector
g =

1
7
,
1
7
,
1
7
,
1
7
,
1
7
,
1
7
,
1
7

satisfies g = g P, then we get from the first coordinate that
1
7
=
1
7
· 6r,
so r = 1
6 is the only possibility. In this case,
p =
1
2
(1 − r) =
5
12
,
and it follows that the matrix is double stochastic for this value, and the given vector is indeed an
invariant probability vector.
5) Due to the extreme symmetry we may introduce the new state
E = E1 ∪ E2 ∪ E3 ∪ E4 ∪ E5 ∪ E6.
Then we have a new stochastic matrix for E0 and E alone,
Q =

0 1
r 2p

=

0 1
1 − 2p 2p

.
It follows that
P {T0 = 2} = 1 − 2p and P {T0 = 3} = (1 − 2p) · (2p)1
,
and in general,
P {T0 = k} = (1 − 2p) · (2p)k−2
, k ≥ 2.
6) The mean is
E {T0} = (1 − 2p)
∞

k=2
k (2p)k−2
= (1 − 2p)
∞

k=1
(k + 1) (2p)k−1
= (1 − 2p)

1
(1 − 2p)2
+
1
1 − 2p

=
1
1 − 2p
+ 1 = 2
1 − p
1 − 2p
.
Download free ebooks at bookboon.com
Stochastic Processes 1
82
3. Markov chains
Example 3.33 Given a Markov chain of 5 states E0, E1, E2, E3 and E4 and transition probabilities
p0,1 = 1,
p1,2 = p2,3 = p3,4 =
2
3
,
p1,0 = p2,1 = p3,2 =
1
3
,
p4,2 = p4,3 =
1
2
,
pi,j = 0 otherwise.
1) Find the stochastic matrix P.
2) Prove that the Markov chain is irreducible.
3) Check if the Markov chain is regular.
4) Find the invariant probability vector.
5) Find p
(2)
2,2.
6) At time t = 0 the process is in state E2. Find for every n ∈ N the probability that the process is in
state E0 to time 2n without in the meantime having been in any of the states E0 or E4.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
1
3 0 2
3 0 0
0 1
3 0 2
3 0
0 0 1
3 0 2
3
0 0 1
2
1
2 0
⎞
⎟
⎟
⎟
⎟
⎠
.
2) We obtain from the oblique diagonals the transitions
E0 → E1 → E2 → E3 → E4 → E3 → E2 → E1 → E0,
so we conclude that the Markov chain is irreducible.
3) We get by a computition,
P2
=
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
1
3 0 2
3 0 0
0 1
3 0 2
3 0
0 0 1
3 0 2
3
0 0 1
2
1
2 0
⎞
⎟
⎟
⎟
⎟
⎠
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
1
3 0 2
3 0 0
0 1
3 0 2
3 0
0 0 1
3 0 2
3
0 0 1
2
1
2 0
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
1
3 0 2
3 0 0
0 5
9 0 4
9 0
1
9 0 4
9 0 4
9
0 1
9
1
3
5
9 0
0 1
6
1
6
1
3
1
3
⎞
⎟
⎟
⎟
⎟
⎠
.
The elements of the diagonal of P2
are  0, so we shall only check if P2
is irreducible. We have
the transitions
E0 ↔ E2, E1 ↔ E3 and E2 ↔ E4,
Download free ebooks at bookboon.com
Stochastic Processes 1
83
3. Markov chains
so we can get from any “even” state to any other “even” state, and from any “odd” state to any
other “odd” state. Since also
E4 ↔ E1 and E3 ↔ E2,
we have also a connection in both directions between “even” and “odd” states, so P2
is irreducible,
and hence also regular, because p
(2)
1,1  0. Then also P is regular.
4) The equations of the invariant probability vector are
g0 = 1
3 g1, thus g1 = 3g0,
g1 = g0 + 1
3 g2, thus g2 = 3g1 − 3g0 = 6g0,
g2 = 2
3 g1 + 1
3 g3 + 1
2 g4, thus 6g0 = 2g0 + 1
3 g3 + 1
2 g4,
g3 = 2
3 g2 + 1
2 g4, thus g3 = 4g0 + 1
2 g4,
g4 = 2
3 g3,
so in particular,
4g0 =
1
3
g0 +
1
2
g4 =
1
3
g3 +
1
3
g3 =
2
3
g3,
and
g3 = 6g0 and g4 =
2
3
g3 = 4g0.
Stand out from the crowd
Designed for graduates with less than one year of full-time postgraduate work
experience, London Business School’s Masters in Management will expand your
thinking and provide you with the foundations for a successful career in business.
The programme is developed in consultation with recruiters to provide you with
the key skills that top employers demand. Through 11 months of full-time study,
you will gain the business knowledge and capabilities to increase your career
choices and stand out from the crowd.
Applications are now open for entry in September 2011.
For more information visit www.london.edu/mim/
email mim@london.edu or call +44 (0)20 7000 7573
Masters in Management
London Business School
Regent’s Park
London NW1 4SA
United Kingdom
Tel +44 (0)20 7000 7573
Email mim@london.edu
www.london.edu/mim/
Fast-track
your career
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
84
3. Markov chains
It follows from
1 = g0 + g1 + g2 + g3 + g4 = g0(1 + 3 + 6 + 6 + 4) = 20g0
that g0 = 1
20 and
g = (g0, g1, g2, g3, g4) =

1
20
,
3
20
,
3
10
,
3
10
,
1
5

.
5) According to the computation in 3. we have p
(2)
22 = 4
9 .
6) Starting at E2 we get in the first step either to E1 or to E3, thus neither to E0 nor to E4. It
follows from the matrix of P2
that
P {E0 after 2 steps} = 1
9 ,
P {E2 after 2 steps} = 4
9 ,
P {E4 after 2 steps} = 4
9 .
Then the process can be iterated,
P {E0 after 2n steps without passing E0 or E4} =
1
9

4
9
n−1
.
Download free ebooks at bookboon.com
Stochastic Processes 1
85
3. Markov chains
Example 3.34 Given a Markov chain of 2 states E1 and E2 and of the stochastic matrix
Q =
 1
4
3
4
1
2
1
2

.
1. Prove that Q is regular, and find the invariant probability vector.
Another Markov chain with 6 states, E1, E2, E3, E4, E5 and E6, has the stochastic matrix
P =
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎝
1
4
3
4 0 0 0 0
1
2
1
2 0 0 0 0
0 1
3 0 2
3 0 0
0 0 1
3 0 2
3 0
0 0 0 1
3 0 2
3
0 0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎠
.
2. Prove that P is not irreducible, and find all closed subsets.
3. Find all invariant probability vectors of P.
4. Prove for any initial distribution
p(0)
=

p
(0)
1 , p
(0)
2 , p
(0)
3 , p
(0)
4 , p
(0)
5 , p
(0)
6

that
p
(n+2)
3 + p
(n+2)
4 + p
(n+2)
5 ≤
2
3

p
(n)
3 + p
(n)
4 + p
(n)
5

, n  N0,
and then prove that
lim
n→∞
p
(n)
3 = lim
n→∞
p
(n)
4 = lim
n→∞
p
(n)
5 = 0.
5. Let the initial distribution be given by
q(0)
= (0, 1, 0, 0, 0, 0).
Find limn→∞ q(n)
.
Then let the initial distribution be given by
r(0)
= (0, 0, 0, 1, 0, 0).
Prove that limn→∞ r(n)
.
1) All elements of Q are  0, so Q is regular.
The equation of the invariant probability vector is
g1 =
1
4
g1 +
1
2
g2 thus g2 =
3
2
g1,
so the probability vector is
g =

2
5
,
3
5

.
Download free ebooks at bookboon.com
Stochastic Processes 1
86
3. Markov chains
2) Obviously, {E1, E2} and {E6} are closed subsets. Since we from E5 can get to both E4 and
E6, from E4 can reach both E3 and E5, and from E3 get to both E2 and E4, we have – with
the exception of the union {E1, E2, E6} – no other possibility of closed subsets. Since we have
non-trivial closed subsets, the Markov chain is not irreducible.
3) The equations of the invariant probability vectors are
g1 = 1
4 g1 + 1
2 g2, g2 = 3
4 g1 + 1
2 g2 + 1
3 g3,
g3 = 1
3 g4, g4 = 2
3 g3 + 1
3 g5,
g5 = 2
3 g4, g6 = 1
2 g5 + g6.
When we solve these equations backwards, we obtain successively,
g5 = 0, g4 = 0, and g3 = 0.
The closed system {E1, E2} corresponds to the matrix Q, so the invariant probability vectors are
g =

2
5
x,
3
5
x, 0, 0, 0, 1 − x

, 0 ≤ x ≤ 1.
4) It follows from
P2
=
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎝
1
4
3
4 0 0 0 0
1
2
1
2 0 0 0 0
0 1
3 0 2
3 0 0
0 0 1
3 0 2
3 0
0 0 0 1
3 0 2
3
0 0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎠
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎝
1
4
3
4 0 0 0 0
1
2
1
2 0 0 0 0
0 1
3 0 2
3 0 0
0 0 1
3 0 2
3 0
0 0 0 1
3 0 2
3
0 0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎝
7
16
9
16 0 0 0 0
3
8
5
8 0 0 0 0
1
6
1
6
2
9 0 4
9 0
0 1
9 0 4
9 0 4
9
0 0 1
9 0 2
9
2
3
0 0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎟
⎟
⎠
,
that
p
(n+2)
3 + p
(n+2)
4 + p
(n+2)
5 =

2
9
+
4
9

p
(n)
3 +
4
9
p
(n)
4 +

1
9
+
2
9

p
(n)
5
≤
2
3
'
p
(n)
3 + p
(n)
4 + p
(n)
5
(
.
This estimate gives by recursion,
0 ≤ p
(n+2p)
3 + p
(n+2p)
4 + p
(n+2p)
5 ≤

2
3
p '
p
(n)
3 + p
(n)
4 + p
(n)
5
(
→ 0 for p → ∞.
Since all p
(n)
i ≥ 0, we get
lim
n→∞
p
(n)
3 = lim
n→∞
p
(n)
4 = lim
n→∞
p
(n)
5 = 0.
5) If q(0)
= (0, 1, 0, 0, 0, 0), we are in the closed set {E1, E2}, which actually is described by the matrix
Q given in 1.. Then
lim
n→∞
q(n)
=

2
5
,
3
5
, 0, 0, 0, 0

.
Download free ebooks at bookboon.com
Stochastic Processes 1
87
3. Markov chains
6) If r(0)
= (0, 0, 0, 1, 0, 0), then we start in E4. Hence by 4.,
r(2)
=

0,
1
9
, 0,
4
9
, 0,
4
9

,
so 1
9 reaches the closed set {E1, E2}, and 4
9 reaches the closed set {E6}.
The rest,
4
9
, lies again in E4, and the process is repeated.
In total,
5
9
disappears in each step, where
1
5
goes to {E1, E2}, and
4
5
goes to E6. Hence we
conclude that
lim
n→∞
r(n)
=

2
25
,
3
25
, 0, 0, 0,
4
5

.
Alternatively we may apply the theory of the ruin problem. We first re-numerate the states to
F0 = {E1, E2} , F1 = E3, F2 = E4, F3 = E5, and F4 = E6.
Then we get the diagram
F0 ←− F1 ←→ F2 ←→ F3 −→ F4.
Starting at E4(= F2), the parameters of the ruin problem in order to reach F0 = {E1, E2} before
F4 = E6 is given by
k = 2, N = 4, q =
1
3
, p =
2
3
,
hence the probability is
a2 =

1
2
2
−

1
2
4
1 −

1
4
4 =
1
4
−
1
16
1 −
1
16
=
3
15
=
1
5
.
Once one has arrived to F0 = {E1, E2}, one stays there forever, an we approach the stationary
distribution of (1). This gives
r(n)
→

1
5
·
2
5
,
1
5
·
3
5
, 0, 0, 0,
4
5

=

2
25
,
3
25
, 0, 0, 0,
4
5

.
Download free ebooks at bookboon.com
Stochastic Processes 1
88
3. Markov chains
Example 3.35 Given a Markov chain of 5 states E1, E2, E3, E4 and E5, and transition probabilities
p1,1 = a, p1,2 = 1 − a,
p3,1 = p3,5 = 1
2 , p2,3 = p4,5 = p5,4 = 1,
pi,j = 0 otherwise.
(Here a is a constant in the interval [0, 1]).
1) Find the stochastic matrix P.
2) Prove that the Markov chain is irreducible for a  1, and not irreducible for a = 1.
3) Find for every a them invariant probability vector.
4) Find the values of a, for which the Markov chain is regular.
5) To time t = 0 the process is in state E5. Denote by T the time when the process for the first time
is in state E1.
Find the distribution of the random variable T.
6) Find the mean of T.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
a 1 − a 0 0 0
0 0 1 0 0
1
2 0 0 0 1
2
0 0 1 0 0
0 0 0 1 0
⎞
⎟
⎟
⎟
⎟
⎠
.
©
UBS
2010.
All
rights
reserved.
www.ubs.com/graduates
Looking for a career where your ideas could really make a difference? UBS’s
Graduate Programme and internships are a chance for you to experience
for yourself what it’s like to be part of a global team that rewards your input
and believes in succeeding together.
Wherever you are in your academic career, make your future a part of ours
by visiting www.ubs.com/graduates.
You’re full of energy
and ideas. And that’s
just what we are looking for.
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
89
3. Markov chains
2) For a  1 we get the diagrams
E1 → E2 → E3 → E1,
and
E3 → E5 → E4 → E3,
so we can get from any state Ei to any other state Ej, thus the Markov chain is irreducible.
If a = 1, then E1 is absorbing, and the Markov chain is not irreducible.
3) The equations of the invariant probability vectors are
g1 = a g1 + 1
2 g3, thus g3 = 2(1 − a)g1,
g2 = (1 − a)g1, thus g2 = (1 − a)g1,
g3 = g2 + g4,
g4 = g5,
g5 = 1
2 g3, thus g4 = g5 = (1 − a)g1.
We now get
1 = g1 + g2 + g3 + g4 + g5 = g1 {1 + 1 − a + 2 − 2a + 1 − a + 1 − a} = g1(6 − 5a).
Since 6 − 5a  0 for a ∈ [0, 1], we get g1 =
1
6 − 5a
, and
g =
1
6 − 5a
(1, 1 − a, 2(2/1 − a), 1 − a, 1 − a).
4) Now, P is irreducible for a ∈ [0, 1[, and p1,1 = a  0 for a ∈ ]0, 1[, hence the Markov chain is (at
least) regular for a ∈ ]0, 1[. When a = 0, then
P2
=
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
0 0 1 0 0
1
2 0 0 0 1
2
0 0 1 0 0
0 0 0 1 0
⎞
⎟
⎟
⎟
⎟
⎠
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
0 0 1 0 0
1
2 0 0 0 1
2
0 0 1 0 0
0 0 0 1 0
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0 1 0 0
1
2 0 0 0 1
2
0 1
2 0 1
2 0
1
2 0 0 0 1
2
0 0 1 0 0
⎞
⎟
⎟
⎟
⎟
⎠
,
and
P3
=
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
0 0 1 0 0
1
2 0 0 0 1
2
0 0 1 0 0
0 0 0 1 0
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0 1 0 0
1
2 0 0 0 1
2
0 1
2 0 1
2 0
1
2 0 0 0 1
2
0 0 1 0 0
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
1
2 0 0 0 1
2
0 1
2 0 1
2 0
0 0 1 0 0
0 1
2 0 1
2 0
1
2 0 0 0 1
2
⎞
⎟
⎟
⎟
⎟
⎠
.
It follows that E3 is an absorbing state for P3
, hence the Markov chain corresponding to P3 is not
irreducible. In particular, P is not regular.
5) We derive from the diagram
E1,
1
2 
E5
1
→ E4
1
→ E3
1
2 
E5,
Download free ebooks at bookboon.com
Stochastic Processes 1
90
3. Markov chains
that
P{T = 1} = P{T = 2} = 0 and P{T = 3} =
1
2
,
and the process is repeated from E5. Hence
P{T = 3k} =

1
2
k
and P{T = j} = 0 for j = 3k.
6) The mean is
P{T} =
∞

k=1
3k

1
2
k
=
3
2
∞

k=1
k

1
2
k−1
=
3
2
·
1

1 −
1
2
2 = 6.
Example 3.36 Given a Markov chain of 5 states E0, E1, E2, E3 and E4, and transition probabilities
p0,i = 1
4 , i = 1, 2, 3, 4;
pi,i−1 = 1
i , i = 1, 2, 3, 4;
pi,i = i−1
i , i = 2, 3, 4;
pi,j = 0 ellers.
1. Find the stochastic matrix P.
2. Prove that the Markov chain is irreducible.
3. Check if the Markov chain is regular.
Find the invariant probability vector.
At time t = 0 the process is in state Ei, where i is one of the numbers 1, 2, 3, 4. Let Ti denote the
random variable, which indicates the time, when the process for the first time is in state Ei−1.
5. Find for i = 1, 2, 3, 4, the probabilities P {Ti = k}, k ∈ N, and find the mean of Ti (i.e. the
expected time for getting from Ei to Ei−1).
6. Find for i = 1, 2, 3, 4, the expected time for getting from Ei to E0.
Let the process at time t = 0 be in state E0. Denote by T the time of the the first return to E0.
7. Find the mean of T.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1
4
1
4
1
4
1
4
1 0 0 0 0
0 1
2
1
2 0 0
0 0 1
3
2
3 0
0 0 0 1
4
3
4
⎞
⎟
⎟
⎟
⎟
⎠
.
Download free ebooks at bookboon.com
Stochastic Processes 1
91
3. Markov chains
2) It follows from the diagram
E4 → E3 → E2 → E1 → E0 → E4,
that the Markov chain is irreducible.
3) Since e.g. p2,2 =
1
2
 0, and the Markov chain is irreducible, it is also regular.
4) The equations of the invariant probability vectors are
g0 = g1, thus g1 = g0,
g1 = 1
4 g0 + 1
2 g2, thus g2 = 3
2 g0,
g2 = 1
4 g0 + 1
2 g2 + 1
3 g3, thus g3 = 3
1
2 g2 − 1
4 g0 = 3
2 g0,
g3 = 1
4 g0 + 2
3 g3 + 1
4 g4,
g4 = 1
4 g0 + 3
4 g4, thus g4 = g0.
Hence
1 = g0 + g1 + g2 + g3 + g4 = g0

1 + 1 +
3
2
+
3
2
+ 1

= 6 g0,
from which g0 =
1
6
, and
g = (g0, g1, g2, g3, g4) =

1
6
,
1
6
,
1
4
,
1
4
,
1
6

.
5) Clearly,
P {T1 = 1} = 1, P {T1 = k} = 0 for k ≥ 2, and E {T1} = 1.
We get for T2,
P {T2 = k} =

1
2
k
, k ∈ N, and E {T2} = 2.
We get for T3,
P {T3 = k} =
1
3

2
3
k−1
, k ∈ N, and E {T3} = 3.
We get for T4,
P {T4 = k} =
1
4

3
4
k−1
, k ∈ N, and E {T4} = 4.
6) Let T̃i denote the time when the process of initial state Ei for the first time is in E0. Then
E
'
T̃1
(
= E {T1} = 1,
E
'
T̃2
(
= E {T2} + E {T1} = 2 + 1 = 3,
E
'
T̃3
(
= E {T3} + E
'
T̃2
(
= 3 + 3 = 6,
E
'
T̃4
(
= E {T4} + E
'
T̃3
(
= 4 + 6 = 10.
Download free ebooks at bookboon.com
Stochastic Processes 1
92
3. Markov chains
7) In the first step we get to one of the states E1, E2, E3, E4, each of probability
1
4
. Then we shall
move back to E0, so
E{T} = 1 +
1
4

E
'
T̃1
(
+ E
'
T̃2
(
+ E
'
T̃3
(
+ E
'
T̃4
(
= 1 +
1
4
(1 + 3 + 6 + 10) = 1 +
1
4
· 20 = 6.
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
93
3. Markov chains
Example 3.37 Given a Markov chain of 4 states E1, E2, E3 and E4, and transition probabilities
p1,1 = a, p1,2 = 1 − a,
p2,3 = p3,2 = 2
3 , p2,1 = p3,4 = 1
3 ,
p4,3 = 1, pi,j = 0 otherwise.
(Here a is a constant in the interval [0, 1]).
1) Find the stochastic matrix P.
2) Find the values of a, for which the Markov chain is irreducible.
3) Find the values of a, for which the Markov chain is regular.
4) Find for every a the invariant probability vector.
5) Find for every a the limit limn→∞ p
(2n)
22 .
6) Put a = 1, and assume that the process at time t = 0 is in state E2. Find the probability that the
process at any later time reaches state E4.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎝
a 1 − a 0 0
1
3 0 2
3 0
0 2
3 0 1
3
0 0 1 0
⎞
⎟
⎟
⎠ .
2) When a ∈ [0, 1[, we have the diagram
E1 → E2 → E3 → E4 → E3 → E2 → E1,
and we conclude that the Markov chain is irreducible.
If a = 1, then E1 is absorbing, and the Markov chain is not irreducible for a = 1.
3) When we check the possible regularity we shall only consider a ∈ [0, 1[.
If a ∈ ]0, 1[, then p1,1 = a  0, and the Markov chain is regular for a ∈ ]0, 1[.
If a = 0, then every second element of Pn
is 0 for every n ∈ N, thus the Markov chain is not
regular for a = 0.
4) The equations of the invariante probability vector are
g1 = a g1 + 1
3 g2, thus g2 = 3(1 − a)g1,
g2 = (1 − a)g1 + 2
3 g3, thus g3 = 3
2 (3 − 3a − 1 + a)g1 = 3(1 − a)g1,
g3 = 2
3 g2 + g4, thus g4 = (1 − a)g1.
It follows from
1 = g1 + g2 + g3 + g4 = g1 {1 + 3(1 − a) + 3(1 − a) + (1 − a)} = (8 − 7a)g1
and 7a ≤ 7  8 that g1 =
1
8 − 7a
≤ 1, hence
g =
1
8 − 7a
(1, 3(1 − a), 3(1 − a), 1 − a).
Download free ebooks at bookboon.com
Stochastic Processes 1
94
3. Markov chains
5) If a ∈ ]0, 1[, the Markov chain is regular, so Pn
konverges, hence P(2n)
also converges towards G,
where each row of g is the invariant probability vector found in 4.. Hence
lim
n→∞
p
(2n)
2,2 = g2 =
3(1 − a)
8 − 7a)
.
If a = 0, the Markov chain is irreducible, but not regular.
Then compute
P2
=
⎛
⎜
⎜
⎝
a 1 − a 0 0
1
3 0 2
3 0
0 2
3 0 1
3
0 0 1 0
⎞
⎟
⎟
⎠
⎛
⎜
⎜
⎝
a 1 − a 0 0
1
3 0 2
3 0
0 2
3 0 1
3
0 0 1 0
⎞
⎟
⎟
⎠ =
⎛
⎜
⎜
⎝
1
3 0 2
3 0
0 7
9 0 2
9
4
5 0 5
9 0
0 2
3 0 1
3
⎞
⎟
⎟
⎠ .
It follows that {E2, E4} is a closed system. The corresponding stochastic sub-matrix
Q =
 7
9
2
9
2
3
1
3

is regular, and the equations of the invariant probability vector are
g2 = 7
9 g2 + 2
3 g4,
g4 = 2
9 g2 + 1
3 g4,
thus g2 = 3g4,
hence
(g2, g4) =

3
4
,
1
4

, and lim
n→∞
p
(2n)
2,2 =
3
4
,
(and not 3
8 , which we get by inserting a = 0 into the formula of 4..
This result is in agreement with the theoretical result, because p
(2n+1)
2,2 = 0, so
1
2n
2n

i=1
P =
1
2n
n

i=1
P2i
+
1
2n
n

i=1
P2i−1
→ G for n → ∞.
If a = 1, then
P2
=
⎛
⎜
⎜
⎜
⎝
1 0 0 0
1
3 0 2
3 0
0
2
3
0 1
3
0 0 1 0
⎞
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎝
1 0 0 0
1
3 0 2
3 0
0
2
3
0 1
3
0 0 1 0
⎞
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎝
1 0 0 0
1
3
4
9 0 2
9
2
9 0 7
9 0
0 2
3 0 1
3
⎞
⎟
⎟
⎠ .
Thus
g
(n+2)
2 + g
(n+2)
3 + g
(n+2)
4 ≤
7
9

g
(n)
2 + g
(n)
3 + g
(n)
4

,
and hence g
(2n)
2 → 0 for n → ∞, and in particular, p
(2n)
2,2 → 0.
6) Let a = 1. If the proces at time t = 0 starts in E2, we get P{T = 1} = 0.
If t = 2, then P{T = 2} =
2
9
, while
4
9
of the “mass” lies in E2, and
1
3
is “lost” in the absorbing
state E1. Thus
P{T = 2k + 1} = 0 for k ∈ N0,
Download free ebooks at bookboon.com
Stochastic Processes 1
95
3. Markov chains
and
P{T = 2k} =
2
9
·

4
9
k−1
for k ∈ N.
Finally, by a summation, the wanted probability is
∞

k=1
2
9
·

4
9
k−1
=
2
9
1 − 4
9
=
2
5
.
Example 3.38 Given a Markov chain of 5 states E0, E1, E2, E3 and E4, and transition probabilities
pi,i = 4
5 , i = 1, 2, 3, 4,
pi,i−1 = 1
5 , i = 1, 2, 3, 4,
p0,2 = p0,4 = a, p0,0 = 1 − 2a,
pi,1 = 0 otherwise.
Here a is a constant in the interval
*
0, 1
2
)
.
1. Find the stochastic matrix P.
2. Find the values of a, for which the Markov chain is irreducible.
3. Find for every a the invariant probability vector.
Ay time t = 0 the process is in state Ei, where i is one of the numbers 1, 2, 3, 4.
Let Ti denote the random variable, which indicates the time when the process for the first time is in
state E0.
4. Find P {T2 = k}, k = 2, 3, 4, . . . , and the mean of T2 (i.e. the expected time for getting from E2
to E0).
5. Find the mean of T4.
Now put a = 1
2 , and assume that the process to time t = 0 is in state E0. Let T denote the time of
the first return to E0.
6. Find the mean of T.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
1 − 2a 0 a 0 a
1
5
4
5 0 0 0
0 1
5
4
5 0 0
0 0 1
5
4
5 0
0 0 0 1
5
4
5
⎞
⎟
⎟
⎟
⎟
⎠
.
2) When a = 0, we see that E0 is absorbing, and the Markov chain is not irreducible.
For 0  a ≤ 1
2 we get the diagram
E0 → E4 → E3 → E2 → E1 → E0,
proving that the Markov chain is irreducible, and even regular, because e.g. p1,1 = 4
5  0.
Download free ebooks at bookboon.com
Stochastic Processes 1
96
3. Markov chains
3) The equations of the invariant probability vector are
g0 = (1 − 2a)g0 + 1
5 g1, thus g1 = 10a g0,
g1 = 4
5 g1 + 1
5 g2, thus g2 = g1 = 10a g0,
g2 = a g0 + 4
5 g2 + 1
5 g3, thus g3 = 10a g0 − 5a g0 = 5a g0,
g3 = 4
5 g3 + 1
5 g4, thus g4 = g3 = 5a g0,
g4 = a g0 + 4
5 g4, thus g4 = 5a g0.
It follows that
1 = g0 + g1 + g2 + g3 + g4 = g0 (1 + 10a + 10a + 5a + 5a) = (1 + 30a) g0,
thus g0 =
1
1 + 30a
, and therefore
g = (g0, g1, g2, g3, g4) =
1
1 + 30a
(1, 10a, 10a, 5a, 5a).
4) Let T̃i denote the random variable, which indicates the first time, when the process is in state
Ei−1, when we start in Ei. Then
P
'
T̃i = k
(
=
1
5
·

4
5
k−1
, k ∈ N, med E
'
T̃i
(
= 5.
your chance
to change
the world
Here at Ericsson we have a deep rooted belief that
the innovations we make on a daily basis can have a
profound effect on making the world a better place
for people, business and society. Join us.
In Germany we are especially looking for graduates
as Integration Engineers for
• Radio Access and IP Networks
• IMS and IPTV
We are looking forward to getting your application!
To apply and for all current job openings please visit
our web page: www.ericsson.com/careers
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
97
3. Markov chains
It follows from T2 = T̃2 + T̃1 for k ≥ 2 that
P {T2 = k} =
k−1

j=1
P
'
T̃2 = j
(
· P
'
T̃1 = k − j
(
=
k−1

j=1
1
5

4
5
j−1
·
1
5

4
5
k−j−1
=
1
25
(k − 1)

4
5
k−2
, k ≥ 2.
The mean is
E {T2} = E
'
T̃2
(
+ E
'
T̃1
(
= 5 + 5 = 10.
Alternatively,
E {T2} =
1
25
∞

k=2
k(k − 1)

4
5
k−2
=
1
25
·
2!

1 −
4
5
3 = 10.
5) The mean of T4 is
E {T4} = E
'
T̃4
(
+ E
'
T̃3
(
+ E
'
T̃2
(
+ E
'
T̃1
(
= 4 · 5 = 20.
6) Let a = 1
2 , and assume that we at time t = 0 are in state E0. Then we are to time t = 1 either in
E2 or in E4, each of probability 1
2 . This gives
E{T} = 1 +
1
2
E {T2} +
1
2
E {T4} = 1 +
1
2
· 10 +
1
2
· 20 = 16.
Download free ebooks at bookboon.com
Stochastic Processes 1
98
3. Markov chains
Example 3.39 Given a Markov chain of the states E1, E2, . . . , Em. We assume that
{E1, E2, . . . , Er} is a closed subset C, and that we from any other of the states Er+1, Er+2, . . . , Em,
have positive probability eventually of reaching the closed subset. Thus, the stochastic matrix looks like
P =
⎛
⎝
S | 0
− + −
R | Q
⎞
⎠ ,
where S is an r × r stochastic matrix, Q is an (m − r) × (m − r)-matrix, 0 is an r × (m − r)-matrix
consisting of zeros, and R is an (m − r) × r-matrix.
1. Prove for every pair (i, j) with r + 1 ≤ i, j ≤ m that p
(n)
ij → 0 for n → ∞, and conclude that
Qn
→ 0 for n → ∞.
2. Prove that there are constants b  0, c ∈ ]0, 1[, such that p
(n)
ij ≤ b cn
for every pair (i, j) with
r + 1 ≤ i, j ≤ m and every n ∈ N, and conclude that for every (i, j) as above, the infinite series
∞
n=0 p
(n)
ij is convergent.
3. Prove that the matrix I − Q has the reciprocal matrix
N =
∞

k=0
Qk
.
We define for every j ∈ {r + 1, . . . , m} a random variable Xj by
Xj = k, if the process is in state Ej in total k times.
For i ∈ {r + 1, . . . , m} we let Ei {Xj} denote the expected number of times the process is in state Ej,
if the processen at time t = 0 is in state Ei.
4. Prove that
Ei {Xj} =
∞

n=0
p
(n)
ij .
5. Prove that Ei {Xj} can be found as the (i, j)-th element of the matrix N = (I − Q)−1
.
We denote for i ∈ {r + 1, . . . , m} and j ∈ {1, 2, . . . , r} by bij the probability that the process by
starting in Ei reachers state Ej before any of the states in C.
6. Prove that bij is equal to the (i, j)-th element of the matrix B = N R.
1) For every fixed i ∈ {r + 1, . . . , m} there exists an ni, such that the i-th row in Pni
contains
elements. Then
m

j=r+1
p
(m+ni)
i,j ≤ αi
m

j=r+1
p
(n)
i,j ≤ αi, where 0 ≤ αi  1,
hence
m

j=r+1
p
(n+s ni)
i,j ≤ αs
i → 0 for s → ∞.
We conclude that p
(n)
i,j → 0 for n → ∞ and i, j = r+1, . . . , m. This implies precisely that Qn
→ 0
for n → ∞.
Download free ebooks at bookboon.com
Stochastic Processes 1
99
3. Markov chains
2) It follows from 1. that
p
(n+sni)
i,j = αs
i = ( ni
√
αi)
sni
= α
−n/ni
i ( ni
√
αi)
n+sni
≤
1
αi
( ni
√
αi)
n+sni
,
(with a trivial modification for αi = 0).
If we choose bi =
1
αi
and ci = ni
√
αi, then we get the estimate
p
(n)
i,j ≤ bi · cn
i .
Then choose b = maxi bi  0 and c = maxi ci  1, and we get the inequality
∞

n=0
p
(n)
i,j ≤ b
∞

n=0
cn
=
b
1 − c
 ∞.
3) If we put N(n)
=
n
k=0 Qn
, then
N(n)
(I − Q) = (O − Q)N(n)
= I − Q(n)
→ I for n → ∞,
hence
(I − Q)−1
= N =
∞

k=0
Qk
.
4) Since p
(n)
i,j is the probability that we are in state Ej after n steps, when we start in Ei, then the
expected number of times, the process is in state Ej, is the sum of all these probabilities, thus
Ei {Xj} =
∞

n=0
p
(n)
i,j .
5) The claim follows from that
Ei {Xj} =
∞

n=0
p
(n)
i,j
is the (i, j)-th element of the matrix
∞

n=0
Qn
= N = I − Q.
6) We can only reach a state Ej, j ∈ {1, 2, . . . , r}, through the matrix R, i.e. through one of the
possibilities
Q0
R, Q1
R, . . . , Qn
R, . . . .
An addition of these gives precisely B = N R, where the (i, j)-th element bij is the probability
that we end in Ej ∈ C, without (by the construction) being in any state earlier from C.
Download free ebooks at bookboon.com
Stochastic Processes 1
100
3. Markov chains
Example 3.40 Given an irreducible Markov chain E1, E2, . . . , Em with the stochastic matrix P and
invariant probability vector
α = (α1, α2, . . . , αn) ,
where αj = 0 for every j.
1. Prove that if the process to time t = 0 is in state Ei, then for every j ∈ {1, 2, . . . , m}, the process
reaches eventually with probability 1 the state Ej.
We define for every j an random variable Tj by putting Tj = n, if the process is in state Ej for the
first time after the time 0 to the time n.
Denote for every i by mij the mean of Tj, if the process to time t = 0 is in state Ei.
2. Prove that mij is finite for every (i, j).
3. Prove by a convenient splitting of what happens in the first step,
(5) mij =

k
pikmkj − pijmjj + 1 for every (i, j).
4. Prove that the mean of the time of return mii is given by
mii =
1
αi
, i = 1, 2, . . . , m.
Hint: Multiply the i-th equation of (5) by αi, and then sum over i.
1) This follows from the fact that the Markov chain is irreducible, so Ei is transferred into Ek after
some steps.
2) When the Markov chain is irreducible, it follows by considering the graph that there exists a
transition diagram of M transitions, by which one comes from any Ei back to Ei through all the
other states in at most M steps. Hence there exists an a ∈ ]0, 1[, such that
mij ≤
∞

k=0
ak
=
1
1 − a
.
3) If we start in Ei, then after the first step the one on the right hand side of (5) is in state Ek of
probability pi,k, hence
m

k=1
pikmkj + 1.
However, if we end in the state Ej, we count pijmjj too much, so
mij =

k
pikmkj − pijmjj + 1 for every (i, j).
Download free ebooks at bookboon.com
Stochastic Processes 1
101
3. Markov chains
4) By using the hint and that α is invariant, we get

i
αimij =

k


i
αipik

mkj −


i
αipij

mjj +

i
αi
=

k
αkmkj − αjmjj + 1.
The two sums are equal, hence by a rearrangement,
mjj =
1
αj
.
what‘s missing in this equation?
MAERSK INTERNATIONAL TECHNOLOGY  SCIENCE PROGRAMME
You could be one of our future talents
Are you about to graduate as an engineer or geoscientist? Or have you already graduated?
If so, there may be an exciting future for you with A.P. Moller - Maersk.
www.maersk.com/mitas
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
102
3. Markov chains
Example 3.41 Given a Markov chain of the states 0, 1, 2, . . . , and transition probabilities
pi,i+1 = p, pi,0 = q, i ∈ N0, pij = 0 otherwise,
(where p  0, q  0, p + q = 1).
Prove that the Markov chain is regular, and find its stationary distribution.
The corresponding stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎜
⎝
q p 0 0 0 · · ·
q 0 p 0 0 · · ·
q 0 0 p 0 · · ·
q 0 0 0 p · · ·
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
⎞
⎟
⎟
⎟
⎟
⎟
⎠
.
We have trivially the transitions
Ei → E0 → E1 → E2 → · · · , i ∈ N,
so we can get from any state Ei to any other state Ej. This shows that the Markov chain is irreducible.
From p0,0 = q  0 follows that d = 1, and the Markov chain is regular.
A possible stationary distribution g must fulfil the equations
g0 = q
∞

j=0
gj (convergent series),
and
gn = p gn−1, n ∈ N0.
When we divide the recursion formula by pn
 0, we get
1
pn
gn =
1
pn−1
gn−1 = · · · = g0,
hence
gn = pn
g0, n ∈ N0,
is the only possibility. We see by insertion that the series
∞
j=0 gj is in fact convergent and
1 =
∞

j=0
gj = g0
∞

j=0
pn
= g0 ·
1
1 − p
= g0 ·
1
q
,
from which we conclude that g0 = q. Therefore, the Markov chain has a stationary distribution, which
is given by the coordinates
gn = q pn
, n ∈ N0.
Download free ebooks at bookboon.com
Stochastic Processes 1
103
3. Markov chains
Example 3.42 A Markov chain has the countably many states E1, E2, E3, . . . , and transition prob-
abilities
pi,i+1 =
i
i + 1
, pi,1 =
2
i + 2
, i ∈ N, pij = 0 otherwise.
1) Prove that the Markov chain is regular.
2) Prove that there exists a stationary distribution, and then find it.
3) Assume that the process at time t = 0 is in state E1. Let T denote the random variable, which
indicates the time of the first return to E1. Find P{T = k}, k ∈ N.
4) Find the mean E{T}.
5) Prove that T does not have a variance.
1) The infinite stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎝
2
3
1
3 0 0 · · ·
2
4 0 2
4 0 · · ·
2
5 0 0 3
5 · · ·
.
.
.
.
.
.
.
.
.
.
.
.
⎞
⎟
⎟
⎟
⎠
.
We conclude from
E1 → E2 → E3 → · · · → En → · · · ,
and
En → E1 for alle n,
that the Markov chain is irreducible.
Since d1 = 1, the Markov chain is regular.
2) The equations of a possible invariant probability vector are
gi+1 =
i
i + 2
gi, i ∈ N.
Then by recursion,
gi =
2
(i + 1)i
g1, i ∈ N.
Using that the sectional series is telescopic we conclude from
∞

i=1
2
(i + 1)i
= 2
∞

i=1

1
i
−
1
i + 1

= 2,
that the stationary distribution exists and that its coordinates are given by
gi =
1
(i + 1)i
, i ∈ N.
Download free ebooks at bookboon.com
Stochastic Processes 1
104
3. Markov chains
3) It follows that
P{T = 1} =
2
3
,
P{T = 2} =

1 −
2
3

·
2
4
=
1
6
,
P{T = 3} =

1 −
2
3
 
1 −
2
4

·
2
5
=
1
15
,
P{T = 4} = =

1 −
2
3
 
1 −
2
4
 
1 −
2
5

·
2
6
=
1
30
.
We conclude from this pattern that
P{T = k} =
k−1

i=1

1 −
2
i + 2

·
2
k + 2
=
k−1

i=1
i
i + 2
·
2
k + 2
=
2(k − 1)!
(k + 2)!
· 2 =
4
(k + 2)(k + 1)k
.
4) The mean is
E{T} = 4
∞

k=1
1
(k + 2)(k + 1)
= 4
∞

k=1

1
k + 1
−
1
k + 2

=
4
2
= 2.
It all starts at Boot Camp. It’s 48 hours
that will stimulate your mind and
enhance your career prospects. You’ll
spend time with other students, top
Accenture Consultants and special
guests. An inspirational two days
packed with intellectual challenges
and activities designed to let you
discover what it really means to be a
high performer in business. We can’t
tell you everything about Boot Camp,
but expect a fast-paced, exhilarating
and intense learning experience.
It could be your toughest test yet,
which is exactly what will make it
your biggest opportunity.
Find out more and apply online.
Choose Accenture for a career where the variety of opportunities and challenges allows you to make a
difference every day. A place where you can develop your potential and grow professionally, working
alongside talented colleagues. The only place where you can learn from our unrivalled experience, while
helping our global clients achieve high performance. If this is your idea of a typical working day, then
Accenture is the place to be.
Turning a challenge into a learning curve.
Just another day at the office for a high performer.
Accenture Boot Camp – your toughest test yet
Visit accenture.com/bootcamp
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
105
3. Markov chains
5) Now
k2
·
4
(k + 2)(k + 1)k
∼
4
k
,
and the series
 4
k = ∞ is divergent, hence the variance does not exist.
Example 3.43 Given a Markov chain of the states E1, E2, E3, E4 and E5 and with the stochastic
matrix
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 3
4
1
4 0 0
0 0 2
3
1
3 0
0 0 0 1
2
1
2
0 0 0 0 1
a 0 0 0 1 − a
⎞
⎟
⎟
⎟
⎟
⎠
,
where a is a constant in the interval [0, 1].
1) Find the values of a, for which the Markov chain is irreducible.
2) Find the values of a, for which the Markov chain is regular.
3) Find for every a the invariant probability vector.
4) At time t = 0 the process is in state E1. Let T denote the time when the process for the first time
is in state E5. Find the distribution of the random variable T.
5) Find the mean and variance of T.
6) Assume that a = 0. Prove that all the matrices Pn
for n ≥ 4 are equal to the same matrix Q, and
find Q.
1) If a = 0, then E5 is absorbing, and the Markov chain is not irreducible.
If a ∈ ]0, 1], then we get the transitions
E5 → E1 → E2 → E3 → E4 → E5,
proving that the Markov chain is irreducible for a ∈ ]0, 1].
2) If a ∈ ]0, 1[, then the diagonal element p55 = 1 − a  0, hence the Markov chain is regular.
On the other hand, if a = 1, then
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 3
4
1
4 0 0
0 0 2
3
1
3 0
0 0 0 1
2
1
2
0 0 0 0 1
a 0 0 0 1 − a
⎞
⎟
⎟
⎟
⎟
⎠
, P2
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0 1
2
3
8
1
8
0 0 0 1
3
2
3
1
2 0 0 0 1
2
1 0 0 0 0
0 3
4
1
4 0 0
⎞
⎟
⎟
⎟
⎟
⎠
,
and
P4
=
⎛
⎜
⎜
⎜
⎜
⎝
5
8
3
32
1
32 0 1
4
1
3
1
2
1
6 0 0
0 3
8
3
8
3
16
1
16
0 0 1
2
3
8
1
8
1
8 0 0 1
4
5
8
⎞
⎟
⎟
⎟
⎟
⎠
.
Download free ebooks at bookboon.com
Stochastic Processes 1
106
3. Markov chains
Since P4
has the transitions
E5 → E1 → E2 → E3 → E4 → E5,
the corresponding Markov chain is irreducible.
Furthermore, all elements of the diagonal are  0, so the Markov chain corresponding to the matrix
P4
is regular. This implies that the original Markov chain is also regular.
3) The equations of the invariant probability vector are
g1 = ag5,
g2 =
3
4
g1,
g3 =
1
4
g1 +
2
3
g2,
g4 =
1
3
g2 +
1
2
g2,
g5 =
1
2
g3 + g4 + (1 − a)g5,
thus
g1 = ag5,
g2 =
3
4
a g5,
g3 =
1
4
a g5 +
1
2
a g5 =
3
4
a g5,
g4 =
1
4
a g5 +
3
8
a g5 =
5
8
a g5.
A check gives
a g5 =
3
8
a g5 +
5
8
a g5 = a g5,
so it is OK.
Furthermore,
1 = g1 + g2 + g3 + g4 + g5 = g5

a +
3
4
a +
3
4
a +
5
8
a + 1

=

1 +
25
8
a

g5,
from which g5 =
8
8 + 25a
, and thus
g =
1
8 + 25a
(8a, 6a, 6a, 5a, 8).
4) Here a consideration of the graph is the easiest method:
E1
3
4
→ E2
2
3
→ E3
1
2
→ E4
1
→ E5
 1
4  1
3  1
2
E3
1
2
→ E4
1
→ E5
 1
2
E5
t = 0 t = 1 t = 2 t = 3 t = 4
Download free ebooks at bookboon.com
Stochastic Processes 1
107
3. Markov chains
It follows that
P{T = 1} = 0 and P{T = 2} =
1
4
·
1
2
=
1
8
.
To time t = 3 we get the paths
E1 → E2 → E3 → E5, sandsynlighed:
3
4
·
2
3
·
1
2
=
1
4
,
E1 → E2 → E4 → E5, sandsynlighed:
3
4
·
1
3
· 1 =
1
4
,
E1 → E3 → E4 → E5, sandsynlighed:
1
4
·
1
2
· 1 =
1
8
,
hence
P{T = 3} =
1
4
+
1
4
+
1
8
=
5
8
.
Finally,
P{T = 4} =
3
4
·
2
3
·
1
2
=
1
4
,
hence, summing up,
P{T = 2} =
1
8
, P{T = 3} =
5
8
, P{T = 4} =
1
4
.
5) The mean is
E{T} = 2 ·
1
8
+ 3 ·
5
8
+ 4 ·
1
4
=
25
8
.
Furthermore,
E
%
T2

= 4 ·
1
8
+ 9 ·
5
8
+ 16 ·
1
4
=
81
8
,
so
V {T} =
81
8
−
625
64
=
648 − 625
64
=
23
64
.
6) If a = 0, then
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 3
4
1
4 0 0
0 0 2
3
1
3 0
0 0 0 1
2
1
2
0 0 0 0 1
0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎠
, P2
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0 1
2
3
8
1
8
0 0 0 1
3
2
3
0 0 0 0 1
0 0 0 0 1
0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎠
, P4
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0 0 0 1
0 0 0 0 1
0 0 0 0 1
0 0 0 0 1
0 0 0 0 1
⎞
⎟
⎟
⎟
⎟
⎠
.
Then it is obvious that P5
= P P4
= P4
, because the sums of the rows of P are 1. We conclude
that
Pn
= P4
, for n ≥ 4.
Download free ebooks at bookboon.com
Stochastic Processes 1
108
3. Markov chains
Example 3.44 Given a Markov chain of 5 states E1, E2, E3, E4 and E5, and transition probabilities
p1,1 = 1 − 4a, p1,2 = p1,3 = p1,4 = p1,5 = a,
p2,1 = p2,2 = p3,2 = p3,3 = p4,3 = p4,4 = p5,4 = p5,5 =
1
2
,
Pi,j = 0 otherwise.
Here a is a constant in the interval

0,
1
4
#
.
1. Find the stochastic matrix P.
2. Find the values of a, for which the Markov chain is irreducible.
3. Find the values of a, for which the Markov chain is regular.
4. Find for every a the invariant probability vector.
At time t = 0 the process is in state E2. Let T2 denote the random variable, which indicates the time,
when the process for the first time is in state E1.
5. Find P {T2 = k}, k ∈ N, and compute the mean of T2.
Then put a =
1
4
and assume that the process at time t = 0 is in state E1, and let T denote the time
of its first return to E1.
6. Find the mean of T.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
1 − 4a a a a a
1
2
1
2 0 0 0
0 1
2
1
2 0 0
0 0 1
2
1
2 0
0 0 0 1
2
1
2
⎞
⎟
⎟
⎟
⎟
⎠
.
2) If a = 0, then E1 is clearly an absorbing state, so P is not irreducible for a = 0.
When 0  a ≤
1
4
we notice the oblique diagonal below the main diagonal. All elements of this
diagonal are
1
2
, so we have always the flow
E5 → E4 → E3 → E2 → E1.
Now a  0 implies that also E1 → E5, so we get e.g.
E1 → E5 → E4 → E3 → E2 → E1,
proving that P is irreducible for 0  a ≤
1
4
.
3) If a ∈
#
0,
1
4
#
, then P is irreducible. Since there exist positive elements in the diagonal (e.g.
p2,2 = 1
2 ), it follows that P is regular for every a ∈
#
0,
1
4
#
.
Download free ebooks at bookboon.com
Stochastic Processes 1
109
3. Markov chains
4) The system of equations g P = g is written
(1 − 4a)g1 = 1
2 g2 = g1,
a g1 = 1
2 g2 = 1
2 g3 = g2,
a g1 + 1
2 g3 + 1
2 g4 = g3,
a g1 + 1
2 g4 = 1
2 g5 = g4,
a g1 + 1
2 g5 = g5,
from which clearly
g2 = 8a g1 and g5 = 2a g1.
Then by insertion of these values,
g3 = 6a g1 and g4 = 4a g1.
Finally,
1 =
5

i=1
gi = g1(1 + 8a + 6a + 4a + 2a) = (20a + 1)g1,
so
g =
1
20a + 1
(1, 8a, 6a, 4a, 2a).
In particular, g = (1, 0, 0, 0, 0) for a = 0.


       
In Paris or Online
International programs taught by professors and professionals from all over the world
BBA in Global Business
MBA in International Management / International Marketing
DBA in International Business / International Management
MA in International Education
MA in Cross-Cultural Communication
MA in Foreign Languages
Innovative – Practical – Flexible – Affordable
Visit: www.HorizonsUniversity.org
Write: Admissions@horizonsuniversity.org
Call: 01.42.77.20.66 www.HorizonsUniversity.org
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
110
3. Markov chains
5) We note that T2 is geometrically distributed, so
P {T2 = k} =

1
2
k
, k ∈ N,
and we have E {T2} = 2.
6) Starting at E1 we reach in the first step one of the states E2, E3, E4 or E5, all of probability
1
4
.
From these states it takes in average 2, 4, 6 or 8 steps to get back to E1. Consequently
E{T} = 1 +
1
4
{2 + 4 + 6 + 8} = 6.
Example 3.45 Given a Markov chain of 5 states E0, E1, E2, E3 and E4, and transition probabilities
p0,1 = p4,3 = 1, p3,2 = p2,1 = p1,0 =
2
3
,
p1,2 = p3,4 =
1
3
, p2,3 =
a
3
, p2,4 =
1 − a
3
,
pi,j = 0 otherwise.
Here a is a constant in the interval [0, 1]
1. Find the stochastic matrix P.
2. Prove that the Markov chain is irreducible for every a ∈ [0, 1].
3. Find for a ∈ [0, 1] the invariant probability vector.
4. Prove that the Markov chain is regular for a ∈ [0, 1[, but not for a = 1.
We assume in the following that the process at time t = 0 is in state E2.
5. Find for a = 1 the probability that the process gets to the state E4 before the state E0.
Hint: One may apply results concerning the ruin problem.
6. Find for a = 0 the probability that the process gets to state E4 before to state E0-
7. Find for every a ∈ [0, 1] the probability that the process reaches state E4 before state E0.
1) The stochastic matrix is
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1 0 0 0
2
3 0 1
3 0 0
0 2
3 0 a
3
1−a
3
0 0 2
3 0 1
3
0 0 0 1 0
⎞
⎟
⎟
⎟
⎟
⎠
, a ∈ [0, 1].
Download free ebooks at bookboon.com
Stochastic Processes 1
111
3. Markov chains
2) When a ∈ ]0, 1], then we have the transitions
E0 ←→ E1 ←→ E2 ←→ E3 ←→ E4,
and when a ∈ [0, 1[, then we have the transitions
E0 ←→ E1 ←→ E2 ←− E3,
 
E4,
and it follows that the chain is irreducible.
One might e.g. split into the three cases
a = 0 : E0 ←→ E1 ←→ E2 ←− E3,
 
E4,
a = 1 : E0 ←→ E1 ←→ E2 ←→ E3,

E4,
0  a  1 : E0 ←→ E1 ←→ E2 ←→ E3,
 
E4,
from which one also derives the irreducibility.
3) The system of equations g P = g is written
⎧
⎪
⎪
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎪
⎪
⎩
2
3 g1 = g0,
g0 + 2
3 g2 = g1,
1
3 g1 + 2
3 g3 = g2,
a
3 g2 +g4 = g3,
1−a
3 g3 + 1
3 g3 = g4,
thus
g1 =
3
2
g0,
g2 =
3
2
(g1 − g0) =
3
4
g0,
g3 =
3
2

g2 −
1
3
g1

=
3
8
g0,
g4 = g3 −
a
3
g2 =

3
8
−
a
4

g0.
Recalling that
1 =
4

i=0
gi = g0

1 −
3
2
+
3
4
+
3
8
+
3
8
−
a
4

= g0
'
4 −
a
4
(
=
16 − a
4
g0,
Download free ebooks at bookboon.com
Stochastic Processes 1
112
3. Markov chains
we conclude that
g0 =
4
16 − a
,
and hence
g =
4
16 − a

1,
3
2
,
3
4
,
3
8
,
3
8
−
a
4

=
1
32 − 2a
(8, 12, 6, 3, 3 − 2a).
4) If a = 1, then we have a random walk {E0, E1, E2, E3, E4}. It follows from the diagram
E0 ←→ E1 ←→ E2 ←→ E3 ←→ E4,
that it is only possible to get from E0 back to E0 through an even number of steps. Hence the
Markov chain is periodic of period 2, and it is not regular.
If a  1, then it was mentioned previously that we have the diagram
E0 ←→ E1 ←→ E2 ←→ E3,
 
E4.
The chain E4 −→ E3 −→ E4 shows that p
(2)
44  0, and thus p
(2n)
44  0.
The chain E4 −→ E3 −→ E2 −→ E4 shows that p
(3)
44  0. By a composition with the first chain it
follows that p
(2n+1)
44  0, thus p
(n)
44  0 for n  2.
Since P is irreducible, we conclude that P is regular.
Alternatively we compute Pn
, and it is easily seen that all elements of P6
are  0, and the
claim is proved-
5) We now return to the diagram for a = 1, i.e.
E0 ←→ E1 ←→ E2 ←→ E3 ←→ E4,
where E2 is the initial state.
This can be interpreted as a ruin problem, where we shall get to E4 before E0.
The interpretation gives
N = 4, k = 2, p =
1
3
, q =
2
3
,
so we are in case b) with
q
p
= 2, hence the probability of reaching E0 before E4 is
a2 =
22
− 24
1 − 24
=
12
15
=
4
5
.
Thus the wanted probability (of the complementary event) is
b2 = 1 − a2 =
1
5
.
Alternatively. Let bk denote the probability that we by starting in Ek reaches E4 before E0.
When we split the investigation according to what happens after one step, we get
bk =
1
3
bk+1 +
2
3
bk−1, k = 1, 2, 3, and b0 = 0, b4 = 1,
Download free ebooks at bookboon.com
Stochastic Processes 1
113
3. Markov chains
thus
1
3
bk+1 − bk +
2
3
bk−1 = 0.
The complete solution is
bk = c1 · 1 + c2 · 2k
.
We get
k = 0 : c1 + c2 = b0 = 0,
k = 4 : c1 + 16c2 = b4 = 1,
hence
⎧
⎪
⎪
⎨
⎪
⎪
⎩
c1 = −
1
15
,
c2 =
1
15
,
and whence
bk =
1
15

2k
− 1 , and in particular b2 =
3
15
=
1
5
.
By 2020, wind could provide one-tenth of our planet’s
electricity needs. Already today, SKF’s innovative know-
how is crucial to running a large proportion of the
world’s wind turbines.
Up to 25 % of the generating costs relate to mainte-
nance. These can be reduced dramatically thanks to our
systems for on-line condition monitoring and automatic
lubrication. We help make it more economical to create
cleaner, cheaper energy out of thin air.
By sharing our experience, expertise, and creativity,
industries can boost performance beyond expectations.
Therefore we need the best employees who can
meet this challenge!
The Power of Knowledge Engineering
Brain power
Plug into The Power of Knowledge Engineering.
Visit us at www.skf.com/knowledge
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
114
3. Markov chains
Alternatively. We can reach E4 without passing E0 by either
go directly to E4 (in two steps), probability
1
9
,
or
be back at E2 after two steps,
p
(2)
22 =
1
3
·
2
3
+
2
3
·
1
3
=
4
9
,
and then go directly to E4,
or
be back at E2 after 2 · 2 steps etc..
Summing up we get the probability
1
9
∞

n=0

4
9
n
=
1
9
·
1
1 −
4
9
=
1
9
·
9
5
=
1
5
.
6) If a = 0, then we have the diagram
E0 ←→ E1 ←→ E2 ←→ E4,
which again may be interpreted as a ruin problem. The probability of reaching E0 before E4 is for
N = 3, k = 2, p =
1
3
, q =
2
3
given by
a2 =
22
− 23
1 − 23
=
4
7
.
The wanted probability (of the complementary event) is
b2 = 1 −
4
7
=
3
7
.
Alternatively this question can also be solved by the two alternatives described in 5..
7) The general case. Let ck denote the probability when starting in Ek to reach E4 before E0.
We get by the usual splitting
c0 = 0,
c4 = 1,
c1 =
1
3
c2,
c2 =
2
3
c1 +
1 − a
3
+
a
3
c3,
c3 =
1
3
+
2
3
c2.
Download free ebooks at bookboon.com
Stochastic Processes 1
115
3. Markov chains
When we insert the expressions of c1 and c3 into the equations of c2, we get
c2 =
2
3
·
1
3
c2 +
1 − a
3
+
a
3

1
3
+
2
3
c2

=
2
9
c2 +
2
9
a c2 +
1
3
−
a
3
+
a
9
,
which is reduced to
c2

7
9
−
2
9
a

=
1
3
−
2a
9
=
3 − 2a
9
,
hence
c2 =
3 − 2a
7 − 2a
.
Check. If a = 0, then c2 =
3
7
, cf. 6., and if a = 1, then c2 =
1
5
, cf. 5.
Alternatively we split according to when we last time were in state E2,
p
(2)
22 =
2
3
·
1
3
+
a
3
·
2
3
=
2
9
(1 + a).
www.simcorp.com
MITIGATE RISK REDUCE COST ENABLE GROWTH
The financial industry needs a strong software platform
That’s why we need you
SimCorp is a leading provider of software solutions for the financial industry. We work together to reach a common goal: to help our clients
succeed by providing a strong, scalable IT platform that enables growth, while mitigating risk and reducing cost. At SimCorp, we value
commitment and enable you to make the most of your ambitions and potential.
Are you among the best qualified in finance, economics, IT or mathematics?
Find your next challenge at
www.simcorp.com/careers
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
116
3. Markov chains
The probability of going from E2 to E4 in one or two steps is
a
3
·
1
3
+
1 − a
3
=
1
3
−
2
9
a =
3 − 2a
9
.
The wanted probability is
3 − 2a
9
∞

n=0

2
9
(1 + a)
n
=
3 − 2a
9
·
1
1 −
2
9
−
2a
9
=
3 − 2a
7 − 2a
.
Example 3.46 A Markov chain of the states E1, E2, E3 and E4 has the stochastic matrix
P =
⎛
⎜
⎜
⎝
0 1
2
1
2 0
0 0 1
2
1
2
0 0 1
2
1
2
a 0 0 1 − a
⎞
⎟
⎟
⎠ ,
where a is a constant in the interval [0, 1].
1. Find the values of a, for which the Markov chain is irreducible, and the values of a, for which it is
regular.
2. Find for every a the invariant probability vector.
A particle moves between the states E1, E2, E3 and E4 of the given transition probabilities. At time
t = 0 the particle is in state E1. Let T denote the random variable, which indicates the time, when
the particle for the first time is in state E4.
3. Find P{T = 2}.
Hint: Split the investigation according to whether the particle is passing through state E2 or state
E3.
4. Find P{T = n} for n = 2, 3, 4, . . . .
5. Find the mean of T.
6. Explain why we have in the case a = 0 that
Pn
→
⎛
⎜
⎜
⎝
0 0 0 1
0 0 0 1
0 0 0 1
0 0 0 1
⎞
⎟
⎟
⎠ for n → ∞.
1) If a = 0, then E4 is absorbing, and the Markov chain is neither irreducible nor regular for a = 0.
If a ∈ ]0, 1], then we have the transitions
E1 −→ E2 −→ E3 −→ E4 −→ E1,
and the Markov chain is irreducibel.
Since p3,3 =
1
2
 0, it is also regular for a ∈ ]0, 1].
Download free ebooks at bookboon.com
Stochastic Processes 1
117
3. Markov chains
2) The equations of the invariant probability vector are
g1 = a g4, thus g1 = a g4,
g2 = 1
2 g1, thus g2 = 1
2 a g4,
g3 = 1
2 g1 + 1
2 g2 + 1
2 g3, thus g3 = g1 + g2 = 3
2 a g4,
hence
1 = g1 + g2 + g3 + g4 = g4

a +
1
2
a +
3
2
a + 1

)(1 − 3a)g4.
The invariant probability vector is
g =
1
1 + 3a

a,
a
2
,
3a
2
, 1

.
3) We derive from the matrix the tree
E1 → E2 → E3
 
R3 → E4

E3
where all arrows have the weight
1
2
, thus
P{T = 2} =
1
2
·
1
2
+
1
2
·
1
2
=
1
2
.
4) We have at step n = 2 only the possibilities E3 and E4. Hence
P{T = n} =

1
2
n−1
for n ≥ 2.
5) The mean is
E{T} =
∞

n=2
n

1
2
n−1
=
∞

n=1
n

1
2
n−1
− 1 =
1

1 −
1
2
2 − 1 = 3.
6) If a = 0, then
P =
⎛
⎜
⎜
⎝
0 1
2
1
2 0
0 0 1
2
1
2
0 0 1
2
1
2
0 0 0 1
⎞
⎟
⎟
⎠ ,
thus
⎧
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎩
p
(n+1)
i,1 = 0,
p
(n+1)
i,2 =
1
2
p
(n)
2,i = 0,
p
(n+1)
i,3 =
1
2
'
p
(n)
3,1 + p
(n)
3,2 + p
(n)
3,3
(
=
1
2
p
(n)
3,3 ,
Download free ebooks at bookboon.com
Stochastic Processes 1
118
3. Markov chains
and hence p
(n)
1,j = p
(n)
2,j = 0, and
p
(n+1)
3,1 + p
(n+1)
3,2 + p
(n+1)
3,3 =
1
2
'
p
(n)
3,1 + p
(n)
3,2 + p
(n)
3,3
(
.
This shows that
p
(n)
i,j → 0 for n → ∞, ifi = 1, 2, 3, 4 and j = 1, 2, 3,
and we get
p
(n)
i,4 → 1 for n → ∞,
and the claim is proved.
Example 3.47 A Markov chain of the states E1, E2, E3, E4 and E5 has the stochastic matrix
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1
3
1
3
1
3 0
0 0 1
2
1
2 0
0 0 0 1
2
1
2
0 0 0 0 1
a 0 0 0 1 − a
⎞
⎟
⎟
⎟
⎟
⎠
,
where a is a constant in the interval [0, 1].
1. Find the values of a, for which the Markov chain is irreducible.
2. Find the values of a, for which the Markov chain is regular.
3. Find for every a the invariant probability vector.
Assume in the following that a =
1
2
.
At time t = 0 the process is in state E1. Let T denote the random variable, which indicates the
time, when the particle for the first time is in state E5, and let U denote the random variable, which
indicates the time, when the particle for the first time returns to the state E1.
4. Find P{T = k} for k = 2, 3, 4, and the mean of T.
5. Find P{U = 3} and P{Y = 4}.
6. Find P{U = k} for k = 5, 6, . . . .
Hint: Split into the cases T = 2, T = 3 or T = 4.
1) When a = 0, then E5 is absorbing, and the Markov chain is not irreducible.
If a ∈ ]0, 1], then we have the transitions
E1 −→ E2 −→ E3 −→ E4 −→ E5 −→ E1,
and the Markov chain is irreducible for a ∈ ]0, 1].
Download free ebooks at bookboon.com
Stochastic Processes 1
119
3. Markov chains
2) If a ∈ ]0, 1[, then the element of the diagonal p5,5 = 1 − a  0, and since the Markov chain is
irreducible for a ∈ ]0, 1[, it is also regular for a ∈ ]0, 1[.
If a = 1, then
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1
3
1
3
1
3 0
0 0 1
2
1
2 0
0 0 0 1
2
1
2
0 0 0 0 1
1 0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
, P2
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0 1
6
1
3
1
2
0 0 0 1
4
3
4
1
2 0 0 0 1
2
1 0 0 0 0
0 1
3
1
3
1
3 0
⎞
⎟
⎟
⎟
⎟
⎠
, P3
=
⎛
⎜
⎜
⎜
⎜
⎝
1
2 0 0 1
12
5
12
3
4 0 0 0 1
4
1
2
1
6
1
6
1
6 0
0 1
3
1
3
1
3 0
0 0 1
6
1
3
1
2
⎞
⎟
⎟
⎟
⎟
⎠
.
The Markov chain corresponding to P3
has the transitions
E1 −→ E5 −→ E4 −→ E3 −→ E2 −→ E1,
so it is irreducible. Since the element of the diagonal p
(3)
1,1 =
1
2
 0, it is also regular. Hence the
original Markov chain is also regular for a = 1, so the Markov chain is regular for a ∈ ]0, 1].
3) The equations of the invariant probability vector are
g1 = a g5, thus g1 = a g5,
g2 = 1
3 g1, thus g2 = 1
3 a g5,
g3 = 1
3 g1 + 1
2 g2, thus g3 = 3
2 g2 = 1
2 a g5,
g4 = 1
3 g1 + 1
2 g2 + 1
2 g3, thus g4 = 3
2 g3 = 3
4 a g5.
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
120
3. Markov chains
Hence
1 = g1 + g2 + g3 + g4 + g5 = g5

a +
1
3
a +
1
2
a +
3
4
a + 1

=

1 +
31
12
a

g5,
and the invariant probability vector is
g =
12
12 + 31a

a,
a
3
,
a
2
,
3
4
a, 1

.
4) We have the tree
t = 0 t = 1 t = 2 t = 3 t = 4
E1
1
3
−→ E2
1
2
−→ E3
1
2
−→ E4
1
−→ E5
  1
3  1
2  1
2  1
2
E1 E3
1
2
−→ E4
1
−→ E5
 1
3  1
2  1
2
E4
1
−→ E5
When we compute P{T = 2} we have the paths
E1 −→ E3 −→ E5 probability: 1
3 · 1
2 = 1
6 ,
E1 −→ E4 −→ E5 probability: 1
3 · 1 = 1
3 ,
thus
P{T = 2} =
1
6
+
1
3
=
1
2
.
When we compute P{T = 3} we have the paths
E1 −→ E2 −→ E3 −→ E5, probability: 1
3 · 1
2 · 1
2 = 1
12 ,
E1 −→ E2 −→ E4 −→ E5, probability: 1
3 · 1
2 · 1 = 1
6 ,
E1 −→ E3 −→ E4 −→ E5, probability: 1
3 · 1
2 · 1 = 1
6 ,
hence
P{T = 3} =
1
12
+
1
6
+
1
6
=
5
12
.
When we compute P{T = 4} we shall only consider the path
E1 −→ E2 −→ E3 −→ E4 −→ E5, probability:
1
3
·
1
2
·
1
2
=
1
12
.
5) It is only possible to reach E1 via E5. Since a =
1
2
, we have
P{U = 3} = P{T = 2} · P {E5 → E1} =
1
2
·
1
2
=
1
4
,
P{U = 4} = P{T = 2} · P {E5 → E5 → E1} + P{T = 3} · P {E5 → E1}
=
1
2
·
1
2
·
1
2
+
5
12
·
1
2
=
3 + 5
24
=
1
3
.
Download free ebooks at bookboon.com
Stochastic Processes 1
121
3. Markov chains
6) When k ≥ 5, we shall find how much “mass”, which is collected in total in E5 at t = 4. This mass
of probability is
P{T = 2} ·
1
2
·
1
2
+ P{T = 3} ·
1
2
+ P{T = 4} =
1
8
+
5
24
+
1
12
=
10
24
=
5
12
.
In every one of the following steps, half of it remains in E5, and the other half is transferred to
E1, so
P{U = k} =
5
12
·

1
2
k−4
for k ≥ 5.
Challenging? Not challenging? Try more
Try this...
www.alloptions.nl/life
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
122
3. Markov chains
Example 3.48 A Markov chain of 2 states E1 and E2 has the stochastic matrix
Q =
⎛
⎜
⎜
⎝
1
5
4
5
3
5
2
5
⎞
⎟
⎟
⎠ .
1. Prove that Q is regular, and find the invariant probability vector.
Another Markov chain of 4 states E1, E2, E3 and E4 has the stochastic matrix
P =
⎛
⎜
⎜
⎝
1
5
4
5 0 0
3
5
2
5 0 0
1
5 0 2
5
2
5
0 1
5
2
5
2
5
⎞
⎟
⎟
⎠ .
2. Prove that P is not irreducible, and find its closed subsets.
3. Prove for every initial distribution
p(0)
=

p
(0)
1 , p
(0)
2 , p
(0)
3 , p
(0)
4

that
p
(n)
3 + p
(n)
4 =
4
5
'
p
(n−1)
3 + p
(n−1)
4
(
, n ∈ N,
and then prove that
lim
n→∞
p
(n)
3 = lim
n→∞
p
(n)
4 = 0.
4. Show that limn→∞ p(n)
exists and find the limit vector.
5. At time t = 0 the process is in state E3. Find for every n ∈ N the probability that the process for
t = n for the first time is in state E1 without previously having been in state E2.
1) All elements of Q are  0, so the Markov chain is regular. The equations of the invariant probability
vector are
g1 = 1
5 g1 + 3
5 g2, thus g2 = 4
3 g1,
1 = g1 + g2 =

1 + 4
3 g1 = 7
3 g1,
so
g =

3
7
,
4
7

.
2) Clearly, {E1, E2} is a closed subset, and there is no other proper closed subset. However, since
there exists a proper closed subset, the Markov chain is not irreducible.
Download free ebooks at bookboon.com
Stochastic Processes 1
123
3. Markov chains
3) It follows immediately that
p
(n)
3 =
2
5
p
(n−1)
3 +
2
5
p
(n−1)
4 ,
and
p
(n)
4 =
2
5
p
(n−1)
3 +
2
5
p
(n−1)
4 = p
(n)
3 ,
hence
p
(n)
3 + p
(n)
4 =
4
5
'
p
(n−1)
3 + p
(n−1)
4
(
, n ∈ N.
Then by iteration,
p
(n)
3 + p
(n)
4 =

4
5
n '
p
(0)
3 + p
(0)
4
(
→ 0 for n → ∞,
so
0 = lim
n→∞
'
p
(n)
3 + p
(n)
4
(
= 2 lim
n→∞
p
(n)
3 = 2 lim
n→∞
p
(n)
4 .
4) It follows from 3. that the latter two coordinates tend towards 0 for n → ∞. The former two
coordinates are governed by the matrix Q, so we conclude from 1. that
lim
n→∞
p(n)
=

3
7
,
4
7
, 0, 0

.
5) The two states E3 and E4 occur in every step of the same weight, thus
1
2
·
1
5
=
1
10
of the total
weight goes to E1, and
1
2
·
1
5
=
1
10
of the total weight goes to E2. Then we have the diagram
E2
1
10  
E3
4
5
→ {E3, E4}
4
5
→ {E3, E4} →,
1
5  1
10  
E1 E1
hence
P{T = 1} =
1
5
, P{T = 2} =
4
5
·
1
10
, . . . , P{T = n} =
1
10

4
5
n−1
,
which can also be written
P{T = n} =
22n−3
5n
=
1
8

4
5
n
for n ≥ 2,
together with
P{T = 1} =
1
5
for n = 1.
Download free ebooks at bookboon.com
Stochastic Processes 1
124
3. Markov chains
Example 3.49 A Markov chain of the states E1, E2, E3, E4 and E5 has the stochastic matrix
P =
⎛
⎜
⎜
⎜
⎜
⎝
0 1
3
1
3 0 1
3
0 0 1 0 0
0 0 0 1
2
1
2
0 0 0 0 1
a 0 0 0 1 − a
⎞
⎟
⎟
⎟
⎟
⎠
,
where a is a constant in the interval [0, 1].
1. Find the values of a, for which the Markov chain is irreducible.
2. Find the values of a, for which the Markov chain is regular.
3. Find for every a the invariant probability vector.
The process is at time t = 0 in the state E1. Let T denote the random variable, which indicates the
time when the process for the first time is in state E5.
4. Find P{T = k} for k = 1, 2, 3, 4, and then the mean of T.
Assume that a  0 and that the process at time t = 0 is in state E1. Let U denote the random variable,
which indicates the time when the process for the first time returns to E1.
5. Find the mean of U.
1) If a = 0, then E5 is absorbing, and the Markov chain is not irreducible.
If a ∈ ]0, 1], then we have the transitions
E5 −→ E1 −→ E2 −→ E3 −→ E4 −→ E5,
and the Markov chain is irreducible for a ∈ ]0, 1].
2) We have proved for a ∈ ]0, 1[ that the Markov chain is irreducible, and since the element of the
diagonal p5,5 = 1 − a  0, we conclude that the Markov chain is regular for every a ∈ ]0, 1[.
If a = 1, and we let denote elements  0, then
P2
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0
0 0 0 0
0 0 0
0 0 0 0
0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
0 0
0 0 0 0
0 0 0
0 0 0 0
0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
=
⎛
⎜
⎜
⎜
⎜
⎝
0
0 0 0
0 0 0
0 0 0 0
0 0
⎞
⎟
⎟
⎟
⎟
⎠
.
The Markov chain corresponding to P2
has the transitions
E1 −→ E3 −→ E5 −→ E2 −→ E4 −→ E1,
from which follows that it is irreducible. Furthermore, P2
has elements in its diagonal which are
 0, hence it is also regular. This implies again that the original Markov chain is regular for
a ∈ ]0, 1].
Download free ebooks at bookboon.com
Stochastic Processes 1
125
3. Markov chains
3) The equations of the invariant probability vector are
g1 = a g5, thus g1 = a g5,
g2 =
1
3
g1, thus g2 =
1
3
a g5
g3 =
1
3
g1 + g2, thus g3 =
2
3
a g5,
g4 =
1
2
g3, thus g4 =
1
3
a g5,
where the latter equation is used as a check:
g5 =
1
3
g1 +
1
2
g3 + g4 + (1 − a)g5.
We have furthermore the condition
1 = g1 + g2 + g3 + g4 + g5 = g5

a +
1
3
a +
2
3
a +
1
3
a + 1

= g5

7
3
a + 1

,
hence g5 =
3
7a + 3
, and
g =
1
7a + 3
(3a, a, 2a, a, 3).
Stand out from the crowd
Designed for graduates with less than one year of full-time postgraduate work
experience, London Business School’s Masters in Management will expand your
thinking and provide you with the foundations for a successful career in business.
The programme is developed in consultation with recruiters to provide you with
the key skills that top employers demand. Through 11 months of full-time study,
you will gain the business knowledge and capabilities to increase your career
choices and stand out from the crowd.
Applications are now open for entry in September 2011.
For more information visit www.london.edu/mim/
email mim@london.edu or call +44 (0)20 7000 7573
Masters in Management
London Business School
Regent’s Park
London NW1 4SA
United Kingdom
Tel +44 (0)20 7000 7573
Email mim@london.edu
www.london.edu/mim/
Fast-track
your career
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
126
3. Markov chains
4) If to t = 0 we start in E1, then we have the tree
E1
1
3
→ E2
1
→ E3
1
2
→ E4
1
→ E5
  1
3  1
2
E1 E3
1
2
→ E4
1
→ E5
 1
3  1
2
E5 E5
t = 1 t = 2 t = 3 t = 4
From this we infer that
P{T = 1} =
1
3
,
P{T = 2} =
1
3
·
1
2
=
1
6
,
P{T = 3} =
1
3
·
1
2
· 1 +
1
3
· 1 ·
1
2
=
1
3
,
P{T = 4} =
1
3
· 1 ·
1
2
=
1
6
.
The mean of T is
E{T} = 1 ·
1
3
+ 2 ·
1
6
+ 3 ·
1
3
+ 4 ·
1
6
=
1
3
+
1
3
+ 1 +
2
3
=
7
3
.
5) We first notice that we can only reach E1 via E5.
If the process is in E5, then we have the probability a of in the next step to be in E1, and probability
1 − a of to remain in E5. This gives a geometric distribution Pas(1, a) of mean
1
a
. Then finally,
E{U} = E{T} +
1
a
=
7
3
+
1
a
.
Download free ebooks at bookboon.com
Stochastic Processes 1
127
3. Markov chains
Example 3.50 A Markov chain of states E1, E2, E3 and E4 has the stochastic matrix
P =
⎛
⎜
⎜
⎝
0 1 0 0
1
2 0 1
4
1
4
0 0 0 1
0 a 0 1 − a
⎞
⎟
⎟
⎠ ,
where a is a constant in the interval [0, 1].
1. Find the values of a, for which the Markov chain is irreducible.
2. Find the values of a, for which the Markov chain is regular.
3. Find for every a the invariant probability vector.
At t = 0 the process is in state E2. Let T denote the random variable, which indicates the time, when
the process for the first time is in state E4.
4. Find the probabilities P{T = 1} and P{T = 2}.
5. Prove that for every k ∈ N0,
P{T = 2k + 1} = P{T = 2k + 2},
and find these probabilities.
6. Find the mean of the random variable T.
1) By analyzing the stochastic matrix we obtain the diagram
E1 ←→ E2 −→ E3 −→ E4  1 − a
↓ ↓ a
E4 E2
If a = 0, then the Markov chain is absorbing with E4 as absorbing state.
If a  0, then clearly the Markov chain is irreducible.
2) The Markov chain is not irreducible, and therefore not regular either for a = 0.
If a  0, then p
(2)
22  0 and p
(3)
22  0, and the Markov chain is regular.
Alternatively one may prove that all elements of P5
are  0.
Alternatively we have for 0  a  1 that p44  0, and we shall only investigate the case a = 1
separately.
3) Ligningerne g P = g for sandsynlighedsvektoren skrives
1
2
g2 = g1, thus g2 = 2g1,
g1 + a g4 = g2, thus a g4 = g1,
1
4
g2 = g3,
g3 =
1
2
g1,
1
4
g2 + g3 + (1 − a)g4 = g4.
Download free ebooks at bookboon.com
Stochastic Processes 1
128
3. Markov chains
If a = 0, then g1 = 0, hence
g = (0, 0, 0, 1).
If a = 0, then g4 =
1
a
g1, hence
1 =
4

i=1
gi = g1

1 + 2 +
1
2
+
1
a

=
7a + 2
2a
g1,
from which
g1 =
2a
7a + 2
,
and
g =
1
7a + 2
(2a, 4a, a, 2).
4) The event {T = 1} can only occur by the transition E2 −→ E4, so
P{T = 1} =
1
4
.
The event {T = 2} can only occur by the process
E2 −→ E3 −→ E4,
thus
P{T = 2} =
1
4
.
5) We can only obtain E4 to time 2k + 1 by repeating E2 −→ E1 −→ E2 in total k times, follows by
E2 −→ E4.
Analogously for 2k +2, with the modicication that we at last replace E2 −→ E4 by E2 −→ E3 −→
E4. It follows (cf. 4.) that
P{T = 2k + 1} =
1
4
·

1
2
k
= P{T = 2k + 2}, k ∈ N.
When we compare with 4., we see that this is also true for k = 0.
Download free ebooks at bookboon.com
Stochastic Processes 1
129
3. Markov chains
6) Using the results of 5. it follows by straightforward computations that
E{T} =
∞

n=1
n P{Tn} =
∞

k=0
((2k + 1)P{T = 2k + 1} + (2k + 2)P{T = 2k + 2})
=
∞

k=0
{(2k + 1) + (2k + 2)} ·
1
4

1
2
k
=
∞

k=0
1
4
(4k + 3)

1
2
k
=
∞

k=0

(k + 1) −
1
4
 
1
2
k
=
∞

=1
·

1
2
−1
−
1
4
∞

k=0

1
2
k
=
+
d
dx
∞

=0
x
,
x= 1
2
−
1
4
· 2 =

d
dx

1
1 − x
#
x= 1
2
−
1
2
=
1

1 −
1
2
2 −
1
2
= 4 −
1
2
=
7
2
.
©
UBS
2010.
All
rights
reserved.
www.ubs.com/graduates
Looking for a career where your ideas could really make a difference? UBS’s
Graduate Programme and internships are a chance for you to experience
for yourself what it’s like to be part of a global team that rewards your input
and believes in succeeding together.
Wherever you are in your academic career, make your future a part of ours
by visiting www.ubs.com/graduates.
You’re full of energy
and ideas. And that’s
just what we are looking for.
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
130
3. Markov chains
Example 3.51 A Markov chain of states E1, E2, E3, E4 and E5 has the stochastic matrix
P =
⎛
⎜
⎜
⎜
⎜
⎝
a 0 0 0 1 − a
1 0 0 0 0
1
2
1
2 0 0 0
0 0 1 0 0
0 1
2 0 1
2 0
⎞
⎟
⎟
⎟
⎟
⎠
,
where a is a constant in the interval [0, 1].
1. Find the values of a, for which the Markov chain is irreducible.
2. Find the values of a, for which the Markov chain is regular.
3. Find for every a the invariant probability vector.
At time t = 0 the process is in state E5. Let T denote the random variable, which indicates the time,
when the process for the first time is in state E1
4. Find P{T = k} for k = 2, 3, 4, and then the mean and variance of T.
Then assume that a  1, and that the process at t = 0 is in state E5.
Let U denote the random variable, which indicates the time when the process for the first time returns
to E5.
5. Find the mean of U.
1) If a = 1, then E1 is absorbing, and the Markov chain is not irreducible.
If a ∈ [0, 1[, then we have the transitions
E1 −→ E5 −→ E4 −→ E3 −→ E2 −→ E1,
and the Markov chain is irreducible for a ∈ [0, 1[.
2) If a ∈ [0, 1[, then e.g.
E1 −→ E5 −→ E4 −→ E3 −→ E2 −→ E1, 5 steps,
and
E1 −→ E5 −→ E4 −→ E3 −→ E1, 4 steps.
The largest common divisor for 4 and 5 is 1, so the Markov chain is regular.
Alternatively, let denote that pi,j  0, and let A denote that a is a factor (so A = 0 for
a = 0, and A = for a ∈ ]0, 1[). Then successively,
P =
⎛
⎜
⎜
⎜
⎜
⎝
A 0 0 0
0 0 0 0
0 0 0
0 0 0 0
0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
, P2
=
⎛
⎜
⎜
⎜
⎜
⎝
A 0 A
A 0 0 0
0 0 0
0 0 0
0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
,
Download free ebooks at bookboon.com
Stochastic Processes 1
131
3. Markov chains
P4
=
⎛
⎜
⎜
⎜
⎜
⎝
A A
A A A
A
A 0
0
⎞
⎟
⎟
⎟
⎟
⎠
, P8
=
⎛
⎜
⎜
⎜
⎜
⎝
A
⎞
⎟
⎟
⎟
⎟
⎠
,
and we see that all elements of e.g. P16
are  0, so the Markov chain is regular for a ∈ [0, 1[.
3) The equations of the invariant probability vector are
p1 = a p1 + p2 +
1
2
p3,
p2 =
1
2
p3 +
1
2
p5,
p3 = p4,
p4 =
1
2
p5,
p5 = (1 − a)p1.
Then we get, expressed by p1,
p5 = (1 − a)p1,
p3 = p4 =
1
2
p5 =
1
2
(1 − a)p1,
p2 =
1
2
p3 +
1
2
p5 =
3
4
(1 − a)p1.
thus
1 = p1 + p2 + p3 + p4 + p5 = p1 +

3
4
+
1
2
+
1
2
+ 1

(1 − a)p1 = p1

1 +
11
4
(1 − a)

,
hence
p1 =
4
15 − 11a
, p2 =
3(1 − a)
15 − 11a
, p3 = p4 =
2(1 − a)
15 − 11a
, p5 =
4(1 − a)
15 − 11a
,
and the invariant probability vector is
p =
1
15 − 11a
(4, 3(1 − a), 2(1 − a), 2(1 − a), 4(1 − a)).
4) We have here the tree
E5
1
2
−→ E2
1
−→ E1 E2
1
−→ E1,
1
2  1
2 
E4
1
−→ E3
1
2
−→ E1
Download free ebooks at bookboon.com
Stochastic Processes 1
132
3. Markov chains
from which we conclude that
P{T = 2} =
1
2
· 1 =
1
2
,
P{T = 3} =
1
2
· 1 ·
1
2
=
1
4
,
P{T = 4} =
1
2
· 1 ·
1
2
· 1 =
1
4
.
Notice that the sum is 1, so there is no other possibility.
The mean is
E{T} = 2 ·
1
2
+ 3 ·
1
4
+ 4 ·
1
4
=
11
4
.
It follows from
E
%
T2

= 4 ·
1
2
+ 9 ·
1
4
+ 16 ·
1
4
=
33
4
,
that the variance is
V {T} =
33
4
−

11
4
2
=
132 − 121
16
=
11
16
.
Please
click
the
advert
Download free ebooks at bookboon.com
Stochastic Processes 1
133
3. Markov chains
5) We can only reach E5 via E1, and since
P {E1 −→ E5 i k steps} = P
'
E1
a
−→ E1
a
−→ E1
a
−→ · · ·
a
−→ E1
1−a
−→ E5
(
= ak−1
(1 − a),
we get
E{U} = E{T} +
∞

k=1
k ak−1
(1 − a) =
11
4
+ (1 − a) ·
1
(1 − a)2
=
11
4
+
1
1 − a
=
15 − 11a
4(1 − a)
=
1
p5
.
Example 3.52 Given a Markov chain of 5 states E1, E2, E3, E4 and E5 and transition probabilities
p1,1 = p2,2 = p3,3 =
2
3
, p1,2 = p2,3 =
1
3
,
p3,4 =
a
3
, p3,5 =
1 − a
3
, p4,5 = p5,1 = 1,
and pi,j = 0 oherwise.
Here a is a constant in the interval [0, 1].
1. Find the stochastic matrix P.
2. Find the values of a, for which the Markov chain is irreducible.
3. Find the values of a, for which the Markov chain is regular.
4. Find for every a the invariant probability vector.
We assume that the process at time t = 0 is in state E1. Let T denote the random variable, which
indicates the time, when the process for the first time is in state E2.
5. Find the probabilities P{T = k}, k ∈ N, and then the mean E{T}.
Then assume instead that the process at time t = 0 is in state E3. Let U denote the random variable,
which indicates the time, when the process for the first time is in state E5.
6. Find the mean E{U}.
1) If a ∈ [0, 1], then
P =
⎛
⎜
⎜
⎜
⎜
⎝
2
3
1
3 0 0 0
0 2
3
1
3 0 0
0 0 2
3
a
3
1−a
3
0 0 0 0 1
1 0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
.
2) We shall consider three cases:
(a) a = 0, (b) a = 1, (c) 0  a  1.
Download free ebooks at bookboon.com
Stochastic Processes 1
134
3. Markov chains
a) If a = 0, then we get the matrix
P =
⎛
⎜
⎜
⎜
⎜
⎝
2
3
1
3 0 0 0
0 2
3
1
3 0 0
0 0 2
3 0 1
3
0 0 0 0 1
1 0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
.
We notice that the fourth column is the zero column, so we can never get to E4 from any Ei,
hence the Markov chain is not irreducible for a = 0.
b) If a = 1, then we get the matrix
P =
⎛
⎜
⎜
⎜
⎜
⎝
2
3
1
3 0 0 0
0 2
3
1
3 0 0
0 0 2
3
1
3 0
0 0 0 0 1
1 0 0 0 0
⎞
⎟
⎟
⎟
⎟
⎠
.
Then we have in particular the diagram
E1 −→ E2 −→ E3 −→ E4 −→ E5 −→ E1,
and the Markov chain is irreducible for a = 1.
c) If 0  a  1, then we have in particular the diagram
E1 −→ E2 −→ E3 −→ E4 −→ E5 −→ E1,
and the Markov chain is irreducible for 0  a  1.
Summing up, the Markov chain is irreducible for 0  a ≤ 1.
3) Since the Markov chain is irreducible for 0  a ≤ 1, and p1,1 =
2
3
 0, it follows that the Markov
chain is also regular for 0  a ≤ 1.
4) The equation of the invariant probability vector g is
g P = g,
which we expand as
⎧
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎩
p1 = 2
3 p1 + p5,
p2 = 1
3 p1 + 2
3 p2,
p3 = 1
3 p2 + 2
3 p3,
p4 = a
3 p3,
p5 = 1−a
3 p3 + p4,
thus
⎧
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎩
p1 = 3p5,
p2 = p1,
p3 = p2,
p4 = a
3 p3,
p5 = 1−a
3 p3 + p4.
When p1, . . . , p4 are expressed by p5, then
p1 = p2 = p3 = 3p5 and p4 =
a
3
p3 = a p5,
thus
g = p5 (3, 3, 3, a, 1)
Download free ebooks at bookboon.com
Stochastic Processes 1
135
3. Markov chains
where we have the condition
1 = p1 + p2 + p3 + p4 + p5 = p5 (3 + 3 + 3 + a + 1) = (10 + a)p5,
so
p5 =
1
10 + a
,
and
g =
1
10 + a
(3, 3, 3, a, 1).
5) We start for t = 0 at E1, corresponding to the diagram
E1
2
3
−→ E1
2
3
−→ E1
2
3
−→ E1
2
3
−→ E1 −→ · · ·
 1
3  1
3  1
3  1
3 
E2 E2 E2 E2 · · ·
t = 0 t = 1 t = 2 t = 3 t = 4 · · ·
It follows that
P{T = k} =
1
3

2
3
k−1
, k ∈ N,
thus T is geometrically distributed, T ∈ Pas

1,
1
3

, p =
1
3
.
The mean is
E{T} =
1
p
= 3.
Alternatively,
E{T} =
∞

k=1
k P{T = k} =
1
3
∞

k=1
k

2
3
k−1
=
1
3
·
1

1 −
2
3
2 = 3.
6) Here we get the diagram
E5 E4
1
−→ E5 E4 −→
1−a
3  a
3  1−a
3  a
3  
E3
2
3
−→ E3
2
3
−→ E3
2
3
−→ E3
2
3
−→ E3 −→
 2
3  1−a
3  a
3  1−a
3 
E4
1
−→ E5 E4
1
−→ E5
A simple counting gives
P{U = 1} =
1 − a
3
,
P{U = 2} = P {E3 → E3} · P {E3 → E5} + P {E3 → E4} · P {E4 → E5}
=
2
3
·
1 − a
3
+
a
3
· 1 =
2 − 2a
9
+
a
3
=
2 + a
9
.
Download free ebooks at bookboon.com
Stochastic Processes 1
136
3. Markov chains
Now P{U = 3} is obtained by the paths
E3
2
3
−→ E3
2
3
−→ E3
1
−
a
3
−→ E5, probability
2
3
·
2
3
·
1 − a
3
=
2
3
·
2 − 2a
9
,
E3
2
3
−→ E3
a
3
−→ E4
1
−→ E5, probability
2
3
·
a
3
=
2
3
·
3a
9
,
so
P{U = 3} =
2
3
·
2 + a
9
=

2
3
1
· P{U = 2}.
Then repeat the pattern
P{U = 4} = P {E3 → E3} · P{U = 3} =

2
3
2
P{U = 2},
and in general
P{U = k} =

2
3
k−2
P{U = 2} =
2 + a
9

2
3
k−2
for k ≥ 2.
The mean is
E{U} =
∞

k=1
k P{U = k} =
1 − a
3
· 1 +
2 + a
9
∞

k=2
k

2
3
k−2
=
1 − a
3
+
2 + a
9
∞

k=1
(k + 1)

2
3
k−1
=
1 − a
3
+
2 + a
9
∞

k=1

2
3
k−1
+
2 + a
9
∞

k=1
k

2
3
k+1
=
1 − a
3
+
2 + a
9
·
1
1 −
2
3
+
2 + a
9
·
1

1 −
2
3
2
=
1 − a
3
+
2 + a
3
+ (2 + a) = 3 + a.
Download free ebooks at bookboon.com
Stochastic Processes 1
137
Index
Index
absorbing state, 13, 25
Arcus sinus law, 10
closed subset of states, 13
convergence in probability, 28
cycle, 22
discrete Arcus sinus distribution, 10
distribution function of a stochastic process, 4
double stochastic matrix, 22, 39
drunkard’s walk, 5
Ehrenfest’s model, 32
geometric distribution, 124, 133
initial distribution, 11
invariant probability vector, 11, 22, 23, 25, 26,
28, 30, 32, 36, 39
irreducible Markov chain, 12, 18–23, 32, 36, 39,
41, 43, 45, 47, 50, 53, 62, 65, 67, 70,
73, 75, 78, 80, 86, 88, 91, 93, 98, 103,
106, 108, 114, 116, 122, 125, 128, 131
irreducible stochastic matrix, 83, 120
limit matrix, 13
Markov chain, 10, 18
Markov chain of countably many states, 101
Markov process, 5
outcome, 5
periodic Markov chain, 14
probability of state, 11
probability vector, 11
random walk, 5, 14, 15
random walk of reflecting barriers, 14
random walk with absorbing barriers, 14
regular Markov chain, 12, 18–23, 36, 39, 43, 47,
50, 53, 56, 62, 65, 67, 70, 73, 75, 78,
80, 83, 86, 88, 91, 100, 101, 103, 106,
108, 114, 116, 122, 125, 128, 131
regular stochastic matrix, 26, 30, 120
ruin problem, 7
sample function, 4
state of a process, 4
stationary distribution, 11, 43, 50
stationary Markov chain, 10
stochastic limit matrix, 13
stochastic matrix, 10
stochastic process, 4
symmetric random walk, 5, 9
transition probability, 10, 11
vector of state, 11

  • 7. Download free ebooks at bookboon.com Stochastic Processes 1 7 1. Stochastic process; theoretical background A Markov process is a discrete stochastic process of values in N0, for which also P {X (tn+1) = kn+1 | X (tn) = kn ∧ · · · ∧ X (t1) = k1} = P {X (tn+1) = kn+1 | X (tn) = kn} for any k1,. . . , kn+1 in the range, for any t1 < t2 < · · · < tn+1 from T, and for any n ∈ N. We say that when a Markov process is going to be described at time tn+1, then we have just as much information, if we know the process at time tn, as if we even know the process at the times t1, . . . , tn, provided that these times are all smaller than tn+1. One may coin this in the following way: If the present is given, then the future is independent of the past. 1.2 Random walk Consider a sequence (Xk) of mutually independent identically distributed random variables, where the distribution is given by P {Xk = 1} = p and P {Xk = −1} = q, p, q > 0 and p + q = 1andk ∈ N. We define another sequence of random variables (Sn) by S0 = 0 and Sn = S0 + n k=1 Xk, for n ∈ N. In this special construction the new sequence (Sn) +∞ n=0 is called a random walk. In the special case of p = q = 1 2 , we call it a symmetric random walk. An outcome of X1, X2, . . . , Xn is a sequence x1, x2, . . . , xn, where each xk is either 1 or −1. A random walk may be interpreted in several ways, of which we give the following two: 1) A person walks on a road, where he per time unit with probability p takes one step to the right and with probability q takes one step to the left. At time 0 the person is at state E0. His position at time n is given by the random variable Sn. If in particular, p = q = 1 2 , this process is also called the “drunkard’s walk”. 2) Two persons, Peter and Paul, are playing a series of games. In one particular game, Peter wins with probability p, and Paul wins with probability q. After each game the winner receives 1 $ from the loser. We assume at time 0 that they both have won 0 $. Then the random variable Sn describes Peter’s gain (positive or negative) after n games, i.e. at time n. We mention Theorem 1.1 (The ballot theorem). At an election a candidate A obtains in total a votes, while another candidate B obtains b votes, where b a. The probability that A is leading during the whole of the counting is equal to a − b a + b . Let Peter and Paul be the two gamblers mentioned above. Assuming that Peter to time 0 has 0 $, then the probability of Peter at some (later) time having the sum of 1 $ is given by α = min 1 , p q ,
  • 8. Download free ebooks at bookboon.com Stochastic Processes 1 8 1. Stochastic process; theoretical background hence the probability of Peter at some (later) time having the sum of N $, where N 0, is given by αN = min 1 , p q N . The corresponding probability that Paul at some time has the sum of 1 $ is β = min 1 , q p , and the probability that he at some later time has a positive sum of N $ is βN = min 1 , q p N . Based on this analysis we introduce pn := P{return to the initial position at time n}, n ∈ N, fn := P{the first return to the initial position at time n}, n ∈ N, f := P{return to the initial position at some later time} = +∞ n=1 fn. Notice that pn = fn = 0, if n is an odd number. We shall now demonstrate how the corresponding generating functions profitably can be applied in such situation. Thus we put P(s) = +∞ n=0 pn sn and F(s) = +∞ n=0 fn sn , where we have put p0 = 1 and f0 = 0. It is easily seen that the relationship between these two generating functions is F(s) = 1 − 1 P(s) . Then by the binomial series P(s) = 1 1 − 4pqs2 , so we conclude that F(s) = +∞ k=1 1 2k − 1 2k k (pq)k s2k , which by the definition of F(s) implies that f2k = 1 2k − 1 2k k (pq)k = p2k 2k − 1 .
  • 9. Download free ebooks at bookboon.com Stochastic Processes 1 9 1. Stochastic process; theoretical background Furthermore, f = lim s→1− F(s) = 1 − 1 − 4pq = 1 − |1 − 2p| = ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ 2p, for p 1 2 , 1, for p = 1 2 , 2q, for p 1 2 . In the symmetric case, where p = 1 2 , we define a random variable T by T = n, if the first return occurs at time n. Then it follows from the above that T has the distribution P{T = 2k} = f2k and P{T = 2k − 1} = 0, for k ∈ N. The generating function is F(s) = 1 − 1 − s2, hence E{T} = lim s→1− F(s) = +∞, which we formulate as the expected time of return to the initial position is +∞. 1.3 The ruin problem The initial position is almost the same as earlier. The two gamblers, Peter and Paul, play a series of games, where Peter has the probability p of winning 1 $ from Paul, while the probability is q that he loses 1 $ to Paul. At the beginning Peter owns k $, and Paul owns N − k $, where 0 k N. The games continue, until one of them is ruined. The task here is to find the probability that Peter is ruined. Let ak be the probability that Peter is ruined, if he at the beginning has k $, where we allow that k = 0, 1, . . . , N. If k = 0, then a0 = 1, and if k = N, then aN = 0. Then consider 0 k N, in which case ak = p ak+1 + q ak−1. We rewrite this as the homogeneous, linear difference equation of second order, p ak+1 − ak + q ak−1 = 0, k = 1, 2, . . . , N − 1. Concerning the solution of such difference equations, the reader is referred to e.g. the Ventus: Calculus 3series. We have two possibilities:
  • 10. Download free ebooks at bookboon.com Stochastic Processes 1 10 1. Stochastic process; theoretical background 1) If p = 1 2 , then the probability for Peter being ruined, if he starts with k $, is given by ak = q p k − q p N 1 − q p N , for k = 0, 1, 2, . . . , N. 2) If instead p = q = 1 2 , then ak = N − k N , for k = 1, 2, . . . , N. We now change the problem to finding the expected number of games μk, which must be played before one of the two gamblers is ruined, when Peter starts with the sum of k $. In this case, μk = p μk+1 + q μk−1 + 1, for k = 1, 2, . . . , N − 1. Please click the advert
  • 11. Download free ebooks at bookboon.com Stochastic Processes 1 11 1. Stochastic process; theoretical background We rewrite this equation as an inhomogeneous linear difference equation of second order. Given the boundary conditions above, its solution is 1) For p = 1 2 we get μk = k q − p − N q − p · 1 − q p k 1 − q p N , for k = 0, 1, 2, . . . , N. 2) For p = q = 1 2 we get instead μk = k(N − k), for k = 0, 1, 2, . . . , N. In the special case, where we consider a symmetric random walk, i.e.. p = 1 2 , we sum up the results: Let (Xk) be a sequence of mutually independent identically distributed random variables of distribu- tion given by P {Xk = 1} = P {Xk = −1} = 1 2 , for n ∈ N. In this case, the random variables S0 = 0 and S2n = S0 + 2n k=1 Xk have the distribution given by p2n,2r := P {S2n = 2r} = 2n n + r 2−2n , for r = −n, −n + 1, . . . , n, n ∈ N. In particular, u2n := p2n,0 = 2n n e−2n ∼ 1 √ πn for large n . Then we define a random variable T by T = n, if the first return to E0 occurs to time n. This random variable has the values 2, 4, 6, . . . , with the probabilities f2n := P{T = 2n} = u2n 2n − 1 = 1 2n − 1 2n n 2−2n , for n ∈ N, where E{T} = +∞. For every N ∈ Z there is the probability 1 for the process reaching state EN to some later time. Finally, if the process is at state Ek, 0 k N to time 0, then the process reaches state E0 before EN with the probability 1 − k N , and it reaches state EN before E0 with the probability k N . The expected time for the process to reach either E0 or EN from Ek is k(N − k).
  • 12. Download free ebooks at bookboon.com Stochastic Processes 1 12 1. Stochastic process; theoretical background Theorem 1.2 (The Arcus sinus law for the latest visit). The probability that the process up to time 2n the last time is in state E0 to time 2k, is given by α2k,2n = u2k · u2n−2k, where we have put u2n = P {S2n = 0}. The distribution which has the probability α2k,2n at the point 2k, where 0 ≤ k ≤ n, is also called the discrete Arcus sinus distribution of order n. The reason for this name is the following: If β and γ are given numbers, where 0 β γ 1, then P {last visit of E0 is between 2βn and 2γn} = βn≤k≤γn u2k · u2n−2k ∼ β≤ k n ≤γ 1 n · 1 π k n 1 − k n ∼ γ β 1 π x(1 − x) dx = 2 π Arcsin √ γ − 2 π Arcsin β, where we recognize the sum as a mean sum of the integral of Arcsin. This implies that P {last visit of E0 before 2nx} ∼ 2 π Arcsin √ x, for x ∈ ]0 , 1[. One interpretation of this result is that if Peter and Paul play many games, then there is a large probability that one of them is almost all the time on the winning side, and the other one is almost all the time on the losing side. 1.4 Markov chains A Markov chain is a (discrete) stochastic process {X(t) | t ∈ N0}, which has a finite number of states, e.g. denoted by E1, E2, . . . , Em, and such that for any 1 ≤ k0, k1, . . . , kn ≤ m and every n ∈ N, P {X(n) = kn | X(n − 1) = kn−1 ∧ · · · ∧ X(0) = k0} = P {X(n) = kn | X(n − 1) = kn−1} . If furthermore the Markov chain satisfies the condition that the conditional probabilities pij := P{X(n) = j | X(n − 1) = i} do not depend on n, we call the process a stationary Markov chain. We shall in the following only consider stationary Markov chains, and we just write Markov chains, tacitly assuming that they are stationary. A Markov chain models the situation, where a particle moves between the m states E1, E2, . . . , Em, where each move happens at discrete times t ∈ N. Then pij represents the probability that the particle in one step moves from state Ei to state Ej. In particular, pii is the probability that the particle stays at state Ei. We call the pij the transition probabilities. They are usually lined up in a stochastic matrix: P = ⎛ ⎜ ⎜ ⎜ ⎝ p11 p12 · · · p1m p21 p22 · · · p2m . . . . . . . . . pm1 pm2 · · · pmm ⎞ ⎟ ⎟ ⎟ ⎠ .
  • 13. Download free ebooks at bookboon.com Stochastic Processes 1 13 1. Stochastic process; theoretical background In this matrix the element pij in the i-th row and the j-th column represents the probability for the transition from state Ei to state Ej. For every stochastic matrix we obviously have pij ≥ 0 for every i and j, j pij = 1 for every i, thus all sums of the rows are 1. The probabilities of state p (n) i are defined by p (n) i := P{X(n) = i}, for i = 1, 2 . . . , m og n ∈ N0. The corresponding vector of state is p(n) := p (n) 1 , p (n) 2 , . . . , , p(n) m , for n ∈ N0. In particular, the initial distribution is given by p(0) = p (0) 1 , p (0) 2 , . . . , , p(0) m . Then by ordinary matrix computation (note the order of the matrices), p(n) = p(n−1) P, hence by iteration p(n) = p(0) Pn , proving that the probabilities of state at time t = n is only determined by the initial condition p(0) and the stochastic matrix P, iterated n timers. The elements p (n) ij in Pn are called the transition probabilities at step n, and they are given by p (n) ij = P{X(k + n) = j | X(k) = i}. We define a probability vector α = (α1, α2, . . . , αm) as a vector, for which αi ≥ 0, for i = 1, 2, . . . , m, and m i=1 αi = 1. A probability vector α is called invariant with respect to the stochastic matrix P, or a stationary distribution of the Markov chain, if α P = α. The latter name is due to the fact that if X(k) has its distribution given by α, then every later X(n + k), n ∈ N has also its distribution given by α. In order to ease the computations in practice we introduce the following new concepts:
  • 14. Download free ebooks at bookboon.com Stochastic Processes 1 14 1. Stochastic process; theoretical background 1) We say that a Markov chain is irreducible, if we to any pair of indices (i, j) can find an n = nij ∈ N, such that that pn ij 0. This means that the state Ej can be reached from state Ei, no matter the choice of (i, j). (However, all the nij ∈ N do not have to be identical). 2) If we even can choose n ∈ N independently of the pair of indices (i, j), we say that the Markov chain is regular. Remark 1.1 Notice that stochastic regularity has nothing to do with the concept of a regular matrix known from Linear Algebra. We must not confuse the two definitions. ♦ your chance to change the world Here at Ericsson we have a deep rooted belief that the innovations we make on a daily basis can have a profound effect on making the world a better place for people, business and society. Join us. In Germany we are especially looking for graduates as Integration Engineers for • Radio Access and IP Networks • IMS and IPTV We are looking forward to getting your application! To apply and for all current job openings please visit our web page: www.ericsson.com/careers Please click the advert
  • 15. Download free ebooks at bookboon.com Stochastic Processes 1 15 1. Stochastic process; theoretical background Theorem 1.3 Let P be an m × m regular stochastic matrix. 1) The sequence (Pn ) converges towards a stochastic limit matrix G for n → +∞. 2) Every row in G has the same probability vector g = (g1, g2, . . . , gm) , where gi 0 for every i = 1, 2, . . . , m. 3) If p is any probability vector, then p Pn → g for n → +∞. 4) The regular matrix P has precisely one invariant probability vector, g. The theorem shows that for a regular stochastic matrix P the limit distribution is uniquely determined by the invariant probability vector. It may occur for an irreducible Markov chain that Pn diverges for n → +∞. We have instead Theorem 1.4 Let P be an m × m irreducible stochastic matrix. 1) The sequence 1 n n i=1 Pi converges towards a stochastic limit matrix G for n → +∞. 2) Every row in G is the same probability vector g = (g1, g2, . . . , gm) , where gi 0 for ethvert i = 1, 2, . . . , m. 3) Given any probability vector p, then p 1 n n i=1 Pi → g for n → +∞. 4) The irreducible matrix P has precisely one invariant probability vector, namely g. Given a Markov chain of the m states E1, E2, . . . , Em with the corresponding stochastic matrix P. A subset C of the states E1, E2, . . . , Em is called closed, if no state outside C can be reached from any state in C. This can also be expressed in the following way: A subset C of the m states is closed, if for every Ei ∈ C and every Ej / ∈ C we have pij = 0. If a closed set only contains one state, C = {Ei}, we call Ei an absorbing state. This is equivalent to pii = 1, so we can immediately find the absorbing states from the numbers 1 in the diagonal of the stochastic matrix P. The importance of a closed set is described by the following theorem, which is fairly easy to apply in practice. Theorem 1.5 A Markov chain is irreducible, if and only if it does not contain any proper closed subset of states. A necessary condition of irreducibility of a Markov chain is given in the following theorem:
  • 16. Download free ebooks at bookboon.com Stochastic Processes 1 16 1. Stochastic process; theoretical background Theorem 1.6 Assume that a Markov chain of the m states E1, E2, . . . , Em is irreducible. Then to every pair of indices (i, j), 1 ≤ i, j ≤ m there exists an n = nij, such that 1 ≤ n ≤ m and p (n) ij 0. Concerning the proof of regularity we may use the following method, if the matrix P is not too complicated: Compute successively the matrices Pi , until one at last (hopefully) reaches a number i = n, where all elements of Pn are different from zero. Since we already know that all elements are ≥ 0, we can ease the computations by just writing ∗ for the elements of the matrices, which are = 0. We do not have to compute their exact values. But we must be very careful with the zeros. This method is of course somewhat laborious, and one may often apply the following theorem instead. Theorem 1.7 If a stochastic matrix P is irreducible, and there exists a positive element in the diag- onal pii 0, then P is regular. It is usually easy to prove that P is irreducible. The difficult part is to prove that it is also regular. We give here another result: We introduce for every m × m irreducible stochastic matrix P the following numbers di := largest common divisor of all n ∈ N, for which p (n) ii 0. It can be proved that d1 = d2 = · · · = dm := d, so we only have to find one single of the di-erne. (For one’s own convenience, choose always that di, which gives the easiest computations). Theorem 1.8 An irreducible Markov chain is regular, if and only if d = 1. If d 1, then the Markov chain is periodic of period d. One may consider a random walk on the set {1, 2, 3, . . . , N} as a Markov chain of the transition probabilities pi,i−1 = q and pi,i+1 = p, for i = 2, 3, . . . , N − 1, where p, q 0 og p + q = 1. 1) If p11 = pNN = 1, then we have a random walk of two absorbing barriers. 2) If p12 = pN,N−1 = 1, then we have a random walk of two reflecting barriers. In this case the corresponding Markov chain is irreducible.
  • 17. Download free ebooks at bookboon.com Stochastic Processes 1 17 2. Random walk 2 Random walk Example 2.1 Consider a ruin problem of total capital N $, where p 1 2 . In every game the loss/gain is only 50 cents. Is this game more advantageous for Peter than if the stake was 1 $ (i.e. a smaller probability that Peter is ruined)? We have 2N + 1 states E0, E1, . . . , E2N , where state Ei means that A has i 2 $. If A initially has k $, then a2N = 0 and a0 = 1. We get for the values in between, ak = p ak+1 + q ak−1, 0 k 2N, which we rewrite as p (ak+1 − ak) = q (ak − ak−1) . Hence by recursion, ak − ak−1 = q p (ak−1 − ak−2) = · · · = q p k−1 (a1 − a0) , and we get ak = (ak − ak−1) + (ak−1 − ak−2) + · · · + (a1 − a0) + a0 = q p k−1 + q p k−2 + · · · + q p 1 + 1 (a1 − a0) + a0 = 1 − q p k 1 − q p (a1 − a0) + a0 = q p k − 1 q p − 1 (a1 − a0) + a0. Now, a0 = 1, so we get for k = 2N that 0 = a2N = q p 2N − 1 q p − 1 (a1 − 1) + 1 = a1 q p 2N − 1 q p − 1 + 1 − q p 2N − 1 q p − 1 , hence by a rearrangement, a1 = q p − 1 q p 2N − 1 ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ q p 2N − 1 q p − 1 − 1 ⎫ ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎭ = q p q p 2N−1 − 1 q p 2N − 1 . Then by insertion, a2k = q p 2k − 1 q p − 1 ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ − q p − 1 q p 2N − 1 ⎫ ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎭ +1 = q p 2N − q p 2k q p 2N − 1 = q p N − q p k q p N − 1 · q p N + q p k q p N + 1 .
  • 18. Download free ebooks at bookboon.com Stochastic Processes 1 18 2. Random walk These expressions should be compared with ãk = q p N − q p k q p N − 1 from the ruin problem with 1 $ at stake in each game. Notice the indices ãk and a2k, because 1 $ = 2 · 50 cents. It clearly follows from q 1 2 p, that a2k ãk. Since the a indicate the probability that Peter is ruined, if follows that there is larger probability that he is ruined if the stake is 50 cents than if the stake is 1 $. what‘s missing in this equation? MAERSK INTERNATIONAL TECHNOLOGY SCIENCE PROGRAMME You could be one of our future talents Are you about to graduate as an engineer or geoscientist? Or have you already graduated? If so, there may be an exciting future for you with A.P. Moller - Maersk. www.maersk.com/mitas Please click the advert
  • 19. Download free ebooks at bookboon.com Stochastic Processes 1 19 2. Random walk Example 2.2 Peter and Paul play in total N games. In each game Peter has the probability 1 2 for winning (in which case he receives 1 $ from Paul) and probability 1 2 for losing (in which case he delivers 1 $ to Paul). The games are mutually independent of each other. Find the probability that Peter’s total gain never after the start of the games is 0 $. The probability that Peter’s gain is never 0 $, is 1 − P{return to the initial position at some time} = 1 − N n=1 fn, where fn = P{first return is to time n}. The parity assures that f2k−1 = 0, because we can only return to the initial position after an even number of steps. It follows from p = q = 1 2 that f2k = 1 2k − 1 2k k (p q)k = 1 2k − 1 2k k 1 4 k . By insertion we get the probability 1 − N n=1 fn = 1 − [N 2 ] k=1 1 2k − 1 2k k 1 4 k = ∞ k=[N 2 ]+1 1 2k − 1 2k k 1 4 k . Alternatively, we may use the following considerations which somewhat simplify the task. 1) If N = 2n is even, then the wanted probability is P {S1 = 0 ∧ S2 = 0 ∧ · · · ∧ S2n = 0} . This expression is equal to u2n, which again is equal to u2n = 2n n 2−2n ∼ 1 √ πn . 2) If N = 2n + 1 is odd, then S2n+1 is always = 0. Hence, the probability is P {S1 = 0 ∧ S2 = 0 ∧ · · · ∧ S2n = 0 ∧ S2n+1 = 0} = P {S1 = 0 ∧ S2 = 0 ∧ · · · ∧ S2n = 0} = 2n n 2−2n ∼ 1 √ πn according to the first question.
  • 20. Download free ebooks at bookboon.com Stochastic Processes 1 20 3. Markov chains 3 Markov chains Example 3.1 Let P be a stochastic matrix for a Markov chain of the states E1, E2, . . . , Em. 1) Prove that p (n1+n2+n3) ij ≥ p (n1) ik · p (n2) kk · p (n3) kj , for 1 ≤ i, j, k ≤ m, n1, n2, n3 ∈ N0. 2) Prove that if the Markov chain is irreducible and pii 0 for some i, then the Markov chain is regular. 1) Since Pn1+n2+n3 = Pn1 Pn2 Pn3 , and since all matrix elements are ≥ 0, it follows that p (n1+n2+n3) ij = m k=1 m =1 p (n1) ik p (n2) k p (n3) j ≥ p (n1) ik · p (n2) kk · p (n3) kj . 2) Assume that the Markov chain is irreducible and that there is an i, such that pii 0. Since p (n) ii ≥ (pii) n 0, we must have p (n) ii 0 for all n ∈ N0. Now, P is irreducible, so to every j there exists an n1, such that p (n1) ji 0, [index “i” as above], and to every k there exists an n2, such that also p (n2) ik 0. Then follow this procedure on all pairs of indices (j, k). If we choose N1 as the largest of the possible n1, and N2 as the largest of the possible n2, then it follows from 1. that p (N1+N2) jk ≥ p (n1) ji p (n2) ik p (N1−n1+N2−n−2) ii 0, where n1 = n1(j) and n2 = n2(k) depend on j and k, respectively. Hence all elements of PN1+N2 are 0, so the stochastic matrix P is regular.
  • 21. Download free ebooks at bookboon.com Stochastic Processes 1 21 3. Markov chains Example 3.2 Let P be an irreducible stochastic matrix. We introduce for every i the number di by di = largest common divisor of all n, for which p (n) ii 0. 1) Prove that di does not depend on i. We denote the common value by d. 2) Prove that if P is regular, then d = 1. 1) If we use Example 3.1 with j = i, we get p (n1+n2+n2) ii ≥ p (n1) ik p (n2) kk p (n3) ki , and analogously p (n1+n2+n3) kk ≥ p (n1) ik p (n2) ii p (n3) ki . Using that P is irreducible, we can find n1 and n3, such that p (n1) ik 0 and p (n3) ki 0. Let n1 and n3 be as small as possible. Then p (n1+n3) ii ≥ p (n1) ik 0 and p (n1+n3) kk ≥ p (n1) ik p (n3) ki 0. Hence, di|n1 + n3 and dk|n1 + n3, where “a |b ” means that a is a divisor in b. By choosing n2 = m dk, such that p (n2) kk 0, we also get p (n1+n2+n3) ii ≥ p (n1) ik p (n2) kk p (n3) ki 0, thus di|n1 + n2 + n3. We conclude that di|n2 = m · dk. If n2 = n di is chosen, such that p (n2) kk 0, then analogously dk|n1 + n2 + n3, thus dk|n · di. It follows that d1 is divisor in all numbers n2 = m · dk, for which p (n2) kk A0. Since dk is the largest common divisor, we must have di|dk. Analogously, dk|di, hence dk = di. Since i and k are chosen arbitrarily, we have proved 1.. 2) If P is regular, there exists an n ∈ N, such that all p (n) ij 0. Then also p (n+m) ij 0, m ∈ N0, and the largest common divisor is clearly 1, hence d = 1. The proof that conversely d = 1 implies that P is regular is given in Example 3.3.
  • 22. Download free ebooks at bookboon.com Stochastic Processes 1 22 3. Markov chains Example 3.3 . Let P be a stochastic matrix of an irreducible Markov chain E1, E2, . . . , En, and assume that d = 1 (cf. Example 3.2). Prove that there exists an N ∈ N, such that we for all n ≥ N and all i and j have p (n) ij 0 (which means that the Markov chain is regular). Hint: One may in the proof use without separate proof the following result from Number Theory: Let a1, a2, . . . , ak ∈ N have the largest common divisor 1. Then there exists an N ∈ N, such that for all n ≥ N there are integers c1(n), c2(n), . . . , ck(n) ∈ N, such that n = k j=1 cj(n) aj. Since P is irreducible, we can to every pair of indices (i, j) find nij ∈ N, such that p (nij ) ij 0. Since d = 1, we have for every index “ i ” a finite sequence ai1, ai2, . . . aini ∈ N, such that the largest common divisor is 1 and p (aij ) ii 0. Then by the result from Number Theory mentioned in the hint there exists an Ni, such that one to every n ≥ Ni can find ci1(n), . . . , cini (n) ∈ N, such that n = ni j=1 cij(n) aij. It all starts at Boot Camp. It’s 48 hours that will stimulate your mind and enhance your career prospects. You’ll spend time with other students, top Accenture Consultants and special guests. An inspirational two days packed with intellectual challenges and activities designed to let you discover what it really means to be a high performer in business. We can’t tell you everything about Boot Camp, but expect a fast-paced, exhilarating and intense learning experience. It could be your toughest test yet, which is exactly what will make it your biggest opportunity. Find out more and apply online. Choose Accenture for a career where the variety of opportunities and challenges allows you to make a difference every day. A place where you can develop your potential and grow professionally, working alongside talented colleagues. The only place where you can learn from our unrivalled experience, while helping our global clients achieve high performance. If this is your idea of a typical working day, then Accenture is the place to be. Turning a challenge into a learning curve. Just another day at the office for a high performer. Accenture Boot Camp – your toughest test yet Visit accenture.com/bootcamp Please click the advert
  • 23. Download free ebooks at bookboon.com Stochastic Processes 1 23 3. Markov chains Then let N ≥ max {nij} + max {Ni}. If n ≥ N, then p (n) ij ≥ p (nij ) ij p (n−pij ) jj . Since n − nij ≥ Ni, it follows that n − nij = nj k=1 c̃jk(n) ajk, and we conclude that p (n) ij ≥ p (nij ) ij p (n−nij ) jj ≥ p (nij ) ij k=1 nj p (ajk) jj c̃jk 0. This is true for every pair of indices (i, j), thus we conclude that P is regular. Example 3.4 Let P be an m × m stochastic matrix. 1) Prove that if P is regular, then P2 is also regular. 2) Assuming instead that P is irreducible, can one conclude that P2 is also irreducible? 1) If P is regular, then there is an N ∈ N, such that p (n) ij 0 for all n ≥ N and all i, j. In particular, p (2N) ij 0 for all (i, j). Now, p (2N) ij are the matrix elements of P2N = P2 N , thus P2 is also regular. 2) The answer is “no”! In fact, 0 1 1 0 is irreducible, while 0 1 1 0 0 1 1 0 = 1 0 0 1 is not. Example 3.5 Let P be an m × m stochastic matrix. Assume that P is irreducible, and that there is an i, such that p (3) ii 0 and p (5) ii 0. Prove that P is regular. The result follows immediately from Example 3.3, because the largest common divisor for 3 and 5 is di = 1. Notice that p (8) ii ≥ p (3) ii · p (5) ii 0, p (9) ii = p (3) ii 3 0, p (10) ii = p (5) ii 2 0, p (11) ii = p (5) ii p (3 ) ii 2 0, hence the succeeding p (n) ii 0, n ≥ 12, because one just multiply this sequence successively by P3 .
  • 24. Download free ebooks at bookboon.com Stochastic Processes 1 24 3. Markov chains Example 3.6 Let P be an m×m irreducible matrix. Prove for every pair (i, j) (where 1 ≤ i, j ≤ m) that there exists an n, depending on i and j, such that 1 ≤ n ≤ m and p (n) ij 0. When P is irreducible, we can get from every state Ei to any other state. When we sketch the graph, we see that it must contain a cycle, Ei1 → Ei2 → · · · → Eim → Ei1 , where (i1, i2, . . . , im) is a permutation of (1, 2, . . . , m). It follows that we can get from every Ei to any other Ej in n steps, where 1 ≤ n ≤ m. This means that the corresponding matrix Pn has p (n) ij 0. Example 3.7 Let P be a stochastic matrix for an irreducible Markov chain of the states E1, E2, . . . , Em. Given that P has the invariant probability vector g = 1 m , 1 m , . . . , 1 m . Prove that P is double stochastic. The condition g P = g is written 1 m m j=1 pij = 1 m , thus m j=1 pij = 1. This proves that the sum of every column is 1, thus the matrix is double stochastic. Example 3.8 Given a regular Markov chain of the states E1, E2, . . . , Em and with the stochastic matrix P. Then lim n→∞ p (n) ij = gi, where g = (g1, g2, . . . , gm) is the uniquely determined invariant probability vector of P. Prove that there exist a positive constant K and a constant a ∈ ]0, 1[, such that ! ! !p (n) ij − gj ! ! ! ≤ K an for i, j = 1, 2, . . . , m and n ∈ N. If P is regular, then there is an n0, such that p (n) ij 0 for all n ≥ n0 and all i, j = 1, 2, . . . , m. Let j be the column vector which has 1 in its row number j and 0 otherwise. Then Pn j = p (n) 1j , . . . , p (n) mj T . If 0 ε := min i, j p (n0) ij clearly 1 2 #
  • 25. Download free ebooks at bookboon.com Stochastic Processes 1 25 3. Markov chains and mj n = min i p (n) ij and Mj n = max i p (n) ij , then Mj n − mj n ≤ (1 − 2 ε)n = an for all j, and mj n ≤ ⎧ ⎨ ⎩ qj p (n) ij ⎫ ⎬ ⎭ Mj n for n ≥ n0. Thus ! ! !p (n) ij − gj ! ! ! ≤ Mj n − mj n ≤ an for all i, j and all n ≥ n0. Now, ! ! !p (n) ij − gj ! ! ! ≤ 1 for all n ∈ N. We therefore get the general inequality if we put K = (1 − 2 ε)−n0 = 1 a n0 . Notice that since 0 ε 1 2 , we have a = 1 − 2 ε ∈ ]0, 1[. Example 3.9 Given a stochastic matrix by P = ⎛ ⎝ 0 0 1 0 0 1 1 3 2 3 0 ⎞ ⎠ . Prove that P is irreducible, but not regular. Find the limit matrix G. Compute P2 and find all invariant probability vectors for P2 . We conclude from the matrix that E1 ↔ E3 ↔ E2, thus P is irreducible. We conclude from P2 = ⎛ ⎝ 0 0 1 0 0 1 1 3 2 3 0 ⎞ ⎠ ⎛ ⎝ 0 0 1 0 0 1 1 3 2 3 0 ⎞ ⎠ = ⎛ ⎝ 1 3 2 3 0 1 3 2 3 0 0 0 1 ⎞ ⎠ and P3 = ⎛ ⎝ 1 3 2 3 0 1 3 2 3 0 0 0 1 ⎞ ⎠ ⎛ ⎝ 0 0 1 0 0 1 1 3 2 3 0 ⎞ ⎠ = ⎛ ⎝ 0 0 1 0 0 1 1 3 2 3 0 ⎞ ⎠ = P,
  • 26. Download free ebooks at bookboon.com Stochastic Processes 1 26 3. Markov chains that P2n+1 = P and P2n = P2 . Since every Pn contains zeros, we conclude that P is not regular. The solution of g P = g, g1 + g2 + g3 = 1, is the solution of 1 3 g3 = g1, 2 3 g3 = g2, g1 + g2 = g3, g1 + g2 + g3 = 1, thus g = 1 6 , 1 3 , 1 2 , and the limit matrix is G = ⎛ ⎜ ⎝ 1 6 1 3 1 2 1 6 1 3 1 2 1 6 1 3 1 2 ⎞ ⎟ ⎠ . Alternatively, G = lim n→∞ 1 n n i=1 Pi = lim n→ 1 2n n i=1 P2i−1 + n i=1 Pn = 1 2 P + 1 2 P2 = 1 2 ⎛ ⎝ 1 3 2 3 1 1 3 2 3 1 1 3 2 3 1 ⎞ ⎠ = ⎛ ⎝ 1 6 1 3 1 2 1 6 1 3 1 2 1 6 1 3 1 2 ⎞ ⎠ . Clearly, P2 is not irreducible (E3 ↔ E3). In Paris or Online International programs taught by professors and professionals from all over the world BBA in Global Business MBA in International Management / International Marketing DBA in International Business / International Management MA in International Education MA in Cross-Cultural Communication MA in Foreign Languages Innovative – Practical – Flexible – Affordable Visit: www.HorizonsUniversity.org Write: Admissions@horizonsuniversity.org Call: 01.42.77.20.66 www.HorizonsUniversity.org Please click the advert
  • 27. Download free ebooks at bookboon.com Stochastic Processes 1 27 3. Markov chains The probability vectors for P2 are the solutions of g P2 = g, g1 + g2 + g3 = 1, gi ≥ 0, thus g1 = 1 3 g1 + 1 3 g2, g2 = 2 3 g1 + 2 3 g2, g3 = g3, g1 + g2 + g3 = 1, and hence g = (x, 2x, 1 − 3x), x ∈ 0, 1 3 # . Remark 3.1 We see that in P2 we have the closed set {E1, E2}, and {E3} is another closed set. ♦ Example 3.10 A Markov chain of three states E1, E2 and E3 has the stochastic matrix P = ⎛ ⎜ ⎝ 1 0 0 0 1 2 1 2 1 3 2 3 0 ⎞ ⎟ ⎠ . Prove that the state E1 is absorbing, and prove that the only invariant probability vector for P is g = (1, 0, 0). Prove for any probability vector p that p Pn → g for n → ∞. Obviously E1 is an absorbing state. We get the equations for the probability vectors from g = g P, i.e. g1 = g1 + 1 3 g3, g2 = 1 2 g2 + 2 3 g3, g3 = 1 2 g2. It follows from the former equation that g3 = 0, which by insertion into the latter one implies that g2 = 0, so g = (1, 0, 0) is the only invariant probability vector. Let p be any probability vector. We put p Pn = p (n) 1 , p (n) 2 , p (n) 3 . Then p (n+1) 1 , p (n+1) 2 , p (n+1) 3 = p (n) 1 , p (n) 2 , p (n) 3 P, implies that p (n+1) 1 = p (n) 1 + 1 3 p (n) 3 , p (n+1) 2 = 1 2 p (n) 2 + 2 3 p (n) 3 , p (n+1) 3 = 1 2 p (n) 2 . In particular, p (n) 1 is (weakly) increasing, and since the sequence is bounded ≤ 1, we get lim n→∞ p (n) 1 = p1 ≤ 1.
  • 28. Download free ebooks at bookboon.com Stochastic Processes 1 28 3. Markov chains By taking the limit of the first coordinate it follows that p (n) 3 is also convergent, and that lim n→∞ p (n) 3 = lim n→∞ p (n+1) 1 − lim n→ p (n) 1 = p1 − p1 = 0. Finally we get from the third coordinate that p (n) 2 is convergent, lim n→∞ p (n) 2 = 2 lim n→∞ p (n+1) 3 = 0. Then p1 = 1, and we have proved that p Pn → (1, 0, 0) = g for n → ∞. Example 3.11 1) Find for every a ∈ [0, 1] the invariant probability vector(s) (g1, g2, g3) of the stochastic matrix P = ⎛ ⎝ 0 1 0 0 a 1 − a 1 3 0 2 3 ⎞ ⎠ . 2) Prove for every a ∈ ]0, 1[ that Q = ⎛ ⎜ ⎜ ⎝ 1 2 0 1 2 0 1 3 0 2 3 0 0 a 0 1 − a 0 3 4 0 1 4 ⎞ ⎟ ⎟ ⎠ is a regular stochastic matrix, and find the invariant probability vector (g1, g2, g3, g4) of Q. 1) We write the equation g P = g as the following system of equations, ⎧ ⎨ ⎩ g1 = 1 3 g3, g2 = g1 + a g2 g3 = (1 − a)g2 + 2 3 g3. Thus g3 = 3 g1 and g1 = 1 − a, so 1 + g1 + g2 + g3 = g2 {1 + 1 − a + 3 − 3a} = g2(5 − 4a). If a ∈ [0, 1], then g2 = 1 5 − 4a ∈ 1 5 , 1 # for a ∈ [0, 1], and the probability vector is g = 1 − a 5 − 4a , 1 5 − 4a , 3 − 3a 5 − 4a .
  • 29. Download free ebooks at bookboon.com Stochastic Processes 1 29 3. Markov chains 2) When 0 a 1, it follows from Q2 = ⎛ ⎜ ⎜ ⎝ 1 2 0 1 2 0 1 3 0 2 3 0 0 a 0 1 − a 0 3 4 0 1 4 ⎞ ⎟ ⎟ ⎠ ⎛ ⎜ ⎜ ⎝ 1 2 0 1 2 0 1 3 0 2 3 0 0 a 0 1 − a 0 3 4 0 1 4 ⎞ ⎟ ⎟ ⎠ = ⎛ ⎜ ⎜ ⎜ ⎝ 1 4 a 2 1 4 1 2 (1 − a) 1 6 2 3 a 1 6 2 3 (1 − a) a 3 3 4 (1 − a) 2 3 a 1 4 (1 − a) 1 4 3 16 1 2 1 6 ⎞ ⎟ ⎟ ⎟ ⎠ , that all elements of Q2 are 0, hence Q is regular. Then g = g Q implies that g1 = 1 2 g1 + 1 3 g2 g2 = a g3 + 3 4 g4, g3 = 1 2 g1 + 2 3 g2 g4 = (1 − a)g3 + 1 4 g4. We get from the first equation that 1 2 g1 = 1 3 g2, hence g2 = 3 2 g1. By adding the first and the third equation we get g1 + g3 = g1 + g2, so g2 = g3. Finally, we conclude from the fourth equation that 3 4 g4 = (1 − a)g3, thus g4 = 4 3 (1 − a)g3 = 4 3 (1 − a) 3 2 g1 = 2(1 − a)g1, and 1 = g1 + g2 + g3 + g4 = g1 1 + 3 2 + 3 2 + 2(1 − a) = g1{4 + 2(1 − a)} = g1(6 − 2a), By 2020, wind could provide one-tenth of our planet’s electricity needs. Already today, SKF’s innovative know- how is crucial to running a large proportion of the world’s wind turbines. Up to 25 % of the generating costs relate to mainte- nance. These can be reduced dramatically thanks to our systems for on-line condition monitoring and automatic lubrication. We help make it more economical to create cleaner, cheaper energy out of thin air. By sharing our experience, expertise, and creativity, industries can boost performance beyond expectations. Therefore we need the best employees who can meet this challenge! The Power of Knowledge Engineering Brain power Plug into The Power of Knowledge Engineering. Visit us at www.skf.com/knowledge Please click the advert
  • 30. Download free ebooks at bookboon.com Stochastic Processes 1 30 3. Markov chains hence g1 = 1 6 − 2a , and the invariant probability vector is g = 1 6 − 2a , 3 12 − 4a , 3 12 − 4a , 1 − a 3 − a . Example 3.12 Given a Markov chain of four states E1, E2, E3 and E4 and with the stochastic matrix P = ⎛ ⎜ ⎜ ⎝ 1 4 1 4 1 4 1 4 0 1 2 1 4 1 4 0 0 3 4 1 4 0 0 0 1 ⎞ ⎟ ⎟ ⎠ . 1. Find for P its invariant probability vector(s). 2. Prove for a randomly chosen initial distribution p(0) = (α0, β0, γ0, δ0) for the distribution p(n) = (αn, βn, γn, δn) that αn + βn + γn = 3 4 n (α0 + β0 + γ0) . 3. Let p(0) = (1, 0, 0, 0). Find p(1) , p(2) and p(3) . Given a sequence of random variables (Yn) ∞ n=0 by the following: The possible values of Yn are 1, 2, 3, 4, and the corresponding probabilities are αn, βn, γn and δn, resp. (as introduced above). Prove for any initial distribution p(0) = (α0, β0, γ0, δ0) that the sequence (Yn) converges in probability towards a random variable Y . Find the distribution of Y . 1) The last coordinate of the matrix equation g = g P with g = (α, β, γ, δ) is given by δ = 1 4 (α + β + γ) + δ, thus α + β + γ = 0. Now α, β, γ ≥ 0, so α = β = γ = 0, and hence δ = 1. The only invariant probability vector is (0, 0, 0, 1). 2) Consider again the last coordinate, δn = 1 4 (αn−1 + βn−1 + γn−1) + δn−1. We have in general δ = 1 − (α + β + γ), so 1 − (α + βn + γn) = 1 4 (αn−1 + βn−1 + γn−1) + 1 − (αn−1 + βn−1 + γn−1) ,
  • 31. Download free ebooks at bookboon.com Stochastic Processes 1 31 3. Markov chains thus αn + βn + γn = 3 4 (αn−1 + βn−1 + γn−1) , and hence by recursion, αn + βn + γn = 3 4 n (α0 + β0 + γ0) . 3) In general, αn = 1 4 αn−1, βn = 1 4 αn−1 + 1 2 βn−1, γn = 1 4 αn−1 + 1 4 βn−1 + 3 4 γn−1, δn = 1 4 (αn−1 + βn−1 + γn−1) + δn−1. Put p(0) = (α0, β0, γ0) = (1, 0, 0, 0), then p(1) = 1 4 , 1 4 , 1 4 , 1 4 , p(2) = 1 16 , 1 16 + 1 8 , 1 16 + 1 16 + 3 16 , 3 16 + 1 4 = 1 16 , 3 16 , 5 6 , 7 16 , p(3) = 1 64 , 1 64 + 3 32 , 1 64 + 3 64 + 15 64 , 9 64 + 7 16 = 1 64 , 7 64 , 19 64 , 37 64 . 4) A qualified guess is that Yn → Y in probability, where P{Y = 4} = 1 and P{Y = j} = 0 for j = 1, 2, 3. We shall prove that P {|Yn − Y | ≥ ε} → 0 for n → ∞ for every fixed ε 0. If ε ∈ ]0, 1[, then P {|Yn − Y | ≥ ε} = 1 − P {|Yn − Y | ε} = 1 − P {Yn − Y = 0} = 1 − δn = αn + βn + γn = 3 4 n (α0 + β0 + γ0) = 3 4 n (1 − δ0) → 0 for n → ∞, and the claim is proved.
  • 32. Download free ebooks at bookboon.com Stochastic Processes 1 32 3. Markov chains Example 3.13 Five balls, 2 white ones and 3 black ones, are distributed in two boxed A and B, such that A contains 2, and B contains 3 balls. At time n (where n = 0, 1, 2, . . . ) we choose at random from each of the two boxes one ball and let the two chosen balls change boxes. In this way we get a Markov chain with 3 states: E0, E1 and E2, according to whether A contains 0, 1 or 2 black balls. 1. Find the corresponding stochastic matrix P. 2. Prove that it is regular, and its invariant probability vector. We let in the following p(n) = (αn, βn, γn) denote the distribution immediately befor the interchange at time t = n. 3. Given the initial distribution p(0) = (1, 0, 0), find the probabilities of state, p(3) = (α3, β3, γ3) and p(4) = (α4, β4, γ4) and prove that $ (α3 − α4) 2 + (β3 − β4) 2 + (γ3 − γ4) 2 0, 07. Figure 1: The two boxes with two black balls in A to the left and 1 black ball in B to the right. 1) Since pij = P{X(n) = j | X(n − 1) = i}, the stochastic matrix is with i as the row number and j as the column number, P = ⎛ ⎝ 0 1 0 1 2 · 1 3 1 2 1 2 · 2 3 0 2 3 1 3 ⎞ ⎠ = ⎛ ⎝ 0 1 0 1 6 1 2 1 3 0 2 3 1 3 ⎞ ⎠ . 2) All elements of P2 = ⎛ ⎝ 0 1 0 1 6 1 2 1 3 0 2 3 1 3 ⎞ ⎠ ⎛ ⎝ 0 1 0 1 6 1 2 1 3 0 2 3 1 3 ⎞ ⎠ = ⎛ ⎝ 1 6 1 2 1 3 1 12 23 36 5 18 1 9 5 9 1 3 ⎞ ⎠
  • 33. Download free ebooks at bookboon.com Stochastic Processes 1 33 3. Markov chains are 0, thus P is regular. We imply from g = g P, i.e. g1 = 1 6 g2, g2 = g1 + 1 2 g2 + 2 3 g3, g3 = 1 3 g2 + 1 3 g3, that (1) g2 = 6 g1, g2 = 2 g1 + 4 3 g3, 2 g3 = g2, hence g3 = 1 2 g2 = 3 g1, and thus by insertion, g1 + g2 + g3 = g1 + 6 g1 + 3 g1 = 10 g1 = 1. The probability vector is g = 1 10 , 6 10 , 3 10 . 3) From (1) follows αn = 1 6 βn−1, βn = αn−1 + 1 2 βn−1 + 2 3 γn−1, γn = 1 3 βn−1 + 1 3 γn−1. www.simcorp.com MITIGATE RISK REDUCE COST ENABLE GROWTH The financial industry needs a strong software platform That’s why we need you SimCorp is a leading provider of software solutions for the financial industry. We work together to reach a common goal: to help our clients succeed by providing a strong, scalable IT platform that enables growth, while mitigating risk and reducing cost. At SimCorp, we value commitment and enable you to make the most of your ambitions and potential. Are you among the best qualified in finance, economics, IT or mathematics? Find your next challenge at www.simcorp.com/careers Please click the advert
  • 34. Download free ebooks at bookboon.com Stochastic Processes 1 34 3. Markov chains Put p(0) = (α0, β0, γ0) = (1, 0, 0). Then p(1) = (α1, β1, γ1) = (0, 1, 0), p(2) = (α2, β2, γ2) = 1 6 , 1 2 , 1 3 , p(3) = (α3, β3, γ3) = 1 12 , 23 36 , 5 18 , p(4) = (α4, β4, γ4) = 23 216 , 127 216 , 11 36 . Then by insertion, $ (α3 − α4) 2 + (β3 − β4) 2 + (γ3 − γ4) = 1 12 − 23 216 2 + 23 36 − 127 216 2 + 5 18 − 11 36 2 = 18 − 23 216 2 + 138 − 127 216 2 + 62 10 − 11 216 2 = 1 216 52 + 112 + 62 = 1 216 √ 25 + 121 + 36 = √ 182 216 14 216 = 7 108 0.07. Example 3.14 Consider a Markov chain of the states E0, E1, . . . , Em and transition probabilities pi,i+1 = 1 − i m , i = 0, 1, 2, . . . , m − 1; pi,i−1 = i m , i = 1, 2, . . . , m; pij = 0 otherwise. Prove that the Markov chain is irreducible, and find its invariant probability vector. (This Markov chain is called Ehrenfest’s model: There are in total m balls in two boxes A and B; at time n we choose at random one ball and move it to the other box, where Ei denotes the state that there are i balls in the box A). The stochastic matrix is P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ 0 1 0 0 · · · 0 0 1 m 0 1 − 1 m 0 · · · 0 0 0 2 m 0 1 − 2 m · · · 0 0 . . . . . . . . . . . . . . . . . . 0 0 0 0 · · · 0 1 m 0 0 0 0 · · · 1 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠ .
  • 35. Download free ebooks at bookboon.com Stochastic Processes 1 35 3. Markov chains We get from the second oblique diagonal the diagram E0 → E1 → · · · → Em−1 → Em, and similarly from the first oblique diagonal, Em → En−1 → · · · → E1 → E0. Hence P is irreducible. The first and last coordinate of g = g P give g0 = 1 m g1 and gm = 1 m gm−1. For the coordinates in between, i.e. for i = 1, . . . , m − 1, we get gi = gi−1pi−1,i + gi+1pi+1,i = 1 − 1 − i m gi−1 + i + 1 m gi+1. Hence, g1 = m g0. If i = 1, then g1 = g0 + 2 m g2, thus g2 = m(m − 1) 2 g0 = m 2 g0. A qualified guess is that gi = m i g0. This is obviously true for i = 0 and i = 1. Then a check gives 1 − i − 1 m m i − 1 + i + 1 m m i + 1 = m + 1 − i m · m! (m + 1 − i)! · 1 (i − 1)! + i + 1 m · m! (i + 1)!(m − 1 − i)! = m! (m − i)!i! i m + m − i m = m i , and we have tested the claim. The rest then follows from that the solution is unique and from the fact that gi = m i g0 solves the problem. Therefore, gi = m i g0, i = 0, 1, 2, . . . , m. We conclude that m i=0 gi = g0 m i=0 m i 1i 1m−i = g0 · 2m = 1, thus g0 = 2−m , and gi = 1 2m m i , i = 0, 1, 2, . . . , m,
  • 36. Download free ebooks at bookboon.com Stochastic Processes 1 36 3. Markov chains corresponding to the probability vector g = 1 2m 1, m 1 , m 2 , · · · , m m − 1 , 1 . Example 3.15 (Continuation of Example 3.14). Let Y (n) = 2X(n) − m, and let en = E{Y (n)}, n ∈ N0. Find en+1 expressed by en and m. Find en, assuming that the process at time t = 0 is in state Ej. If we put p (n) i = P{X(n) = i}, then (cf. Example 3.14), p (n+1) 0 = p (n) 1 p1,0 = 1 m p (n) 1 and pm(n + 1) = p (n) m−1pm−1,m = 1 m p (n) m−1, and p (n+1) i = 1 − i − 1 m p (n) i−1 + i + 1 m p (n) i+1, i = 1, . . . , m − 1. Furthermore, en = E{Y (n)} = 2E{X(n)} − m = 2 m i=1 i p (n) i − m. Hence en+1 = E{Y (n + 1)} = 2E{X(n + 1)} − m = 2 m i=1 i p (n+1) i − m = 2 m−1 i=1 i p (n+1) i + 2m p(n+1) m − m = 2 m−1 i=1 i 1 − i − 1 m p (n) i−1 + 2 m−1 i=1 i · i + 1 m p (n) i+1 + 2m p(n+1) m − m = 2 m−2 i=0 (i + 1) 1 − i m p (n) i + 2 m i=2 (i − 1) · i m p (n) i + 2m p(n+1) m − m = 2 m−2 i=0 (i + 1)p (n= i − 2 m−2 i=0 i m (i + 1)p (n) i + 2 m i=0 i m (i − 1) p (n) i + 2m p(n+1) m − m = 2 m i=0 (i + 1)p (n) i − 2m p (n) m−1 − 2(m + 1)p(n) m − 2 m i=0 i m (i + 1) p (n) i + 2 m − 1 m · m p (n) m−1 +2 · m m (m + 1)P(n) m + 2 m i=0 i m (i − 1)p (n) i + 2m p(n+1) m − m,
  • 37. Download free ebooks at bookboon.com Stochastic Processes 1 37 3. Markov chains and thus en+1 = 2 m i=0 i p (n) i − m + 2 m i=0 p (n) i − 2 m i=0 i m {i + 1 − i + 1}p (n) i +2(m − 1) p (n) m−1 − 2m p (n) m−1 + 2(m + 1)p(n) m − 2(m + 1)p(n) m + 2m p(n+1) m = en + 2 − 4 m m i=0 i p (n) i − 2p (n) m−1 + 2m · 1 m · p (n) m−1 = en + 2 − 2 m 2 m i=0 i p (n) i − m − 2 m · m = en + 2 − 2 m · en − 2 = 1 − 2 m en. Then by recursion, en 1 − 2 m n e0, where e0 = 1 (j − 1) · j m + (j + 1) · 1 − j m − m = 2 j m (j − 1 − j − 1) + j + 1 − m = 2j + 2 − 4j m − m = 2j 1 − 2 m − m 1 − 2 m = (2j − m) 1 − 2 m , hence en = 1 − 2 m n+1 (2j − m). Please click the advert
  • 38. Download free ebooks at bookboon.com Stochastic Processes 1 38 3. Markov chains Example 3.16 Consider a Markov chain of the states E0, E1, . . . , Em and transition probabilities pi,i+1 = p, i = 2, 3, . . . , m − 1; pi,i−1 = q, i = 2, 3, . . . , m − 1; p1,2 = 1, pm,m−1 = 1; pij = 0 otherwise. Her er p 0, q 0, p + q = 1. 1) Prove that the Markov chain is irreducible, and find its invariant probability vector. 2) Is the given Markov chain regular? 1) The stochastic matrix is P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ 0 1 0 0 · · · 0 0 q 0 p 0 · · · 0 0 0 q 0 p · · · 0 0 . . . . . . . . . . . . . . . . . . 0 0 0 0 · · · 0 p 0 0 0 0 · · · 1 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠ . The first oblique diagonal gives Em → Em−1 → · · · → E1 → E0, and the last oblique diagonal gives the diagram E0 → E1 → · · · → Em−1 → Em. It is therefore possible to come from every state to any other by using these diagrams. Hence, the Markov chain is irreducible. The equations of the invariant probability vector are g0 = q g1, g1 = g0 + q g2, gm−1 = p gm−2 + gm, gm = p gm−1, and for the coordinates in between, gj = p gj−1 + q gj+1, j = 2, 3, 4, . . . , m − 2. It follows from the first equation that g1 = 1 q g0. Similarly, from the second equation, q g2 = g1 − g0 = 1 q g0 − g0 = 1 − q q g0 = p q g0, thus g2 = p q2 g0.
  • 39. Download free ebooks at bookboon.com Stochastic Processes 1 39 3. Markov chains A qualified guess is (2) gj = 1 q p q j−1 g0, for j ≥ 1 and j ≤ m − 1. It follows from the above that this formula is true for j = 1 and j = 2. Then we check for j = 2, 3, 4, . . . , m − 2, i.e. p gj−1 + q gj+1 = p q · p q j−2 g0 + q q p q j g0 = 1 q p q j−1 q + q · p q = gj. If j = m − 1, then gm−1 = p gm−2 + gm = p gm−2 + p gm−1, i.e. gm−1 = p q gm−2, proving that (2) also holds for j = m − 1. Finally, gm = p gm−1 = p q p q m−2 g0 = p q m−1 g0. Summing up we get (3) g = g0 1, 1 q , p q2 , . . . , 1 q p q m−2 , p q m−1 . If p = q, i.e. p = 1 2 , then we get the condition 1 = n j=0 gj = 1 + 1 q 1 + p q + p q 2 + · · · + p q m−2 + p q m−1 g0 = ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ 1 + 1 q · 1 − p q m−1 1 − p q + p q m−1 ⎫ ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎭ g0 = ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ 1 + 1 − p q m−1 q − p + p q m−1 ⎫ ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎭ g0 = 1 q − p q − p + 1 + (q − p − 1) p q m−1 g0 = 1 q − p 2q − 2p p q m−1 g0 = 2q q − p 1 − p q m g0, hence g0 = q − p 2q 1 − p q m, which is inserted into (3). When p = q = 1 2 , formula (3) is reduced to g = g0(1, 2, 2, · · · , 2, 1), where 1 = g0{1 + 2(m − 1) + 1} = 2m g0,
  • 40. Download free ebooks at bookboon.com Stochastic Processes 1 40 3. Markov chains so g0 = 1 2m , and g = 1 2m , 1 m , 1 m , · · · , 1 m , 1 2m . 2) The Markov chain is not regular. In fact, all Pn contain zeros. This is easily seen by an example. Let m = 3, and let denote any number 0. Then P = ⎛ ⎜ ⎜ ⎝ 0 0 0 0 0 0 0 0 0 0 ⎞ ⎟ ⎟ ⎠ , P2 = ⎛ ⎜ ⎜ ⎝ 0 0 0 0 0 0 0 0 ⎞ ⎟ ⎟ ⎠ , P = ⎛ ⎜ ⎜ ⎝ 0 0 0 0 0 0 0 0 ⎞ ⎟ ⎟ ⎠ , and we see that P2 has zeros at all places where i + j is odd, and P3 has zeros at all places, where i + i is even. This pattern is repeated, so P2n has the same structure as P2 , and P2n+1 has the same structure as P3 , concerning zeros. Challenging? Not challenging? Try more Try this... www.alloptions.nl/life Please click the advert
  • 41. Download free ebooks at bookboon.com Stochastic Processes 1 41 3. Markov chains Example 3.17 A circle is divided into m arcs E1, E2, . . . , Em, where m ≥ 3. A particle moves in the following way between the states E1, E2, . . . , Em: There is every minute the probability p ∈ ]0, 1[ that it moves from a state to the neighbouring state in the positive sense of the plane, and the probability q = 1 − p that it moves to the neighbouring state in the negative sense of the plane, i.e- pi,i+1 = p, i = 1, 2, . . . , m − 1; pi,i−1 = q, i = 2, 3, . . . , m; pm,1 = p; p1,m = q; pij = 0 otherwise. 1) Find the stochastic matrix. 2) Prove that the Markov chain is irreducible. 3) Prove that the Markov chain is double stochastic, and find the invariant probability vector. 4) Prove that if the particle at time t = 0 is in state Ei, then there is a positive probability that it is in the same state to all of the times t = 2, 4, 6, 8, . . . . 5) Prove that if the particle at time t = 0 is in state Ei, then there is a positive probability that the particle is in the same state at time t = m. 6) Find the values of m, for which the Markov chain is regular. 1) The stochastic matrix is P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ 0 p 0 0 · · · 0 0 q q 0 p 0 · · · 0 0 0 0 q 0 p · · · 0 0 0 . . . . . . . . . . . . . . . . . . . . . 0 0 0 0 · · · q 0 p p 0 0 0 · · · 0 q 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠ .
  • 42. Download free ebooks at bookboon.com Stochastic Processes 1 42 3. Markov chains 2) It follows from E1 → E2 → · · · → Em → E1, that P is irreducible. 3) The sum of each column is p + q = 1, hence the Markov chain is double stochastic. Since P is irreducible, 1 m , 1 m , . . . , 1 m is the only invariant probability vector. 4) This is obvious from the parity. (The same number of steps forward as backward, hence in total an even number). 5) This is also obvious, because the probability is ≥ pm + qm , because pm is the probability of m steps forward, and qm is the probability of m steps backward. 6) It follows from 4. and 5. that if m is odd, then there is a positive probability to be in any given state, when t 3m is even, thus the Markov chain is regular, when m is odd. If m is even, then the Markov chain is not regular, because the difference of the indices of the possible states must always be an even number. We shall therefore have zeros in every matrix Pn . Stand out from the crowd Designed for graduates with less than one year of full-time postgraduate work experience, London Business School’s Masters in Management will expand your thinking and provide you with the foundations for a successful career in business. The programme is developed in consultation with recruiters to provide you with the key skills that top employers demand. Through 11 months of full-time study, you will gain the business knowledge and capabilities to increase your career choices and stand out from the crowd. Applications are now open for entry in September 2011. For more information visit www.london.edu/mim/ email mim@london.edu or call +44 (0)20 7000 7573 Masters in Management London Business School Regent’s Park London NW1 4SA United Kingdom Tel +44 (0)20 7000 7573 Email mim@london.edu www.london.edu/mim/ Fast-track your career Please click the advert
Example 3.18 Given a Markov chain of the states E0, E1, E2, ..., Em (where m ≥ 2) and the transition probabilities
p_{0,i} = a_i, i = 1, 2, ..., m;  p_{i,i−1} = 1, i = 1, 2, ..., m;  p_{ij} = 0 otherwise,
where a_i ≥ 0 for i = 1, 2, ..., m−1, a_m > 0, and Σ_{i=1}^m a_i = 1.

1) Prove that the Markov chain is irreducible.
2) Assume that the process at time t = 0 is in state E0; let T1 denote the random variable which indicates the time of the first return to E0. Find the distribution of T1.
3) Compute the mean of T1.
4) Let T1, T2, ..., Tk denote the times of the first, second, ..., k-th return to E0. Prove for every ε > 0 that
P{ |Tk/k − μ| > ε } → 0 for k → ∞, where μ = E{T1}.
Hint: Apply that Tk = T1 + (T2 − T1) + ⋯ + (Tk − T_{k−1}).

1) The corresponding stochastic matrix is
$$P = \begin{pmatrix}
0 & a_1 & a_2 & \cdots & a_{m-1} & a_m \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}.$$
From this we immediately get the diagram (cf. the oblique diagonal of ones in the matrix)
Em → Em−1 → ⋯ → E1 → E0.
Now a_m > 0, hence also E0 → Em, and we conclude that we can get from any state to any other state, so P is irreducible.
2) The probability of getting from E0 to Ej in the first step is a_j. We then need j steps to move from Ej back to E0, so
P{T1 = j + 1} = a_j, j = 1, 2, ..., m.

3) The mean is
μ = E{T1} = Σ_{j=1}^m (j + 1) a_j = 1 + Σ_{j=1}^m j a_j.

4) Clearly,
Tk = T1 + (T2 − T1) + ⋯ + (Tk − T_{k−1}).
Furthermore, Xj = Tj − T_{j−1} has the same distribution as T1, and T1 and the Xj are mutually independent. Using that V{T1} = σ² < ∞, it follows from Chebyshev's inequality that
P{ |Tk/k − μ| ≥ ε } ≤ σ²/(k ε²) → 0 for k → ∞.
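As a numerical sanity check on 2.–4., one can simulate excursions from E0. The sketch below is my own illustration (the concrete vector a and the sample size are arbitrary choices, not from the original): it draws k independent excursions, so the k-th return time is their sum, and Tk/k should be close to μ by the law of large numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.2, 0.3, 0.1, 0.4])            # a_1, ..., a_m with a_m > 0 and sum 1
m = len(a)
mu = 1 + np.sum(np.arange(1, m + 1) * a)       # E{T1} = 1 + sum_j j*a_j

def excursion_length():
    """One return time to E0: jump E0 -> Ej with prob. a_j, then j forced steps back."""
    j = rng.choice(np.arange(1, m + 1), p=a)
    return j + 1

k = 10_000
Tk = sum(excursion_length() for _ in range(k))  # time of the k-th return
print("T_k / k =", Tk / k, "   mu =", mu)       # the two numbers should be close
```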
Example 3.19 Given a Markov chain of the states E0, E1, ..., Em (where m ≥ 2) and transition probabilities
p_{i,i+1} = p, i = 0, 1, ..., m−1;  p_{i,0} = q, i = 0, 1, ..., m−1;  p_{m,0} = 1;  p_{i,j} = 0 otherwise,
(where p > 0, q > 0, p + q = 1). This can be considered as a model of the following situation: A person participates in a series of games. In each game he has the probability p of winning and the probability q of losing; if he loses a game, he starts from the beginning in the next game. He does the same after m won games in a row. The state Ei corresponds for i = 1, 2, ..., m to the situation that he has won the latest i games.

1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Prove that the Markov chain is regular.
4) Find the stationary distribution.
5) Assume that the process at time t = 0 is in state E0; let T1 denote the random variable which indicates the time of the first return to E0. Find P{T1 = k + 1}, k = 0, 1, 2, ..., m.

1) The stochastic matrix is
$$P = \begin{pmatrix}
q & p & 0 & 0 & \cdots & 0 & 0 \\
q & 0 & p & 0 & \cdots & 0 & 0 \\
q & 0 & 0 & p & \cdots & 0 & 0 \\
\vdots & & & & \ddots & & \vdots \\
q & 0 & 0 & 0 & \cdots & 0 & p \\
1 & 0 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix}.$$

2) The Markov chain is irreducible, because we have the transitions E0 → E1 → E2 → ⋯ → Em → E0.

3) Since the Markov chain is irreducible and p_{0,0} = q > 0, it is also regular.

4) The matrix equation gP = g, where g = (g0, g1, ..., gm), is written
g0 = (g0 + ⋯ + g_{m−1}) q + gm,
g1 = g0 p,
g2 = g1 p,
...
gm = g_{m−1} p,
hence gk = g0 p^k, k = 0, 1, ..., m. It follows from
1 = Σ_{k=0}^m gk = g0 Σ_{k=0}^m p^k = g0 (1 − p^{m+1})/(1 − p) = g0 (1 − p^{m+1})/q,
that
gk = q p^k / (1 − p^{m+1}), k = 0, 1, ..., m.

5) If k ≤ m − 1, then
P{T1 = k + 1} = P{k games are won successively, and the (k+1)-th game is lost} = p^k q.
If k = m, then
P{T1 = m + 1} = P{m games are won successively} = p^m.
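The stationary distribution found in 4. is easy to verify numerically. The short Python sketch below is an added illustration (the values m = 5 and p = 0.6 are arbitrary): it builds the matrix of Example 3.19 and checks that the claimed vector is invariant and sums to 1.

```python
import numpy as np

m, p = 5, 0.6
q = 1 - p

P = np.zeros((m + 1, m + 1))      # states E0, ..., Em
for i in range(m):
    P[i, i + 1] = p               # win: one more game in the row
    P[i, 0] = q                   # lose: start over
P[m, 0] = 1.0                     # after m wins, start over

g = q * p ** np.arange(m + 1) / (1 - p ** (m + 1))
print(np.allclose(g @ P, g), np.isclose(g.sum(), 1.0))   # True True
```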
Example 3.20 Given a Markov chain of the states E1, E2, E3, E4 and E5 with its stochastic matrix P given by
$$P = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
1/3 & 0 & 2/3 & 0 & 0 \\
0 & 1/3 & 0 & 2/3 & 0 \\
0 & 0 & 1/3 & 0 & 2/3 \\
0 & 0 & 0 & 1 & 0
\end{pmatrix}.$$

1. Prove that the Markov chain is irreducible, and find its invariant probability vector.

Another Markov chain of the states E1, E2, E3, E4 and E5 has its stochastic matrix Q given by
$$Q = \begin{pmatrix}
0 & 1/5 & 0 & 0 & 4/5 \\
4/5 & 0 & 0 & 0 & 1/5 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
1/5 & 4/5 & 0 & 0 & 0
\end{pmatrix}.$$

2. Find all probability vectors which are invariant with respect to Q.

3. Assume that an initial distribution is given by q^(0) = (1, 0, 0, 0, 0). Prove that lim_{n→∞} q^(n) exists and find the limit vector.

1) The two oblique diagonals next to the main diagonal give E5 → E4 → E3 → E2 → E1 and E1 → E2 → E3 → E4 → E5, so we can get from any state to any other one, hence P is irreducible. The equations of the invariant probability vector are
p1 = (1/3)p2,
p2 = p1 + (1/3)p3,
p3 = (2/3)p2 + (1/3)p4,
p4 = (2/3)p3 + p5,
p5 = (2/3)p4,
thus
p2 = 3p1, p3 = 3(p2 − p1) = 6p1, p4 = 3p3 − 2p2 = 12p1, p5 = (2/3)p4 = 8p1.
We get
1 = p1 + p2 + p3 + p4 + p5 = p1{1 + 3 + 6 + 12 + 8} = 30 p1,
thus p1 = 1/30, and hence
p = ( 1/30, 1/10, 1/5, 2/5, 4/15 ).

2) The equations of the invariant probability vectors are
q1 = (4/5)q2 + (1/5)q5,
q2 = (1/5)q1 + (4/5)q5,
q3 = q3,
q4 = q4,
q5 = (4/5)q1 + (1/5)q2,
from which it is seen that q3 and q4 can be chosen arbitrarily, if only q3 ≥ 0, q4 ≥ 0 and q3 + q4 ≤ 1. The remaining equations are
5q1 − 4q2 = q5 and −q1 + 5q2 = 4q5,
thus q1 = q2 = q5. The invariant probability vectors of Q are given by
q = (x, x, y, z, x), where x, y, z ≥ 0 and 3x + y + z = 1.

3) We have for every n that q^(n)_3 = q^(n)_4 = 0, because E3 and E4 can only be entered from themselves, so it suffices to consider the restriction of Q to {E1, E2, E5},
$$Q^{(1)} = \begin{pmatrix} 0 & 1/5 & 4/5 \\ 4/5 & 0 & 1/5 \\ 1/5 & 4/5 & 0 \end{pmatrix}.$$
Now Q^(1) Q^(1) contains only positive elements, so Q^(1) is regular. The equations of the corresponding invariant probability vector are
q1 = (4/5)q2 + (1/5)q5,
q2 = (1/5)q1 + (4/5)q5,
q5 = (4/5)q1 + (1/5)q2.
These are the same equations as in 2., and since q1 + q2 + q5 = 1, we have q1 = q2 = q5 = 1/3. It follows that
lim_{n→∞} q^(n) = ( 1/3, 1/3, 0, 0, 1/3 ).

Example 3.21 Given a Markov chain of the 5 states E1, E2, E3, E4 and E5 and the transition probabilities
p_{i,i+1} = i/(i+2), p_{i,1} = 2/(i+2), i = 1, 2, 3, 4;  p_{5,1} = 1;  p_{i,j} = 0 otherwise.

1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Prove that the Markov chain is regular.
4) Find the stationary distribution (the invariant probability vector).
5) Assume that the process at time t = 0 is in state E1. Denote by T the random variable which indicates the time of the first return to E1. Find P{T = k}, k = 1, 2, 3, 4, 5, and compute the mean and variance of T.

1) The matrix P is given by
$$P = \begin{pmatrix}
2/3 & 1/3 & 0 & 0 & 0 \\
1/2 & 0 & 1/2 & 0 & 0 \\
2/5 & 0 & 0 & 3/5 & 0 \\
1/3 & 0 & 0 & 0 & 2/3 \\
1 & 0 & 0 & 0 & 0
\end{pmatrix}.$$

2) It follows from the diagram E1 → E2 → E3 → E4 → E5 → E1 that P is irreducible.
3) Since p_{1,1} = 2/3 > 0, and P is irreducible, we conclude that P is regular.

4) The equations of the invariant probability vector are
p1 = (2/3)p1 + (1/2)p2 + (2/5)p3 + (1/3)p4 + p5,
p2 = (1/3)p1,
p3 = (1/2)p2,
p4 = (3/5)p3,
p5 = (2/3)p4.
We get successively
p4 = (3/2)p5,  p3 = (5/3)p4 = (5/2)p5,  p2 = 2p3 = 5p5,  p1 = 3p2 = 15p5,
so
1 = p1 + p2 + p3 + p4 + p5 = p5 (1 + 3/2 + 5/2 + 5 + 15) = 25 p5.
Thus p5 = 1/25, p4 = 3/50, p3 = 1/10, p2 = 1/5 and p1 = 3/5, and hence
p = ( 3/5, 1/5, 1/10, 3/50, 1/25 ).

5) We immediately get P{T = 1} = 2/3 and P{T > 1} = 1/3. In the latter case we are in state E2, so
P{T = 2} = (1/3)·(1/2) = 1/6 and P{T > 2} = 1/6.
In the latter case we are in state E3, so
P{T = 3} = (1/6)·(2/5) = 1/15 and P{T > 3} = (1/6)·(3/5) = 1/10.
In the latter case we are in state E4, so
P{T = 4} = (1/10)·(1/3) = 1/30 and P{T = 5} = (1/10)·(2/3) = 1/15.
Summing up,
P{T = 1} = 2/3, P{T = 2} = 1/6, P{T = 3} = 1/15, P{T = 4} = 1/30, P{T = 5} = 1/15.
The mean is
E{T} = 2/3 + 2·(1/6) + 3·(1/15) + 4·(1/30) + 5·(1/15) = 1 + 1/3 + 1/3 = 5/3.
Then we get
E{T²} = 2/3 + 4·(1/6) + 9·(1/15) + 16·(1/30) + 25·(1/15) = 2/3 + 2/3 + 3/5 + 8/15 + 5/3 = 62/15,
hence
V{T} = E{T²} − (E{T})² = 62/15 − 25/9 = (186 − 125)/45 = 61/45.
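The step-by-step computation in 5) can be mirrored numerically with a "taboo" decomposition: remove E1, propagate the remaining mass with the sub-matrix, and collect the returns. The sketch below is an added illustration (not part of the original solution); it reproduces the values 2/3, 1/6, 1/15, 1/30, 1/15 as well as E{T} = 5/3 and V{T} = 61/45.

```python
import numpy as np

P = np.array([[2/3, 1/3, 0,   0,   0  ],
              [1/2, 0,   1/2, 0,   0  ],
              [2/5, 0,   0,   3/5, 0  ],
              [1/3, 0,   0,   0,   2/3],
              [1.0, 0,   0,   0,   0  ]])

Q, r = P[1:, 1:], P[1:, 0]        # chain restricted to E2..E5, and one-step returns to E1
probs = [P[0, 0]]                 # P{T = 1}
u = P[0, 1:]                      # mass outside E1 after the first step
for k in range(2, 6):
    probs.append(u @ r)           # return exactly at step k
    u = u @ Q
probs = np.array(probs)
k = np.arange(1, 6)
ET, ET2 = np.sum(k * probs), np.sum(k**2 * probs)
print(probs)                      # [0.6667, 0.1667, 0.0667, 0.0333, 0.0667]
print(ET, ET2 - ET**2)            # 1.6667 (= 5/3) and 1.3556 (= 61/45)
```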
Example 3.22 Consider a Markov chain of the m states E1, E2, ..., Em (where m ≥ 3) and the transition probabilities
p_{i,i+1} = i/(i+2), p_{i,1} = 2/(i+2), i = 1, 2, ..., m−1;  p_{m,1} = 1;  p_{i,j} = 0 otherwise.

1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Prove that the Markov chain is regular.
4) Find the stationary distribution.
5) Assume that the process at time t = 0 is in state E1. Let T denote the random variable which indicates the time of the first return to E1. Find P{T = k}, k = 1, 2, ..., m.
6) Find the mean E{T}.

1) The stochastic matrix is
$$P = \begin{pmatrix}
2/3 & 1/3 & 0 & 0 & \cdots & 0 & 0 \\
2/4 & 0 & 2/4 & 0 & \cdots & 0 & 0 \\
2/5 & 0 & 0 & 3/5 & \cdots & 0 & 0 \\
\vdots & & & & \ddots & & \vdots \\
\frac{2}{m+1} & 0 & 0 & 0 & \cdots & 0 & \frac{m-1}{m+1} \\
1 & 0 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix}.$$

2) We get immediately the diagram E1 → E2 → E3 → ⋯ → Em. Since also Em → E1, we conclude that the Markov chain is irreducible.

3) We have proved that P is irreducible, and since p_{1,1} = 2/3 > 0, it follows that P is regular.

4) The equations of the stationary distribution are
g1 = (2/3)g1 + (2/4)g2 + (2/5)g3 + ⋯ + (2/(m+1))g_{m−1} + gm,
g2 = (1/3)g1,
g3 = (2/4)g2,
...
g_{m−1} = ((m−2)/m) g_{m−2},
gm = ((m−1)/(m+1)) g_{m−1},
thus
g1 = (3/1)g2,  g2 = (4/2)g3,  ...,  g_{m−2} = (m/(m−2))g_{m−1},  g_{m−1} = ((m+1)/(m−1))gm.
Since gj = ((j+2)/j) g_{j+1}, it follows by recursion for j ≤ m − 2 that
gj = (j+2)/j · (j+3)/(j+1) · (j+4)/(j+2) ⋯ (m−1)/(m−3) · m/(m−2) · (m+1)/(m−1) · gm = m(m+1)/(j(j+1)) · gm.
A check shows that this is also true for j = m − 1 and j = m, so in general
gj = m(m+1)/(j(j+1)) · gm, j = 1, 2, ..., m.
Then
1 = Σ_{j=1}^m gj = m(m+1) gm Σ_{j=1}^m 1/(j(j+1)) = m(m+1)(1 − 1/(m+1)) gm = m² gm,
i.e. gm = 1/m², and
gj = (m+1)/m · 1/(j(j+1)), j = 1, ..., m.

5) Clearly, P{T = 1} = 2/3, and by inspecting the matrix we get
P{T = 2} = (1/3)·(2/4) = 1/6 and P{T = 3} = (1/3)·(2/4)·(2/5) = 1/15.
In order to return for the first time at step k ≤ m − 1, the first k − 1 steps must follow the oblique diagonal E1 → E2 → ⋯ → Ek, and the k-th step must lead back to E1, so
P{T = k} = (1/3)·(2/4)·(3/5) ⋯ ((k−1)/(k+1)) · 2/(k+2) = (1·2·2)/(k(k+1)(k+2)) = 4/(k(k+1)(k+2)).
A small check shows that this result is correct for k = 1, 2, 3. Finally,
P{T = m} = (1/3)·(2/4) ⋯ ((m−1)/(m+1)) = 2/(m(m+1)).

6) The mean is
E{T} = Σ_{k=1}^m k·P{T = k} = Σ_{k=1}^{m−1} 4/((k+1)(k+2)) + 2/(m+1)
= 4 Σ_{k=1}^{m−1} ( 1/(k+1) − 1/(k+2) ) + 2/(m+1) = 4( 1/2 − 1/(m+1) ) + 2/(m+1) = 2( 1 − 1/(m+1) ) = 2m/(m+1).
Example 3.23 Given a Markov chain of the m states E1, E2, ..., Em (where m ≥ 3) and the transition probabilities
p_{i,i} = 1 − 2p, i = 1, 2, ..., m;  p_{i,i−1} = p_{i,i+1} = p, i = 2, ..., m−1;  p_{1,2} = p_{m,m−1} = 2p;  p_{i,j} = 0 otherwise,
where p ∈ ]0, 1/2].

1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Find the invariant probability vector.
4) Prove that the Markov chain is regular for p ∈ ]0, 1/2[, and not regular for p = 1/2.
5) Compute p^(2)_{1,1}.
6) In the case m = 5 and p = 1/2, compute lim_{n→∞} p^(2n−1)_{1,1} and lim_{n→∞} p^(2n)_{1,1}.

1) The stochastic matrix is
$$P = \begin{pmatrix}
1-2p & 2p & 0 & 0 & \cdots & 0 & 0 & 0 \\
p & 1-2p & p & 0 & \cdots & 0 & 0 & 0 \\
0 & p & 1-2p & p & \cdots & 0 & 0 & 0 \\
\vdots & & & & \ddots & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & p & 1-2p & p \\
0 & 0 & 0 & 0 & \cdots & 0 & 2p & 1-2p
\end{pmatrix}.$$

2) It follows from the diagram
E1 → E2 → ⋯ → Em → Em−1 → ⋯ → E1
that P is irreducible.

3) The equations are
p1 = (1−2p)p1 + p·p2,
p2 = 2p·p1 + (1−2p)p2 + p·p3,
pi = p·p_{i−1} + (1−2p)pi + p·p_{i+1}, i = 3, 4, ..., m−2,
p_{m−1} = p·p_{m−2} + (1−2p)p_{m−1} + 2p·pm,
pm = p·p_{m−1} + (1−2p)pm.
They are reduced to
p2 = 2p1,
p3 = 2p2 − 2p1,
p_{i+1} = 2pi − p_{i−1}, i = 3, 4, ..., m−2,
p_{m−2} = 2p_{m−1} − 2pm,
p_{m−1} = 2pm.
Hence p2 = 2p1, p3 = 2p2 − 2p1 = 4p1 − 2p1 = 2p1, and
p_{i+1} − pi = pi − p_{i−1} = ⋯ = p3 − p2 = 2p1 − 2p1 = 0,
thus p_{m−1} = p_{m−2} = ⋯ = p3 = p2. Finally, p_{m−1} = 2pm. Summing up we get pm = p1 and p2 = ⋯ = p_{m−1} = 2p1, whence
1 = Σ_{n=1}^m pn = p1{1 + 2(m−2) + 1} = 2(m−1)p1,
i.e. p1 = 1/(2(m−1)), and the invariant probability vector is
p = ( 1/(2(m−1)), 1/(m−1), 1/(m−1), ..., 1/(m−1), 1/(2(m−1)) ).

4) If p < 1/2, then all p_{i,i} = 1 − 2p > 0. Since P according to 2. is irreducible, P is then regular. If p = 1/2, then
$$P = \begin{pmatrix}
0 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 \\
1/2 & 0 & 1/2 & 0 & \cdots & 0 & 0 & 0 \\
0 & 1/2 & 0 & 1/2 & \cdots & 0 & 0 & 0 \\
\vdots & & & & \ddots & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1/2 & 0 & 1/2 \\
0 & 0 & 0 & 0 & \cdots & 0 & 1 & 0
\end{pmatrix}.$$
It follows that if p^(2)_{i,j} ≠ 0 then i − j is an even integer, and if p^(3)_{i,j} ≠ 0 then i − j is an odd integer, etc. It therefore follows that P^n will always contain zeros, and we conclude that P is not regular for p = 1/2.
5) The element p^(2)_{1,1} of P² is
p^(2)_{1,1} = (1 − 2p)(1 − 2p) + 2p·p = 1 − 4p + 6p².

6) If we put m = 5 and p = 1/2, then we get the stochastic matrix
$$P = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
1/2 & 0 & 1/2 & 0 & 0 \\
0 & 1/2 & 0 & 1/2 & 0 \\
0 & 0 & 1/2 & 0 & 1/2 \\
0 & 0 & 0 & 1 & 0
\end{pmatrix}.$$
Since i = j = 1, clearly i − j = 0 is an even integer, so it follows immediately from 4. that p^(2n−1)_{1,1} = 0 for all n, and hence lim_{n→∞} p^(2n−1)_{1,1} = 0.
In the computation of p^(2n)_{1,1} we consider instead q^(n)_{1,1} in Q^n = (P²)^n, where
$$Q = P^2 = \begin{pmatrix}
1/2 & 0 & 1/2 & 0 & 0 \\
0 & 3/4 & 0 & 1/4 & 0 \\
1/4 & 0 & 1/2 & 0 & 1/4 \\
0 & 1/4 & 0 & 3/4 & 0 \\
0 & 0 & 1/2 & 0 & 1/2
\end{pmatrix}.$$
By 3. the invariant probability vector is ( 1/8, 1/4, 1/4, 1/4, 1/8 ), and
(1/n) Σ_{i=1}^n P^i → G for n → ∞,
where each row in G is ( 1/8, 1/4, 1/4, 1/4, 1/8 ). From p^(2i−1)_{1,1} = 0 it follows that
(1/(2n)) Σ_{i=1}^{2n} p^(i)_{1,1} = (1/(2n)) Σ_{j=1}^n p^(2j)_{1,1} → g1 = 1/8 for n → ∞.
If lim_{n→∞} p^(2n)_{1,1} exists, this will imply that (1/2) lim_{n→∞} p^(2n)_{1,1} = 1/8, hence lim_{n→∞} p^(2n)_{1,1} = 1/4.
Now, since
q^(n+1)_{1,1} = p^(2n+2)_{1,1} = (1/2) p^(2n)_{1,1} + (1/4) p^(2n)_{1,3} ≤ p^(2n)_{1,1} = q^(n)_{1,1},
the sequence ( p^(2n)_{1,1} ) is decreasing and bounded from below, hence convergent. We therefore conclude that
lim_{n→∞} p^(2n)_{1,1} = 1/4.
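Both limits in 6) can be confirmed by brute force. The following added sketch (my illustration, not from the original) simply raises the 5×5 matrix above to successive powers and prints the (1,1) element: the odd powers give 0, the even powers decrease towards 1/4.

```python
import numpy as np

P = np.array([[0,   1,   0,   0,   0  ],
              [0.5, 0,   0.5, 0,   0  ],
              [0,   0.5, 0,   0.5, 0  ],
              [0,   0,   0.5, 0,   0.5],
              [0,   0,   0,   1,   0  ]])

Pn = np.eye(5)
for n in range(1, 41):
    Pn = Pn @ P
    if n <= 4 or n % 10 == 0:
        print(n, round(Pn[0, 0], 6))   # 0 for odd n; 0.5, 0.375, ... -> 0.25 for even n
```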
Example 3.24 A Markov chain has its stochastic matrix Q given by
$$Q = \begin{pmatrix} 0 & 1/2 & 1/2 \\ 1/3 & 0 & 2/3 \\ 1/3 & 2/3 & 0 \end{pmatrix}.$$

1. Prove that the Markov chain is regular, and find the invariant probability vector.

Another Markov chain with 5 states has its stochastic matrix P given by
$$P = \begin{pmatrix}
0 & 1/2 & 1/2 & 0 & 0 \\
1/3 & 0 & 2/3 & 0 & 0 \\
1/3 & 2/3 & 0 & 0 & 0 \\
1/3 & 1/3 & 0 & 1/6 & 1/6 \\
1/2 & 0 & 0 & 3/8 & 1/8
\end{pmatrix}.$$

2. Prove that this Markov chain is not irreducible.
3. Prove for any initial distribution p^(0) = ( p^(0)_1, ..., p^(0)_5 ) that
p^(n)_4 + p^(n)_5 ≤ (1/2)( p^(n−1)_4 + p^(n−1)_5 ), n ∈ N,
and then prove that
p^(n)_4 + p^(n)_5 ≤ (1/2)^n, n ∈ N.
4. Prove that the limit lim_{n→∞} p^(n) exists and find the limit vector.

1) It follows from Q above that
$$Q^2 = \begin{pmatrix} 1/3 & 1/3 & 1/3 \\ 2/9 & 11/18 & 1/6 \\ 2/9 & 1/6 & 11/18 \end{pmatrix}.$$
All elements of Q² are positive, thus Q is regular. The invariant probability vector g = (g1, g2, g3) satisfies i) g1, g2, g3 ≥ 0, ii) g1 + g2 + g3 = 1, iii) gQ = g. The latter condition iii) is written
(1) g1 = (1/3)g2 + (1/3)g3,
(2) g2 = (1/2)g1 + (2/3)g3,
(3) g3 = (1/2)g1 + (2/3)g2.
When we insert (1) into (2), we get
g2 = (1/6)g2 + (1/6)g3 + (2/3)g3, thus g2 = g3.
Then it follows from (1) that g1 = (2/3)g2. Furthermore, ii) implies that
(2/3)g2 + g2 + g2 = (8/3)g2 = 1, thus g2 = 3/8 = g3 and g1 = (2/3)·(3/8) = 1/4,
so
g = ( 1/4, 3/8, 3/8 ).

2) We notice that Q is contained in P as the upper left (3 × 3) sub-matrix, while the upper right (3 × 2) block of P is zero. Hence {E1, E2, E3} is a proper closed subset, and P is not irreducible.
3) It follows from p^(n) = p^(n−1) P that
p^(n)_4 = (1/6)p^(n−1)_4 + (3/8)p^(n−1)_5,
p^(n)_5 = (1/6)p^(n−1)_4 + (1/8)p^(n−1)_5,
hence by addition,
p^(n)_4 + p^(n)_5 = (1/3)p^(n−1)_4 + (1/2)p^(n−1)_5 ≤ (1/2)( p^(n−1)_4 + p^(n−1)_5 ).
When this inequality is iterated, we get
p^(n)_4 + p^(n)_5 ≤ (1/2)( p^(n−1)_4 + p^(n−1)_5 ) ≤ ⋯ ≤ (1/2)^n ( p^(0)_4 + p^(0)_5 ) ≤ (1/2)^n.

4) It follows from 3. that p^(n)_4 → 0 and p^(n)_5 → 0 for n → ∞, so we end in the closed subset {E1, E2, E3}. Inside this closed subset the behaviour is governed by the stochastic matrix Q, the invariant probability vector of which was found in 1. Hence it also follows for P that
p^(n) → ( 1/4, 3/8, 3/8, 0, 0 ) for n → ∞.
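Iterating p^(n) = p^(n−1)P from an arbitrary initial distribution illustrates 4. numerically. The sketch below is an added illustration (the starting vector is an arbitrary choice of mine):

```python
import numpy as np

P = np.array([[0,   1/2, 1/2, 0,   0  ],
              [1/3, 0,   2/3, 0,   0  ],
              [1/3, 2/3, 0,   0,   0  ],
              [1/3, 1/3, 0,   1/6, 1/6],
              [1/2, 0,   0,   3/8, 1/8]])

p = np.array([0.1, 0.2, 0.3, 0.2, 0.2])   # any initial distribution
for _ in range(200):
    p = p @ P
print(np.round(p, 6))                      # -> [0.25, 0.375, 0.375, 0, 0]
```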
Example 3.25 Given a Markov chain of 5 states E1, E2, E3, E4 and E5 and transition probabilities
p_{11} = p_{55} = 1,  p_{23} = p_{34} = p_{45} = 2/3,  p_{21} = p_{32} = p_{43} = 1/3,  p_{ij} = 0 otherwise.

1) Find the stochastic matrix P.
2) Find all invariant probability vectors of P.
3) Compute the matrix P².
4) Prove for any initial distribution p^(0) = ( p^(0)_1, ..., p^(0)_5 ) that
p^(n+2)_2 + p^(n+2)_3 + p^(n+2)_4 ≤ (2/3)( p^(n)_2 + p^(n)_3 + p^(n)_4 ), n ∈ N0,
and then prove that
lim_{n→∞} p^(n)_2 = lim_{n→∞} p^(n)_3 = lim_{n→∞} p^(n)_4 = 0.
5) Let the initial distribution be given by q^(0) = (0, 1, 0, 0, 0). Find lim_{n→∞} q^(n).
6) We assume that the process at time t = 0 is in state E2. Let T denote the random variable which indicates the time when the process for the first time gets to either the state E1 or the state E5. Find the mean E{T}.

1) The stochastic matrix is
$$P = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1/3 & 0 & 2/3 & 0 & 0 \\
0 & 1/3 & 0 & 2/3 & 0 \\
0 & 0 & 1/3 & 0 & 2/3 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$

Remark 3.2 We notice that the situation can be interpreted as a random walk on the set {1, 2, 3, 4, 5} with two absorbing barriers. ♦
2) The equations of the invariant probability vectors are
g1 = g1 + (1/3)g2,
g2 = (1/3)g3,
g3 = (2/3)g2 + (1/3)g4,
g4 = (2/3)g3,
g5 = (2/3)g4 + g5,
from which it is immediately seen that g2 = 0, g3 = 0, g4 = 0, so the only constraint on g1 ≥ 0 and g5 ≥ 0 is that g1 + g5 = 1. Putting g1 = x ∈ [0, 1], it follows that all invariant probability vectors are given by
g_x = (x, 0, 0, 0, 1 − x), x ∈ [0, 1].

Remark 3.3 We note that we have a single infinity (a one-parameter family) of invariant probability vectors. ♦

3) By a simple computation,
$$P^2 = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1/3 & 2/9 & 0 & 4/9 & 0 \\
1/9 & 0 & 4/9 & 0 & 4/9 \\
0 & 1/9 & 0 & 2/9 & 2/3 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$

4) It follows from p^(n+2) = p^(n) P² that in particular
p^(n+2)_2 = (2/9)p^(n)_2 + (1/9)p^(n)_4,
p^(n+2)_3 = (4/9)p^(n)_3,
p^(n+2)_4 = (4/9)p^(n)_2 + (2/9)p^(n)_4,
hence
p^(n+2)_2 + p^(n+2)_3 + p^(n+2)_4 = (2/3)p^(n)_2 + (4/9)p^(n)_3 + (1/3)p^(n)_4 ≤ (2/3)( p^(n)_2 + p^(n)_3 + p^(n)_4 ).
Since p^(n)_i ≥ 0, this implies that
0 ≤ p^(2n)_2 + p^(2n)_3 + p^(2n)_4 ≤ (2/3)^n ( p^(0)_2 + p^(0)_3 + p^(0)_4 ) → 0 for n → ∞,
and
0 ≤ p^(2n+1)_2 + p^(2n+1)_3 + p^(2n+1)_4 ≤ (2/3)^n ( p^(1)_2 + p^(1)_3 + p^(1)_4 ) → 0 for n → ∞,
thus lim_{n→∞} ( p^(n)_2 + p^(n)_3 + p^(n)_4 ) = 0. Now p^(n)_i ≥ 0, so we conclude that
lim_{n→∞} p^(n)_2 = lim_{n→∞} p^(n)_3 = lim_{n→∞} p^(n)_4 = 0.

5) Let us rename the states E'_0, E'_1, E'_2, E'_3, E'_4 (so that E'_i = E_{i+1}). We first compute the probability of starting at E'_1 and ending in E'_0. The parameters of the ruin problem are N = 4, k = 1, q = 1/3, p = 2/3, so this probability is given by the known formula
( (q/p)^k − (q/p)^N ) / ( 1 − (q/p)^N ) = ( 1/2 − (1/2)^4 ) / ( 1 − (1/2)^4 ) = 7/15.
It follows from the first results of the example that the structure of the limit vector is (x, 0, 0, 0, 1 − x). Hence
q^(n) → ( 7/15, 0, 0, 0, 8/15 ).

6) Here it is again advantageous to use the theory of the ruin problem. With the same parameters, the expected duration is
μ = E{T} = k/(q − p) − N/(q − p) · ( 1 − (q/p)^k ) / ( 1 − (q/p)^N ) = −3 + 12 · (1/2)/(15/16) = −3 + 32/5 = 17/5,
so the mean is 17/5.

Remark 3.4 The questions 5 and 6 can of course also be solved without using the theory of the ruin problem. ♦
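One way to avoid quoting the ruin-problem formulas is to solve the standard absorbing-chain equations directly: with Q the sub-matrix on the transient states and R the block into the absorbing states, the absorption probabilities are (I − Q)^{-1}R and the expected absorption times are (I − Q)^{-1}1. The sketch below is an added illustration of this alternative route; it reproduces 7/15 and 17/5.

```python
import numpy as np

P = np.array([[1,   0,   0,   0,   0  ],
              [1/3, 0,   2/3, 0,   0  ],
              [0,   1/3, 0,   2/3, 0  ],
              [0,   0,   1/3, 0,   2/3],
              [0,   0,   0,   0,   1  ]])

transient, absorbing = [1, 2, 3], [0, 4]      # E2, E3, E4 are transient; E1, E5 absorb
Q = P[np.ix_(transient, transient)]
R = P[np.ix_(transient, absorbing)]
N = np.linalg.inv(np.eye(3) - Q)              # fundamental matrix
print((N @ R)[0])                             # [0.4667, 0.5333] = [7/15, 8/15] from E2
print((N @ np.ones(3))[0])                    # 3.4 = 17/5, expected absorption time from E2
```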
Example 3.26 Given a Markov chain of the states E1, E2, ..., Em, where m ≥ 3, and of the transition probabilities
p_{i,i} = 1 − 2p, i = 2, 3, ..., m;  p_{i,i+1} = p, i = 1, 2, ..., m−1;  p_{i,i−1} = p, i = 2, 3, ..., m−1;  p_{1,1} = 1 − p;  p_{m,m−1} = 2p;  p_{i,j} = 0 otherwise
(here p is a number in the interval ]0, 1/2]).

1. Find the stochastic matrix.
2. Prove that the Markov chain is irreducible.
3. Prove that the Markov chain is regular.
4. Find the invariant probability vector of P.

If the process at time t = 0 is in state Ek, then the process will with probability 1 reach either state E1 or Em at some time. Let ak denote the probability of getting to E1 before Em, when we start at Ek, for k = 1, 2, ..., m. In particular, a1 = 1 and am = 0.

5. Prove that
(4) ak = p a_{k+1} + (1 − 2p)ak + p a_{k−1}, k = 2, 3, ..., m−1.
6. Apply (4) to find the probabilities ak, k = 2, 3, ..., m−1.

1) The stochastic matrix is
$$P = \begin{pmatrix}
1-p & p & 0 & 0 & \cdots & 0 & 0 & 0 \\
p & 1-2p & p & 0 & \cdots & 0 & 0 & 0 \\
0 & p & 1-2p & p & \cdots & 0 & 0 & 0 \\
\vdots & & & & \ddots & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1-2p & p & 0 \\
0 & 0 & 0 & 0 & \cdots & p & 1-2p & p \\
0 & 0 & 0 & 0 & \cdots & 0 & 2p & 1-2p
\end{pmatrix}.$$

2) Since p ≠ 0, it follows from the two oblique diagonals that we have the transitions
E1 → E2 → ⋯ → Em−1 → Em → Em−1 → ⋯ → E2 → E1,
proving that P is irreducible.

3) It follows from P being irreducible, and p_{1,1} = 1 − p > 0 for p ∈ ]0, 1/2], that P is regular.
4) The equations of the invariant probability vector are
g1 = (1−p)g1 + p g2,
gj = p g_{j−1} + (1−2p)gj + p g_{j+1}, j = 2, 3, ..., m−2,
g_{m−1} = p g_{m−2} + (1−2p)g_{m−1} + 2p gm,
gm = p g_{m−1} + (1−2p)gm,
thus
g2 = g1,
2gj = g_{j−1} + g_{j+1}, j = 2, 3, ..., m−2,
2g_{m−1} = g_{m−2} + 2gm,
2gm = g_{m−1}.
We get by a backwards recursion that
2gm = g_{m−1} = g_{m−2} = ⋯ = g2 = g1,
so
1 = Σ_{k=1}^m gk = 2(m−1)gm + gm = (2m−1)gm,
and the invariant probability vector is
g = ( 2/(2m−1), 2/(2m−1), ..., 2/(2m−1), 1/(2m−1) ).

5) If the process starts at state Ek, k = 2, 3, ..., m−1, then after 1 step it is with probability p in state E_{k−1}, with probability 1 − 2p in Ek, and with probability p in E_{k+1}. Conditioning on this first step gives precisely (4), so
ak = p a_{k+1} + (1 − 2p)ak + p a_{k−1}, k = 2, 3, ..., m−1.

6) A reduction of (4) gives
2ak = a_{k+1} + a_{k−1}, k = 2, 3, ..., m−1,
or more conveniently
a_{k−1} − ak = ak − a_{k+1}, k = 2, 3, ..., m−1.
Hence
1 − a2 = a1 − a2 = a2 − a3 = ⋯ = a_{m−1} − am = a_{m−1} − 0 = a_{m−1},
so 1 = a2 + a_{m−1}. On the other hand,
a2 = (a2 − a3) + (a3 − a4) + ⋯ + (a_{m−1} − am) = (m−2)a_{m−1},
hence by insertion 1 = a2 + a_{m−1} = (m−1)a_{m−1}, thus a_{m−1} = 1/(m−1). In general,
ak = (ak − a_{k+1}) + (a_{k+1} − a_{k+2}) + ⋯ + (a_{m−1} − am) = (m−k)a_{m−1} = (m−k)/(m−1)
for k = 2, ..., m−1. A simple check shows that this is also true for k = 1 and k = m, so
ak = (m−k)/(m−1), k = 1, 2, ..., m.
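The boundary value problem (4) reduces to 2a_k = a_{k+1} + a_{k−1}, in which p drops out, so the answer does not depend on p. This can be checked by solving the linear system numerically; the following added sketch (my own illustration, with an arbitrary m) confirms a_k = (m − k)/(m − 1).

```python
import numpy as np

m = 8
A = np.zeros((m, m))
b = np.zeros(m)
A[0, 0], b[0] = 1.0, 1.0            # a_1 = 1
A[m-1, m-1], b[m-1] = 1.0, 0.0      # a_m = 0
for k in range(1, m - 1):           # interior equations 2a_k - a_{k-1} - a_{k+1} = 0
    A[k, k-1], A[k, k], A[k, k+1] = -1.0, 2.0, -1.0

a = np.linalg.solve(A, b)
print(np.allclose(a, (m - np.arange(1, m + 1)) / (m - 1)))   # True
```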
(Figure: a circle divided into the five arcs E1, E2, E3, E4, E5.)

Example 3.27 A circle is divided into 5 arcs E1, E2, E3, E4 and E5. A particle moves in the following way between the 5 states: At every time unit there is the probability p ∈ ]0, 1[ of walking two steps forward in the positive direction, and the probability q = 1 − p of walking one step backwards, so the transition probabilities are
p_{1,3} = p_{2,4} = p_{3,5} = p_{4,1} = p_{5,2} = p,  p_{1,5} = p_{2,1} = p_{3,2} = p_{4,3} = p_{5,4} = q,  p_{i,j} = 0 otherwise.

1) Find the stochastic matrix P of the Markov chain.
2) Prove that the Markov chain is irreducible.
3) Find the invariant probability vector.
4) Assume that the particle at t = 0 is in state E1. Find the probability that the particle is in state E1 at t = 3, and the probability that the particle is in state E1 at t = 4.
5) Check if the Markov chain is regular.

1) The stochastic matrix is
$$P = \begin{pmatrix}
0 & 0 & p & 0 & q \\
q & 0 & 0 & p & 0 \\
0 & q & 0 & 0 & p \\
p & 0 & q & 0 & 0 \\
0 & p & 0 & q & 0
\end{pmatrix}.$$

2) The oblique diagonal in the stochastic matrix below the main diagonal gives the transitions E5 → E4 → E3 → E2 → E1. Since also E1 → E5, we conclude that P is irreducible.
3) Since the matrix is doubly stochastic, the invariant probability vector is ( 1/5, 1/5, 1/5, 1/5, 1/5 ).

4) Let the particle start at E1 and follow the tree of possible paths, where in each step the particle either moves two arcs forward (probability p) or one arc back (probability q). We can get back to E1 in three steps along exactly three paths,
E1 →(p) E3 →(q) E2 →(q) E1,  E1 →(q) E5 →(p) E2 →(q) E1,  E1 →(q) E5 →(q) E4 →(p) E1,
each of probability p q², so
P{T = 3} = 3 p q².
Analogously we reach E1 in four steps along four paths (three double steps forward and one single step back, in some order), all of probability p³ q, so
P{T = 4} = 4 p³ q.

5) It follows from the tree of paths that from E1 we can reach every state in 4 steps with positive probability. By the symmetry of the circle this also holds for any other initial state Ek, so P⁴ has only positive elements, and hence P is regular.

Example 3.28 Given a Markov chain with 5 states E0, E1, E2, E3 and E4 and the transition probabilities
p_{0,2} = p_{4,2} = 1,  p_{1,2} = p_{2,3} = p_{3,4} = 1/4,  p_{1,0} = p_{2,1} = p_{3,2} = 3/4,  p_{i,j} = 0 otherwise.
This can be considered as a model of the following situation: Two gamblers, Peter and Paul, participate in a series of games, in each of which Peter wins with probability 1/4 and loses with probability 3/4; if Peter wins, he receives 1 $ from Paul, and if he loses, he gives 1 $ to Paul. The two gamblers have 4 $ in total, and the state Ei corresponds to Peter having i $ (and Paul having 4 − i $). To avoid that the game stops because one of the gamblers has 0 $, they agree that in that case the gambler with 0 $ receives 2 $ from the gambler with 4 $.

1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Find the invariant probability vector.
4) Compute p^(2)_{22} and p^(3)_{22}.
5) Check if the Markov chain is regular.
6) At time t = 0 the process is in state E0. Find the probability that the process returns to E0 before it arrives at E4.

1) The stochastic matrix is
$$P = \begin{pmatrix}
0 & 0 & 1 & 0 & 0 \\
3/4 & 0 & 1/4 & 0 & 0 \\
0 & 3/4 & 0 & 1/4 & 0 \\
0 & 0 & 3/4 & 0 & 1/4 \\
0 & 0 & 1 & 0 & 0
\end{pmatrix}.$$

2) An analysis of P provides us with the diagram
E0 ← E1 ↔ E2 ↔ E3 → E4, together with E0 → E2 and E4 → E2.
It follows from this diagram that we can come from any state Ei to any other state Ej, so the Markov chain is irreducible.

3) The equations of the invariant probability vector are
g0 = (3/4)g1,
g1 = (3/4)g2,
g2 = g0 + (1/4)g1 + (3/4)g3 + g4,
g3 = (1/4)g2,
g4 = (1/4)g3,
thus g3 = 4g4, g2 = 16g4, g1 = 12g4, g0 = 9g4, and
1 = g0 + g1 + g2 + g3 + g4 = (9 + 12 + 16 + 4 + 1)g4 = 42 g4,
hence
g = (g0, g1, g2, g3, g4) = ( 3/14, 2/7, 8/21, 2/21, 1/42 ).

4) It follows from
$$P^2 = \begin{pmatrix}
0 & 3/4 & 0 & 1/4 & 0 \\
0 & 3/16 & 3/4 & 1/16 & 0 \\
9/16 & 0 & 3/8 & 0 & 1/16 \\
0 & 9/16 & 1/4 & 3/16 & 0 \\
0 & 3/4 & 0 & 1/4 & 0
\end{pmatrix}$$
that p^(2)_{22} = 3/8. (Notice that the indices start at 0.) Hence, multiplying row 2 of P by column 2 of P²,
p^(3)_{22} = (0, 3/4, 0, 1/4, 0) · (0, 3/4, 3/8, 1/4, 0)ᵀ = 9/16 + 1/16 = 5/8.

5) From P² we get E0 → E1 → E2 and E2 → E0, thus E0 ↔ E1 ↔ E2.
Furthermore, in P² we also have E3 → E1 and E4 → E1, as well as E1 → E3 and E2 → E4, so P² is seen to be irreducible. Since p^(2)_{22} = 3/8 > 0, it follows that P², and hence also P, is regular.

6) When t = 0 we are in state E0, and at t = 1 we are with probability 1 in state E2. For t ≥ 1 we denote by ak the probability of getting from Ek to E0 before E4. Clearly a0 = 1 and a4 = 0, and
a2 = (3/4)a1 + (1/4)a3,
a1 = (3/4)a0 + (1/4)a2 = 3/4 + (1/4)a2,
a3 = (3/4)a2 + (1/4)a4 = (3/4)a2.
When the latter two equations are inserted into the first one, we get
a2 = (3/4)( 3/4 + (1/4)a2 ) + (1/4)·(3/4)a2 = 9/16 + (3/16)a2 + (3/16)a2,
thus 16 a2 = 9 + 6 a2, i.e. a2 = 9/10. The wanted probability is a2 = 9/10, because the first step takes E0 to E2 with probability 1, so we might as well start from E2.

Example 3.29 Given a Markov chain of 5 states E1, E2, E3, E4 and E5 and the transition probabilities
p_{1,2} = p_{1,3} = p_{1,4} = p_{1,5} = 1/4,  p_{2,3} = p_{2,4} = p_{2,5} = 1/3,  p_{3,4} = p_{3,5} = 1/2,  p_{4,5} = 1,  p_{5,1} = 1,  p_{i,j} = 0 otherwise.

1) Find the stochastic matrix.
2) Prove that the Markov chain is irreducible.
3) Find the invariant probability vector.
4) Check if the Markov chain is regular.
5) At time t = 0 the process is in state E1. Denote by T1 the random variable which indicates the time of the first return to E1. Compute P{T1 = k}, k = 2, 3, 4, 5, and find the mean of T1.

1) The stochastic matrix is
$$P = \begin{pmatrix}
0 & 1/4 & 1/4 & 1/4 & 1/4 \\
0 & 0 & 1/3 & 1/3 & 1/3 \\
0 & 0 & 0 & 1/2 & 1/2 \\
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0
\end{pmatrix}.$$

2) It follows from the diagram E5 → E1 → E2 → E3 → E4 → E5 that the Markov chain is irreducible.
3) The equations of the invariant probability vector are
g1 = g5,
g2 = (1/4)g1,
g3 = (1/4)g1 + (1/3)g2 = (1/3)g1,
g4 = (1/4)g1 + (1/3)g2 + (1/2)g3 = (1/2)g1,
where
1 = g1 + g2 + g3 + g4 + g5 = g1( 1 + 1/4 + 1/3 + 1/2 + 1 ) = (37/12)g1,
thus g1 = 12/37 and
g = (1/37)(12, 3, 4, 6, 12).

4) It follows from the computation
$$P^2 = \begin{pmatrix}
1/4 & 0 & 1/12 & 5/24 & 11/24 \\
1/3 & 0 & 0 & 1/6 & 1/2 \\
1/2 & 0 & 0 & 0 & 1/2 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1/4 & 1/4 & 1/4 & 1/4
\end{pmatrix}$$
that in P² we have the transitions E2 → E1, E3 → E1, E4 → E1 and E1 → E3, E1 → E4, i.e. E1 ↔ E3, E1 ↔ E4 and E2 → E1 → E5 → E2, so we can get from any state Ei to any other state Ej, and P² is irreducible. Now p^(2)_{1,1} = 1/4 > 0, so P² is regular, which implies that also P is regular, because there is an n such that P^{2n} has only positive elements.
5) Every path from E1 must eventually pass through E5 and then return to E1 in one step, since p_{5,1} = 1. Going through the possible paths in the tree from E1 we get
P{T1 = 1} = 0,
P{T1 = 2} = 1/4  (the path E1 → E5 → E1),
P{T1 = 3} = (1/4)( 1/3 + 1/2 + 1 ) = 11/24  (through E2, E3 or E4 and then E5),
P{T1 = 4} = (1/4)( (1/3)(1/2) + (1/3)·1 + (1/2)·1 ) = (1/4)( 1/6 + 1/3 + 1/2 ) = 1/4,
P{T1 = 5} = (1/4)(1/3)(1/2)·1·1 = 1/24  (the path E1 → E2 → E3 → E4 → E5 → E1).
The mean is
E{T1} = 2·(1/4) + 3·(11/24) + 4·(1/4) + 5·(1/24) = (12 + 33 + 24 + 5)/24 = 74/24 = 37/12.

Example 3.30 Given a Markov chain of 4 states and the transition probabilities
p_{11} = 1 − a,  p_{12} = a,  p_{23} = p_{34} = p_{21} = p_{32} = 1/2,  p_{43} = 1,  p_{ij} = 0 otherwise.
(Here a is a constant in the interval [0, 1].)

1. Find the stochastic matrix P.
2. Prove that the Markov chain is irreducible for a ∈ ]0, 1], though not irreducible for a = 0.
3. Find for a ∈ [0, 1] the invariant probability vector.
4. Find all values of a for which the Markov chain is regular.

Assume in the following that a = 0.

5. Prove that p^(3)_{i1} ≥ 1/4, i = 1, 2, 3, 4.
6. Let p^(0) = ( p^(0)_1, p^(0)_2, p^(0)_3, p^(0)_4 ) be any initial distribution. Prove that
p^(n+3)_2 + p^(n+3)_3 + p^(n+3)_4 ≤ (3/4)( p^(n)_2 + p^(n)_3 + p^(n)_4 ) for all n ∈ N,
and then find lim_{n→∞} p^(0) P^n.

1) The stochastic matrix is
$$P = \begin{pmatrix}
1-a & a & 0 & 0 \\
1/2 & 0 & 1/2 & 0 \\
0 & 1/2 & 0 & 1/2 \\
0 & 0 & 1 & 0
\end{pmatrix}.$$

2) If a > 0, then E1 → E2 → E3 → E4 → E3 → E2 → E1, so the Markov chain is irreducible. If a = 0, then E1 is an absorbing state, so the Markov chain is not irreducible for a = 0.
3) The equations of the invariant probability vector are
g1 = (1 − a)g1 + (1/2)g2, i.e. g2 = 2a g1,
g2 = a g1 + (1/2)g3, i.e. g3 = 2a g1,
g3 = (1/2)g2 + g4,
g4 = (1/2)g3, i.e. g4 = a g1.
It follows from
1 = g1 + g2 + g3 + g4 = g1(1 + 2a + 2a + a) = (1 + 5a)g1
that g1 = 1/(1 + 5a), and
g = (1/(1 + 5a)) (1, 2a, 2a, a).
(As a check, for a = 1 this is the stationary distribution (1/6)(1, 2, 2, 1) of the random walk with reflecting barriers.)

4) Since the Markov chain must be irreducible, we must at least require that a ∈ ]0, 1]. If a < 1, then p_{11} = 1 − a > 0, so the Markov chain is regular for a ∈ ]0, 1[. If a = 1, then
$$P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 1 & 0 \end{pmatrix},$$
and it is obvious (by the parity of the states) that P^n contains zeros for every n ∈ N. Hence the Markov chain is not regular for a = 1. Summing up, the Markov chain is regular if and only if a ∈ ]0, 1[.

5) If a = 0, then
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 1 & 0 \end{pmatrix}, \quad
P^2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 1/4 & 0 & 1/4 \\ 1/4 & 0 & 3/4 & 0 \\ 0 & 1/2 & 0 & 1/2 \end{pmatrix}, \quad
P^3 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 5/8 & 0 & 3/8 & 0 \\ 1/4 & 3/8 & 0 & 3/8 \\ 1/4 & 0 & 3/4 & 0 \end{pmatrix}.$$
(It is not necessary to compute the full matrix P³; however, the alternative proof is just as long as the above.) It follows that p^(3)_{i1} ≥ 1/4 for i = 1, 2, 3, 4.

6) By 5., p^(n+3)_1 = Σ_i p^(n)_i p^(3)_{i1} ≥ p^(n)_1 + (1/4)( p^(n)_2 + p^(n)_3 + p^(n)_4 ), hence
p^(n+3)_2 + p^(n+3)_3 + p^(n+3)_4 = 1 − p^(n+3)_1 ≤ (1 − 1/4)( p^(n)_2 + p^(n)_3 + p^(n)_4 ).
Hence by recursion,
0 ≤ p^(n+3k)_2 + p^(n+3k)_3 + p^(n+3k)_4 ≤ (3/4)^k ( p^(n)_2 + p^(n)_3 + p^(n)_4 ) → 0 for k → ∞,
so each of lim_{n→∞} p^(n)_2, lim_{n→∞} p^(n)_3 and lim_{n→∞} p^(n)_4 equals 0,
and
lim_{n→∞} p^(n)_1 = 1 − lim_{n→∞} ( p^(n)_2 + p^(n)_3 + p^(n)_4 ) = 1 − 0 = 1.
Therefore,
lim_{n→∞} p^(0) P^n = (1, 0, 0, 0).

Example 3.31 Given a Markov chain of 5 states E0, E1, E2, E3 and E4, and transition probabilities
p_{0,1} = p_{0,2} = p_{0,3} = p_{0,4} = 1/4,  p_{1,1} = p_{2,2} = p_{3,3} = p_{4,4} = 3/4,  p_{1,0} = p_{2,1} = p_{3,2} = p_{4,3} = 1/4,  p_{i,j} = 0 otherwise.

1. Find the stochastic matrix.
2. Prove that the Markov chain is irreducible.
3. Prove that the Markov chain is regular.
4. Find the invariant probability vector.

At time t = 0 the process is in state E1. We denote by T1 the random variable which indicates the time when the process for the first time is in state E0.

5. Find P{T1 = k}, k ∈ N, and the mean of T1 (i.e. the expected time of getting from E1 to E0).
6. Find for i = 2, 3, 4 the expected time for getting from Ei to E0.

When t = 0, the process is in state E0. Let T denote the time of the first return to E0.

7. Find the mean of T.

1) The stochastic matrix is
$$P = \begin{pmatrix}
0 & 1/4 & 1/4 & 1/4 & 1/4 \\
1/4 & 3/4 & 0 & 0 & 0 \\
0 & 1/4 & 3/4 & 0 & 0 \\
0 & 0 & 1/4 & 3/4 & 0 \\
0 & 0 & 0 & 1/4 & 3/4
\end{pmatrix}.$$

2) It follows from the diagram E0 → E4 → E3 → E2 → E1 → E0 that the Markov chain is irreducible.
3) Since e.g. p_{2,2} = 3/4 > 0, and the Markov chain is irreducible, it is also regular.

4) The equations of the invariant probability vector are
g0 = (1/4)g1, thus g1 = 4g0,
g1 = (1/4)g0 + (3/4)g1 + (1/4)g2, thus g2 = 4g1 − g0 − 3g1 = g1 − g0 = 3g0,
g2 = (1/4)g0 + (3/4)g2 + (1/4)g3, thus g3 = 4g2 − g0 − 3g2 = g2 − g0 = 2g0,
g3 = (1/4)g0 + (3/4)g3 + (1/4)g4,
g4 = (1/4)g0 + (3/4)g4, thus g4 = g0,
so
1 = g0 + g1 + g2 + g3 + g4 = g0(1 + 4 + 3 + 2 + 1) = 11 g0,
and
g = (g0, g1, g2, g3, g4) = (1/11)(1, 4, 3, 2, 1).

5) It follows from the matrix that
P{T1 = 1} = 1/4 and P{T1 = 2} = (1/4)·(3/4),
and in general,
P{T1 = k} = (1/4)(3/4)^{k−1}, with E{T1} = (1/4) · 1/(1 − 3/4)² = 4.

6) Let T̃i denote the random variable which gives the time when the process is at state E_{i−1} for the first time, when we start at Ei. Then
P{T̃i = k} = (1/4)(3/4)^{k−1} with E{T̃i} = 4.
Let Ti denote the time when the process for the first time is in state E0, when we start at Ei. Then
Ti = T̃i + T̃_{i−1} + ⋯ + T̃1,
hence E{Ti} = 4i, i = 1, 2, 3, 4.
7) In the first step we get to one of the states E1, E2, E3, E4, each with probability 1/4, hence
E{T} = 1 + (1/4)( E{T1} + E{T2} + E{T3} + E{T4} ) = 1 + (1/4)·4(1 + 2 + 3 + 4) = 11.

Example 3.32 Given a Markov chain of 7 states E0, E1, E2, E3, E4, E5, E6, and the transition probabilities
p_{0,i} = 1/6, i = 1, 2, ..., 6;  p_{i,0} = r, i = 1, 2, ..., 6;  p_{i,i+1} = p, i = 1, 2, ..., 5;  p_{i,i−1} = p, i = 2, 3, ..., 6;  p_{1,6} = p_{6,1} = p;  p_{i,j} = 0 otherwise,
where p ≥ 0, r > 0 and 2p + r = 1.

1. Find the stochastic matrix P.
2. Prove that the Markov chain is irreducible.
3. Prove that the Markov chain is regular if p ∈ ]0, 1/2[, but not regular for p = 0.
4. Find the value of r for which the invariant probability vector g is given by
g = ( 1/7, 1/7, 1/7, 1/7, 1/7, 1/7, 1/7 ).

At time t = 0 the process is in state E0. Let T0 denote the time of the first return to E0.

5. Find for every value of p ∈ [0, 1/2[ the probabilities P{T0 = k}, k = 2, 3, 4, ....
6. Find the mean of T0.

1) The stochastic matrix is
$$P = \begin{pmatrix}
0 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \\
r & 0 & p & 0 & 0 & 0 & p \\
r & p & 0 & p & 0 & 0 & 0 \\
r & 0 & p & 0 & p & 0 & 0 \\
r & 0 & 0 & p & 0 & p & 0 \\
r & 0 & 0 & 0 & p & 0 & p \\
r & p & 0 & 0 & 0 & p & 0
\end{pmatrix}.$$

2) It follows from the first row that E0 → Ei for all i = 1, ..., 6, and from the first column that Ei → E0 for all i = 1, ..., 6. Thus E0 ↔ Ei for all i, and via E0 we can always get from any Ei to any other Ej, so the Markov chain is irreducible.
3) If 0 < p < 1/2, then clearly p^(2)_{ij} > 0 for all (i, j), and the Markov chain is regular. If p = 0, then the process alternates deterministically between E0 and the set {E1, ..., E6}, so p^(n)_{0,0} = 0 for n odd, and p^(n)_{0,j} = 0 for j ≥ 1 and n even. Every P^n therefore contains zeros, and the Markov chain is not regular for p = 0.

4) If the vector g = ( 1/7, ..., 1/7 ) satisfies g = gP, then the first coordinate gives 1/7 = (1/7)·6r, so r = 1/6 is the only possibility. In this case p = (1 − r)/2 = 5/12, and the matrix is then doubly stochastic, so the given vector is indeed an invariant probability vector.

5) Due to the extreme symmetry we may lump the states into the single state E = E1 ∪ E2 ∪ ⋯ ∪ E6. Then we have a new stochastic matrix for E0 and E alone,
$$Q = \begin{pmatrix} 0 & 1 \\ r & 2p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1-2p & 2p \end{pmatrix}.$$
It follows that
P{T0 = 2} = 1 − 2p and P{T0 = 3} = (1 − 2p)(2p),
and in general,
P{T0 = k} = (1 − 2p)(2p)^{k−2}, k ≥ 2.

6) The mean is
E{T0} = (1 − 2p) Σ_{k=2}^∞ k (2p)^{k−2} = (1 − 2p) Σ_{k=1}^∞ (k + 1)(2p)^{k−1}
= (1 − 2p)( 1/(1 − 2p)² + 1/(1 − 2p) ) = 1/(1 − 2p) + 1 = 2(1 − p)/(1 − 2p).
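The lumping argument in 5) and the mean in 6) can be checked by simulating the full seven-state chain. The sketch below is an added illustration (the value p = 0.3 and the sample size are arbitrary choices of mine); for p = 0.3 the formula gives E{T0} = 2(1 − p)/(1 − 2p) = 3.5.

```python
import numpy as np

p = 0.3
r = 1 - 2 * p
m = 7
P = np.zeros((m, m))
P[0, 1:] = 1 / 6
for i in range(1, m):
    P[i, 0] = r
    P[i, 1 + (i % 6)] = p          # "forward" neighbour on the ring E1..E6
    P[i, 1 + ((i - 2) % 6)] = p    # "backward" neighbour

rng = np.random.default_rng(1)

def return_time_to_E0():
    state, t = 0, 0
    while True:
        state = rng.choice(m, p=P[state])
        t += 1
        if state == 0:
            return t

times = [return_time_to_E0() for _ in range(20_000)]
print(np.mean(times), 2 * (1 - p) / (1 - 2 * p))   # both close to 3.5
```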
Example 3.33 Given a Markov chain of 5 states E0, E1, E2, E3 and E4 and transition probabilities
p_{0,1} = 1,  p_{1,2} = p_{2,3} = p_{3,4} = 2/3,  p_{1,0} = p_{2,1} = p_{3,2} = 1/3,  p_{4,2} = p_{4,3} = 1/2,  p_{i,j} = 0 otherwise.

1) Find the stochastic matrix P.
2) Prove that the Markov chain is irreducible.
3) Check if the Markov chain is regular.
4) Find the invariant probability vector.
5) Find p^(2)_{2,2}.
6) At time t = 0 the process is in state E2. Find for every n ∈ N the probability that the process is in state E0 at time 2n without in the meantime having been in any of the states E0 or E4.

1) The stochastic matrix is
$$P = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
1/3 & 0 & 2/3 & 0 & 0 \\
0 & 1/3 & 0 & 2/3 & 0 \\
0 & 0 & 1/3 & 0 & 2/3 \\
0 & 0 & 1/2 & 1/2 & 0
\end{pmatrix}.$$

2) We obtain from the oblique diagonals the transitions
E0 → E1 → E2 → E3 → E4 → E3 → E2 → E1 → E0,
so we conclude that the Markov chain is irreducible.

3) We get by a computation
$$P^2 = \begin{pmatrix}
1/3 & 0 & 2/3 & 0 & 0 \\
0 & 5/9 & 0 & 4/9 & 0 \\
1/9 & 0 & 4/9 & 0 & 4/9 \\
0 & 1/9 & 1/3 & 5/9 & 0 \\
0 & 1/6 & 1/6 & 1/3 & 1/3
\end{pmatrix}.$$
The elements of the diagonal of P² are > 0, so we only have to check whether P² is irreducible. We have the transitions
E0 ↔ E2, E1 ↔ E3 and E2 ↔ E4,
so in P² we can get from any "even" state to any other "even" state, and from any "odd" state to any other "odd" state. Since P² also contains the transitions E4 → E1 and E3 → E2, there is a connection in both directions between the "even" and the "odd" states, so P² is irreducible, and hence also regular, because p^(2)_{1,1} > 0. Then P is also regular.

4) The equations of the invariant probability vector are
g0 = (1/3)g1, thus g1 = 3g0,
g1 = g0 + (1/3)g2, thus g2 = 3g1 − 3g0 = 6g0,
g2 = (2/3)g1 + (1/3)g3 + (1/2)g4, thus 6g0 = 2g0 + (1/3)g3 + (1/2)g4,
g3 = (2/3)g2 + (1/2)g4, thus g3 = 4g0 + (1/2)g4,
g4 = (2/3)g3,
so in particular
4g0 = (1/3)g3 + (1/2)g4 = (1/3)g3 + (1/3)g3 = (2/3)g3,
and g3 = 6g0 and g4 = (2/3)g3 = 4g0.
It follows from
1 = g0 + g1 + g2 + g3 + g4 = g0(1 + 3 + 6 + 6 + 4) = 20 g0
that g0 = 1/20 and
g = (g0, g1, g2, g3, g4) = ( 1/20, 3/20, 3/10, 3/10, 1/5 ).

5) According to the computation in 3. we have p^(2)_{22} = 4/9.

6) Starting at E2 we get in the first step either to E1 or to E3, thus neither to E0 nor to E4. It follows from the matrix P² that
P{E0 after 2 steps} = 1/9, P{E2 after 2 steps} = 4/9, P{E4 after 2 steps} = 4/9.
Then the process can be iterated:
P{E0 after 2n steps without passing E0 or E4 in the meantime} = (1/9)(4/9)^{n−1}.
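The taboo probability in 6) can be verified by restricting P to the states {E1, E2, E3} for the first 2n − 1 steps (so that every path touching E0 or E4 is discarded) and adding a final step into E0. The sketch below is an added numerical check, not part of the original solution:

```python
import numpy as np

P = np.array([[0,   1,   0,   0,   0  ],
              [1/3, 0,   2/3, 0,   0  ],
              [0,   1/3, 0,   2/3, 0  ],
              [0,   0,   1/3, 0,   2/3],
              [0,   0,   1/2, 1/2, 0  ]])

S = P[1:4, 1:4]                     # restriction to the allowed states E1, E2, E3
start = np.array([0.0, 1.0, 0.0])   # the process starts in E2
for n in range(1, 6):
    u = start @ np.linalg.matrix_power(S, 2 * n - 1)
    prob = u[0] * P[1, 0]           # final step E1 -> E0 with probability 1/3
    print(n, round(prob, 6), round((1/9) * (4/9) ** (n - 1), 6))   # the columns agree
```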
Example 3.34 Given a Markov chain of 2 states E1 and E2 with the stochastic matrix
$$Q = \begin{pmatrix} 1/4 & 3/4 \\ 1/2 & 1/2 \end{pmatrix}.$$

1. Prove that Q is regular, and find the invariant probability vector.

Another Markov chain with 6 states, E1, E2, E3, E4, E5 and E6, has the stochastic matrix
$$P = \begin{pmatrix}
1/4 & 3/4 & 0 & 0 & 0 & 0 \\
1/2 & 1/2 & 0 & 0 & 0 & 0 \\
0 & 1/3 & 0 & 2/3 & 0 & 0 \\
0 & 0 & 1/3 & 0 & 2/3 & 0 \\
0 & 0 & 0 & 1/3 & 0 & 2/3 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$

2. Prove that P is not irreducible, and find all closed subsets.
3. Find all invariant probability vectors of P.
4. Prove for any initial distribution p^(0) = ( p^(0)_1, ..., p^(0)_6 ) that
p^(n+2)_3 + p^(n+2)_4 + p^(n+2)_5 ≤ (2/3)( p^(n)_3 + p^(n)_4 + p^(n)_5 ), n ∈ N0,
and then prove that
lim_{n→∞} p^(n)_3 = lim_{n→∞} p^(n)_4 = lim_{n→∞} p^(n)_5 = 0.
5. Let the initial distribution be given by q^(0) = (0, 1, 0, 0, 0, 0). Find lim_{n→∞} q^(n).
6. Then let the initial distribution be given by r^(0) = (0, 0, 0, 1, 0, 0). Find lim_{n→∞} r^(n).

1) All elements of Q are > 0, so Q is regular. The equation of the invariant probability vector is
g1 = (1/4)g1 + (1/2)g2, thus g2 = (3/2)g1,
so, since g1 + g2 = 1, the invariant probability vector is g = ( 2/5, 3/5 ).
2) Obviously, {E1, E2} and {E6} are closed subsets. Since from E5 we can get to both E4 and E6, from E4 to both E3 and E5, and from E3 to both E2 and E4, there are – with the exception of the union {E1, E2, E6} – no other closed subsets. Since we have non-trivial closed subsets, the Markov chain is not irreducible.

3) The equations of the invariant probability vectors are
g1 = (1/4)g1 + (1/2)g2,
g2 = (3/4)g1 + (1/2)g2 + (1/3)g3,
g3 = (1/3)g4,
g4 = (2/3)g3 + (1/3)g5,
g5 = (2/3)g4,
g6 = (2/3)g5 + g6.
When we solve these equations backwards, we obtain successively g5 = 0, g4 = 0 and g3 = 0. The closed subset {E1, E2} corresponds to the matrix Q, so the invariant probability vectors are
g = ( (2/5)x, (3/5)x, 0, 0, 0, 1 − x ), 0 ≤ x ≤ 1.

4) It follows from
$$P^2 = \begin{pmatrix}
7/16 & 9/16 & 0 & 0 & 0 & 0 \\
3/8 & 5/8 & 0 & 0 & 0 & 0 \\
1/6 & 1/6 & 2/9 & 0 & 4/9 & 0 \\
0 & 1/9 & 0 & 4/9 & 0 & 4/9 \\
0 & 0 & 1/9 & 0 & 2/9 & 2/3 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$
that
p^(n+2)_3 + p^(n+2)_4 + p^(n+2)_5 = ( 2/9 + 4/9 )p^(n)_3 + (4/9)p^(n)_4 + ( 1/9 + 2/9 )p^(n)_5 ≤ (2/3)( p^(n)_3 + p^(n)_4 + p^(n)_5 ).
This estimate gives by recursion
0 ≤ p^(n+2k)_3 + p^(n+2k)_4 + p^(n+2k)_5 ≤ (2/3)^k ( p^(n)_3 + p^(n)_4 + p^(n)_5 ) → 0 for k → ∞.
Since all p^(n)_i ≥ 0, we get
lim_{n→∞} p^(n)_3 = lim_{n→∞} p^(n)_4 = lim_{n→∞} p^(n)_5 = 0.

5) If q^(0) = (0, 1, 0, 0, 0, 0), we start (and stay) in the closed set {E1, E2}, which is described by the matrix Q from 1. Then
lim_{n→∞} q^(n) = ( 2/5, 3/5, 0, 0, 0, 0 ).
6) If r^(0) = (0, 0, 0, 1, 0, 0), then we start in E4. Hence by 4.,
r^(2) = ( 0, 1/9, 0, 4/9, 0, 4/9 ),
so the mass 1/9 has reached the closed set {E1, E2}, and 4/9 has reached the closed set {E6}. The rest, 4/9, lies again in E4, and the process is repeated. In total, of the mass 5/9 leaving E4 in each double step, 1/5 goes to {E1, E2} and 4/5 goes to E6. Hence we conclude that
lim_{n→∞} r^(n) = ( 2/25, 3/25, 0, 0, 0, 4/5 ).

Alternatively we may apply the theory of the ruin problem. We first re-number the states as
F0 = {E1, E2}, F1 = E3, F2 = E4, F3 = E5 and F4 = E6.
Then we get the diagram
F0 ← F1 ↔ F2 ↔ F3 → F4.
Starting at E4 (= F2), the ruin problem of reaching F0 = {E1, E2} before F4 = E6 has the parameters k = 2, N = 4, q = 1/3, p = 2/3, hence the probability is
a2 = ( (1/2)² − (1/2)⁴ ) / ( 1 − (1/2)⁴ ) = ( 1/4 − 1/16 ) / ( 1 − 1/16 ) = 3/15 = 1/5.
Once one has arrived at F0 = {E1, E2}, one stays there forever, and we approach the stationary distribution from 1. This gives
r^(n) → ( (1/5)·(2/5), (1/5)·(3/5), 0, 0, 0, 4/5 ) = ( 2/25, 3/25, 0, 0, 0, 4/5 ).
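Both derivations of the limit of r^(n) can be confirmed by brute force iteration; the following short sketch is an added illustration, not part of the original solution.

```python
import numpy as np

P = np.array([[1/4, 3/4, 0,   0,   0,   0  ],
              [1/2, 1/2, 0,   0,   0,   0  ],
              [0,   1/3, 0,   2/3, 0,   0  ],
              [0,   0,   1/3, 0,   2/3, 0  ],
              [0,   0,   0,   1/3, 0,   2/3],
              [0,   0,   0,   0,   0,   1  ]])

r = np.array([0.0, 0, 0, 1, 0, 0])   # start in E4
for _ in range(500):
    r = r @ P
print(np.round(r, 6))                 # -> [0.08, 0.12, 0, 0, 0, 0.8] = (2/25, 3/25, 0, 0, 0, 4/5)
```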
Example 3.35 Given a Markov chain of 5 states E1, E2, E3, E4 and E5, and transition probabilities
p_{1,1} = a,  p_{1,2} = 1 − a,  p_{3,1} = p_{3,5} = 1/2,  p_{2,3} = p_{4,3} = p_{5,4} = 1,  p_{i,j} = 0 otherwise.
(Here a is a constant in the interval [0, 1].)

1) Find the stochastic matrix P.
2) Prove that the Markov chain is irreducible for a < 1, and not irreducible for a = 1.
3) Find for every a the invariant probability vector.
4) Find the values of a for which the Markov chain is regular.
5) At time t = 0 the process is in state E5. Denote by T the time when the process for the first time is in state E1. Find the distribution of the random variable T.
6) Find the mean of T.

1) The stochastic matrix is
$$P = \begin{pmatrix}
a & 1-a & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
1/2 & 0 & 0 & 0 & 1/2 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0
\end{pmatrix}.$$
2) For a < 1 we get the diagrams
E1 → E2 → E3 → E1 and E3 → E5 → E4 → E3,
so we can get from any state Ei to any other state Ej, thus the Markov chain is irreducible. If a = 1, then E1 is absorbing, and the Markov chain is not irreducible.

3) The equations of the invariant probability vectors are
g1 = a g1 + (1/2)g3, thus g3 = 2(1 − a)g1,
g2 = (1 − a)g1,
g3 = g2 + g4,
g4 = g5,
g5 = (1/2)g3, thus g4 = g5 = (1 − a)g1.
We now get
1 = g1 + g2 + g3 + g4 + g5 = g1{ 1 + (1−a) + 2(1−a) + (1−a) + (1−a) } = g1(6 − 5a).
Since 6 − 5a > 0 for a ∈ [0, 1], we get g1 = 1/(6 − 5a), and
g = (1/(6 − 5a)) ( 1, 1 − a, 2(1 − a), 1 − a, 1 − a ).

4) Now, P is irreducible for a ∈ [0, 1[, and p_{1,1} = a > 0 for a ∈ ]0, 1[, hence the Markov chain is regular for a ∈ ]0, 1[. When a = 0, a computation gives
$$P^2 = \begin{pmatrix}
0 & 0 & 1 & 0 & 0 \\
1/2 & 0 & 0 & 0 & 1/2 \\
0 & 1/2 & 0 & 1/2 & 0 \\
1/2 & 0 & 0 & 0 & 1/2 \\
0 & 0 & 1 & 0 & 0
\end{pmatrix}, \qquad
P^3 = \begin{pmatrix}
1/2 & 0 & 0 & 0 & 1/2 \\
0 & 1/2 & 0 & 1/2 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 1/2 & 0 & 1/2 & 0 \\
1/2 & 0 & 0 & 0 & 1/2
\end{pmatrix}.$$
It follows that E3 is an absorbing state for P³, hence the Markov chain corresponding to P³ is not irreducible. In particular, P is not regular for a = 0.

5) We derive from the diagram
E5 → E4 → E3, followed from E3 by E1 with probability 1/2 or by E5 with probability 1/2,
that P{T = 1} = P{T = 2} = 0 and P{T = 3} = 1/2, and the process is then repeated from E5. Hence
P{T = 3k} = (1/2)^k, k ∈ N, and P{T = j} = 0 when j is not a multiple of 3.

6) The mean is
E{T} = Σ_{k=1}^∞ 3k (1/2)^k = (3/2) Σ_{k=1}^∞ k (1/2)^{k−1} = (3/2) · 1/(1 − 1/2)² = 6.

Example 3.36 Given a Markov chain of 5 states E0, E1, E2, E3 and E4, and transition probabilities
p_{0,i} = 1/4, i = 1, 2, 3, 4;  p_{i,i−1} = 1/i, i = 1, 2, 3, 4;  p_{i,i} = (i−1)/i, i = 2, 3, 4;  p_{i,j} = 0 otherwise.

1. Find the stochastic matrix P.
2. Prove that the Markov chain is irreducible.
3. Check if the Markov chain is regular.
4. Find the invariant probability vector.

At time t = 0 the process is in state Ei, where i is one of the numbers 1, 2, 3, 4. Let Ti denote the random variable which indicates the time when the process for the first time is in state E_{i−1}.

5. Find for i = 1, 2, 3, 4 the probabilities P{Ti = k}, k ∈ N, and find the mean of Ti (i.e. the expected time for getting from Ei to E_{i−1}).
6. Find for i = 1, 2, 3, 4 the expected time for getting from Ei to E0.

Let the process at time t = 0 be in state E0. Denote by T the time of the first return to E0.

7. Find the mean of T.

1) The stochastic matrix is
$$P = \begin{pmatrix}
0 & 1/4 & 1/4 & 1/4 & 1/4 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1/2 & 1/2 & 0 & 0 \\
0 & 0 & 1/3 & 2/3 & 0 \\
0 & 0 & 0 & 1/4 & 3/4
\end{pmatrix}.$$
2) It follows from the diagram E4 → E3 → E2 → E1 → E0 → E4 that the Markov chain is irreducible.

3) Since e.g. p_{2,2} = 1/2 > 0, and the Markov chain is irreducible, it is also regular.

4) The equations of the invariant probability vectors are
g0 = g1, thus g1 = g0,
g1 = (1/4)g0 + (1/2)g2, thus g2 = (3/2)g0,
g2 = (1/4)g0 + (1/2)g2 + (1/3)g3, thus g3 = 3( (1/2)g2 − (1/4)g0 ) = (3/2)g0,
g3 = (1/4)g0 + (2/3)g3 + (1/4)g4,
g4 = (1/4)g0 + (3/4)g4, thus g4 = g0.
Hence
1 = g0 + g1 + g2 + g3 + g4 = g0( 1 + 1 + 3/2 + 3/2 + 1 ) = 6 g0,
from which g0 = 1/6 and
g = (g0, g1, g2, g3, g4) = ( 1/6, 1/6, 1/4, 1/4, 1/6 ).

5) Clearly, P{T1 = 1} = 1, P{T1 = k} = 0 for k ≥ 2, and E{T1} = 1. We get for T2,
P{T2 = k} = (1/2)^k, k ∈ N, and E{T2} = 2.
We get for T3,
P{T3 = k} = (1/3)(2/3)^{k−1}, k ∈ N, and E{T3} = 3.
We get for T4,
P{T4 = k} = (1/4)(3/4)^{k−1}, k ∈ N, and E{T4} = 4.

6) Let T̃i denote the time when the process of initial state Ei for the first time is in E0. Then
E{T̃1} = E{T1} = 1,
E{T̃2} = E{T2} + E{T̃1} = 2 + 1 = 3,
E{T̃3} = E{T3} + E{T̃2} = 3 + 3 = 6,
E{T̃4} = E{T4} + E{T̃3} = 4 + 6 = 10.
7) In the first step we get to one of the states E1, E2, E3, E4, each with probability 1/4. From there the process must return to E0, so
E{T} = 1 + (1/4) (E{T̃1} + E{T̃2} + E{T̃3} + E{T̃4}) = 1 + (1/4)(1 + 3 + 6 + 10) = 1 + (1/4) · 20 = 6.
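The computations in Example 3.36 are easy to cross-check numerically. The sketch below is not part of the original text; it assumes numpy is available. It solves gP = g together with Σ gi = 1 and confirms that the mean return time to E0 equals 1/g0 = 6, in agreement with 7).

```python
import numpy as np

# Transition matrix of Example 3.36 (states E0, ..., E4).
P = np.array([
    [0,   1/4, 1/4, 1/4, 1/4],
    [1,   0,   0,   0,   0  ],
    [0,   1/2, 1/2, 0,   0  ],
    [0,   0,   1/3, 2/3, 0  ],
    [0,   0,   0,   1/4, 3/4],
])

# Solve the singular system g(P - I) = 0 together with sum(g) = 1
# by stacking the normalisation onto the equations and using least squares.
A = np.vstack([P.T - np.eye(5), np.ones(5)])
b = np.append(np.zeros(5), 1.0)
g, *_ = np.linalg.lstsq(A, b, rcond=None)

print(g)         # approx. (1/6, 1/6, 1/4, 1/4, 1/6)
print(1 / g[0])  # approx. 6, the mean return time to E0
```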
  • 93. Download free ebooks at bookboon.com Stochastic Processes 1 93 3. Markov chains Example 3.37 Given a Markov chain of 4 states E1, E2, E3 and E4, and transition probabilities p1,1 = a, p1,2 = 1 − a, p2,3 = p3,2 = 2 3 , p2,1 = p3,4 = 1 3 , p4,3 = 1, pi,j = 0 otherwise. (Here a is a constant in the interval [0, 1]). 1) Find the stochastic matrix P. 2) Find the values of a, for which the Markov chain is irreducible. 3) Find the values of a, for which the Markov chain is regular. 4) Find for every a the invariant probability vector. 5) Find for every a the limit limn→∞ p (2n) 22 . 6) Put a = 1, and assume that the process at time t = 0 is in state E2. Find the probability that the process at any later time reaches state E4. 1) The stochastic matrix is P = ⎛ ⎜ ⎜ ⎝ a 1 − a 0 0 1 3 0 2 3 0 0 2 3 0 1 3 0 0 1 0 ⎞ ⎟ ⎟ ⎠ . 2) When a ∈ [0, 1[, we have the diagram E1 → E2 → E3 → E4 → E3 → E2 → E1, and we conclude that the Markov chain is irreducible. If a = 1, then E1 is absorbing, and the Markov chain is not irreducible for a = 1. 3) When we check the possible regularity we shall only consider a ∈ [0, 1[. If a ∈ ]0, 1[, then p1,1 = a 0, and the Markov chain is regular for a ∈ ]0, 1[. If a = 0, then every second element of Pn is 0 for every n ∈ N, thus the Markov chain is not regular for a = 0. 4) The equations of the invariante probability vector are g1 = a g1 + 1 3 g2, thus g2 = 3(1 − a)g1, g2 = (1 − a)g1 + 2 3 g3, thus g3 = 3 2 (3 − 3a − 1 + a)g1 = 3(1 − a)g1, g3 = 2 3 g2 + g4, thus g4 = (1 − a)g1. It follows from 1 = g1 + g2 + g3 + g4 = g1 {1 + 3(1 − a) + 3(1 − a) + (1 − a)} = (8 − 7a)g1 and 7a ≤ 7 8 that g1 = 1 8 − 7a ≤ 1, hence g = 1 8 − 7a (1, 3(1 − a), 3(1 − a), 1 − a).
  • 94. Download free ebooks at bookboon.com Stochastic Processes 1 94 3. Markov chains 5) If a ∈ ]0, 1[, the Markov chain is regular, so Pn konverges, hence P(2n) also converges towards G, where each row of g is the invariant probability vector found in 4.. Hence lim n→∞ p (2n) 2,2 = g2 = 3(1 − a) 8 − 7a) . If a = 0, the Markov chain is irreducible, but not regular. Then compute P2 = ⎛ ⎜ ⎜ ⎝ a 1 − a 0 0 1 3 0 2 3 0 0 2 3 0 1 3 0 0 1 0 ⎞ ⎟ ⎟ ⎠ ⎛ ⎜ ⎜ ⎝ a 1 − a 0 0 1 3 0 2 3 0 0 2 3 0 1 3 0 0 1 0 ⎞ ⎟ ⎟ ⎠ = ⎛ ⎜ ⎜ ⎝ 1 3 0 2 3 0 0 7 9 0 2 9 4 5 0 5 9 0 0 2 3 0 1 3 ⎞ ⎟ ⎟ ⎠ . It follows that {E2, E4} is a closed system. The corresponding stochastic sub-matrix Q = 7 9 2 9 2 3 1 3 is regular, and the equations of the invariant probability vector are g2 = 7 9 g2 + 2 3 g4, g4 = 2 9 g2 + 1 3 g4, thus g2 = 3g4, hence (g2, g4) = 3 4 , 1 4 , and lim n→∞ p (2n) 2,2 = 3 4 , (and not 3 8 , which we get by inserting a = 0 into the formula of 4.. This result is in agreement with the theoretical result, because p (2n+1) 2,2 = 0, so 1 2n 2n i=1 P = 1 2n n i=1 P2i + 1 2n n i=1 P2i−1 → G for n → ∞. If a = 1, then P2 = ⎛ ⎜ ⎜ ⎜ ⎝ 1 0 0 0 1 3 0 2 3 0 0 2 3 0 1 3 0 0 1 0 ⎞ ⎟ ⎟ ⎟ ⎠ = ⎛ ⎜ ⎜ ⎜ ⎝ 1 0 0 0 1 3 0 2 3 0 0 2 3 0 1 3 0 0 1 0 ⎞ ⎟ ⎟ ⎟ ⎠ = ⎛ ⎜ ⎜ ⎝ 1 0 0 0 1 3 4 9 0 2 9 2 9 0 7 9 0 0 2 3 0 1 3 ⎞ ⎟ ⎟ ⎠ . Thus g (n+2) 2 + g (n+2) 3 + g (n+2) 4 ≤ 7 9 g (n) 2 + g (n) 3 + g (n) 4 , and hence g (2n) 2 → 0 for n → ∞, and in particular, p (2n) 2,2 → 0. 6) Let a = 1. If the proces at time t = 0 starts in E2, we get P{T = 1} = 0. If t = 2, then P{T = 2} = 2 9 , while 4 9 of the “mass” lies in E2, and 1 3 is “lost” in the absorbing state E1. Thus P{T = 2k + 1} = 0 for k ∈ N0,
and
P{T = 2k} = (2/9) · (4/9)^{k−1} for k ∈ N.
Finally, by a summation, the wanted probability is
Σ_{k=1}^∞ (2/9)(4/9)^{k−1} = (2/9) / (1 − 4/9) = 2/5.

Example 3.38 Given a Markov chain of 5 states E0, E1, E2, E3 and E4, and transition probabilities
pi,i = 4/5, i = 1, 2, 3, 4,  pi,i−1 = 1/5, i = 1, 2, 3, 4,  p0,2 = p0,4 = a, p0,0 = 1 − 2a,  pi,j = 0 otherwise.
Here a is a constant in the interval [0, 1/2].
1. Find the stochastic matrix P.
2. Find the values of a, for which the Markov chain is irreducible.
3. Find for every a the invariant probability vector.
At time t = 0 the process is in state Ei, where i is one of the numbers 1, 2, 3, 4. Let Ti denote the random variable which indicates the time when the process for the first time is in state E0.
4. Find P{T2 = k}, k = 2, 3, 4, . . . , and the mean of T2 (i.e. the expected time for getting from E2 to E0).
5. Find the mean of T4.
Now put a = 1/2, and assume that the process at time t = 0 is in state E0. Let T denote the time of the first return to E0.
6. Find the mean of T.
1) The stochastic matrix is
P =
[ 1 − 2a  0    a    0    a   ]
[ 1/5     4/5  0    0    0   ]
[ 0       1/5  4/5  0    0   ]
[ 0       0    1/5  4/5  0   ]
[ 0       0    0    1/5  4/5 ]
2) When a = 0, we see that E0 is absorbing, and the Markov chain is not irreducible. For 0 < a ≤ 1/2 we get the diagram
E0 → E4 → E3 → E2 → E1 → E0,
proving that the Markov chain is irreducible, and even regular, because e.g. p1,1 = 4/5 > 0.
3) The equations of the invariant probability vector are
g0 = (1 − 2a)g0 + (1/5)g1, thus g1 = 10a g0,
g1 = (4/5)g1 + (1/5)g2, thus g2 = g1 = 10a g0,
g2 = a g0 + (4/5)g2 + (1/5)g3, thus g3 = 10a g0 − 5a g0 = 5a g0,
g3 = (4/5)g3 + (1/5)g4, thus g4 = g3 = 5a g0,
g4 = a g0 + (4/5)g4, thus g4 = 5a g0.
It follows that
1 = g0 + g1 + g2 + g3 + g4 = g0 (1 + 10a + 10a + 5a + 5a) = (1 + 30a) g0,
thus g0 = 1/(1 + 30a), and therefore
g = (g0, g1, g2, g3, g4) = 1/(1 + 30a) · (1, 10a, 10a, 5a, 5a).
4) Let T̃i denote the random variable which indicates the first time the process is in state Ei−1, when it starts in Ei. Then
P{T̃i = k} = (1/5) · (4/5)^{k−1}, k ∈ N, with E{T̃i} = 5.
  • 97. Download free ebooks at bookboon.com Stochastic Processes 1 97 3. Markov chains It follows from T2 = T̃2 + T̃1 for k ≥ 2 that P {T2 = k} = k−1 j=1 P ' T̃2 = j ( · P ' T̃1 = k − j ( = k−1 j=1 1 5 4 5 j−1 · 1 5 4 5 k−j−1 = 1 25 (k − 1) 4 5 k−2 , k ≥ 2. The mean is E {T2} = E ' T̃2 ( + E ' T̃1 ( = 5 + 5 = 10. Alternatively, E {T2} = 1 25 ∞ k=2 k(k − 1) 4 5 k−2 = 1 25 · 2! 1 − 4 5 3 = 10. 5) The mean of T4 is E {T4} = E ' T̃4 ( + E ' T̃3 ( + E ' T̃2 ( + E ' T̃1 ( = 4 · 5 = 20. 6) Let a = 1 2 , and assume that we at time t = 0 are in state E0. Then we are to time t = 1 either in E2 or in E4, each of probability 1 2 . This gives E{T} = 1 + 1 2 E {T2} + 1 2 E {T4} = 1 + 1 2 · 10 + 1 2 · 20 = 16.
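The value E{T} = 16 can also be checked by simulation. The following sketch is my own illustration, not part of the text; it only uses Python's standard library, and the helper name return_time is hypothetical. It runs the chain of Example 3.38 with a = 1/2 from E0 until the first return to E0 and averages the return times.

```python
import random

a = 0.5
# Transition matrix of Example 3.38 with a = 1/2 (rows E0, ..., E4).
P = [
    [1 - 2*a, 0,   a,   0,   a  ],
    [1/5,     4/5, 0,   0,   0  ],
    [0,       1/5, 4/5, 0,   0  ],
    [0,       0,   1/5, 4/5, 0  ],
    [0,       0,   0,   1/5, 4/5],
]

def return_time():
    """Simulate one return time to E0, starting in E0."""
    state, steps = 0, 0
    while True:
        state = random.choices(range(5), weights=P[state])[0]
        steps += 1
        if state == 0:
            return steps

random.seed(1)
samples = [return_time() for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to E{T} = 16
```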
  • 98. Download free ebooks at bookboon.com Stochastic Processes 1 98 3. Markov chains Example 3.39 Given a Markov chain of the states E1, E2, . . . , Em. We assume that {E1, E2, . . . , Er} is a closed subset C, and that we from any other of the states Er+1, Er+2, . . . , Em, have positive probability eventually of reaching the closed subset. Thus, the stochastic matrix looks like P = ⎛ ⎝ S | 0 − + − R | Q ⎞ ⎠ , where S is an r × r stochastic matrix, Q is an (m − r) × (m − r)-matrix, 0 is an r × (m − r)-matrix consisting of zeros, and R is an (m − r) × r-matrix. 1. Prove for every pair (i, j) with r + 1 ≤ i, j ≤ m that p (n) ij → 0 for n → ∞, and conclude that Qn → 0 for n → ∞. 2. Prove that there are constants b 0, c ∈ ]0, 1[, such that p (n) ij ≤ b cn for every pair (i, j) with r + 1 ≤ i, j ≤ m and every n ∈ N, and conclude that for every (i, j) as above, the infinite series ∞ n=0 p (n) ij is convergent. 3. Prove that the matrix I − Q has the reciprocal matrix N = ∞ k=0 Qk . We define for every j ∈ {r + 1, . . . , m} a random variable Xj by Xj = k, if the process is in state Ej in total k times. For i ∈ {r + 1, . . . , m} we let Ei {Xj} denote the expected number of times the process is in state Ej, if the processen at time t = 0 is in state Ei. 4. Prove that Ei {Xj} = ∞ n=0 p (n) ij . 5. Prove that Ei {Xj} can be found as the (i, j)-th element of the matrix N = (I − Q)−1 . We denote for i ∈ {r + 1, . . . , m} and j ∈ {1, 2, . . . , r} by bij the probability that the process by starting in Ei reachers state Ej before any of the states in C. 6. Prove that bij is equal to the (i, j)-th element of the matrix B = N R. 1) For every fixed i ∈ {r + 1, . . . , m} there exists an ni, such that the i-th row in Pni contains elements. Then m j=r+1 p (m+ni) i,j ≤ αi m j=r+1 p (n) i,j ≤ αi, where 0 ≤ αi 1, hence m j=r+1 p (n+s ni) i,j ≤ αs i → 0 for s → ∞. We conclude that p (n) i,j → 0 for n → ∞ and i, j = r+1, . . . , m. This implies precisely that Qn → 0 for n → ∞.
2) It follows from 1. that
p^(n+s·ni)_{i,j} ≤ α_i^s = (α_i^{1/ni})^{s·ni} = α_i^{−n/ni} (α_i^{1/ni})^{n+s·ni} ≤ (1/α_i)(α_i^{1/ni})^{n+s·ni},
(with a trivial modification for α_i = 0). If we choose b_i = 1/α_i and c_i = α_i^{1/ni}, then we get the estimate
p^(n)_{i,j} ≤ b_i · c_i^n.
Then choose b = max_i b_i > 0 and c = max_i c_i < 1, and we get the inequality
Σ_{n=0}^∞ p^(n)_{i,j} ≤ b Σ_{n=0}^∞ c^n = b/(1 − c) < ∞.
3) If we put N^(n) = Σ_{k=0}^n Q^k, then
N^(n)(I − Q) = (I − Q)N^(n) = I − Q^{n+1} → I for n → ∞,
hence (I − Q)^{−1} = N = Σ_{k=0}^∞ Q^k.
4) Since p^(n)_{i,j} is the probability that we are in state Ej after n steps when we start in Ei, the expected number of times the process is in state Ej is the sum of all these probabilities, thus
Ei{Xj} = Σ_{n=0}^∞ p^(n)_{i,j}.
5) The claim follows from the fact that Ei{Xj} = Σ_{n=0}^∞ p^(n)_{i,j} is the (i, j)-th element of the matrix Σ_{n=0}^∞ Q^n = N = (I − Q)^{−1}.
6) We can only reach a state Ej, j ∈ {1, 2, . . . , r}, through the matrix R, i.e. through one of the possibilities Q^0 R, Q^1 R, . . . , Q^n R, . . . . Adding these gives precisely B = N R, where the (i, j)-th element bij is the probability that we end in Ej ∈ C without (by the construction) having been in any state of C earlier.
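As a concrete illustration (my own, not in the text), the recipe of Example 3.39 can be applied to Example 3.37 with a = 1: to ask whether the process ever reaches E4 from E2, one may also treat E4 as absorbing, so that C = {E1, E4} and the transient states are E2, E3. The sketch below (assuming numpy) then reproduces the probability 2/5 found in Example 3.37 (6).

```python
import numpy as np

# Transient part Q (E2, E3 among themselves) and R (E2, E3 -> E1, E4),
# with E4 made absorbing as a modelling device for "reach E4 eventually".
Q = np.array([[0,   2/3],
              [2/3, 0  ]])
R = np.array([[1/3, 0  ],
              [0,   1/3]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix N = sum of Q^n
B = N @ R                          # absorption probabilities

print(N)        # expected number of visits to E2, E3 before absorption
print(B[0, 1])  # 0.4 = 2/5: probability of reaching E4 when starting in E2
```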
  • 100. Download free ebooks at bookboon.com Stochastic Processes 1 100 3. Markov chains Example 3.40 Given an irreducible Markov chain E1, E2, . . . , Em with the stochastic matrix P and invariant probability vector α = (α1, α2, . . . , αn) , where αj = 0 for every j. 1. Prove that if the process to time t = 0 is in state Ei, then for every j ∈ {1, 2, . . . , m}, the process reaches eventually with probability 1 the state Ej. We define for every j an random variable Tj by putting Tj = n, if the process is in state Ej for the first time after the time 0 to the time n. Denote for every i by mij the mean of Tj, if the process to time t = 0 is in state Ei. 2. Prove that mij is finite for every (i, j). 3. Prove by a convenient splitting of what happens in the first step, (5) mij = k pikmkj − pijmjj + 1 for every (i, j). 4. Prove that the mean of the time of return mii is given by mii = 1 αi , i = 1, 2, . . . , m. Hint: Multiply the i-th equation of (5) by αi, and then sum over i. 1) This follows from the fact that the Markov chain is irreducible, so Ei is transferred into Ek after some steps. 2) When the Markov chain is irreducible, it follows by considering the graph that there exists a transition diagram of M transitions, by which one comes from any Ei back to Ei through all the other states in at most M steps. Hence there exists an a ∈ ]0, 1[, such that mij ≤ ∞ k=0 ak = 1 1 − a . 3) If we start in Ei, then after the first step the one on the right hand side of (5) is in state Ek of probability pi,k, hence m k=1 pikmkj + 1. However, if we end in the state Ej, we count pijmjj too much, so mij = k pikmkj − pijmjj + 1 for every (i, j).
4) By using the hint and the fact that α is invariant, we get
Σ_i α_i m_ij = Σ_k (Σ_i α_i p_ik) m_kj − (Σ_i α_i p_ij) m_jj + Σ_i α_i = Σ_k α_k m_kj − α_j m_jj + 1.
The sum on the left-hand side equals the first sum on the right-hand side, hence by a rearrangement,
m_jj = 1/α_j.
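The identity m_jj = 1/α_j is easy to verify numerically on any of the regular chains of this chapter. The sketch below is my own illustration (assuming numpy; mean_return_time is a hypothetical helper name): it computes the mean recurrence times of the chain of Example 3.36 by first-step analysis and compares them with the reciprocals of the invariant probabilities g = (1/6, 1/6, 1/4, 1/4, 1/6).

```python
import numpy as np

P = np.array([
    [0,   1/4, 1/4, 1/4, 1/4],
    [1,   0,   0,   0,   0  ],
    [0,   1/2, 1/2, 0,   0  ],
    [0,   0,   1/3, 2/3, 0  ],
    [0,   0,   0,   1/4, 3/4],
])

def mean_return_time(P, j):
    """Mean recurrence time of state j via first-step analysis, cf. (5)."""
    others = [i for i in range(len(P)) if i != j]
    A = np.eye(len(others)) - P[np.ix_(others, others)]
    h = np.linalg.solve(A, np.ones(len(others)))   # mean hitting times of j
    return 1 + P[j, others] @ h

for j in range(5):
    print(j, mean_return_time(P, j))   # 6, 6, 4, 4, 6, i.e. 1/g_j
```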
Example 3.41 Given a Markov chain of the states 0, 1, 2, . . . , and transition probabilities
pi,i+1 = p, pi,0 = q, i ∈ N0, pij = 0 otherwise
(where p > 0, q > 0, p + q = 1). Prove that the Markov chain is regular, and find its stationary distribution.
The corresponding stochastic matrix is
P =
[ q  p  0  0  0  · · · ]
[ q  0  p  0  0  · · · ]
[ q  0  0  p  0  · · · ]
[ q  0  0  0  p  · · · ]
[ · · ·                ]
We trivially have the transitions Ei → E0 → E1 → E2 → · · · , i ∈ N, so we can get from any state Ei to any other state Ej. This shows that the Markov chain is irreducible. From p0,0 = q > 0 it follows that d = 1, and the Markov chain is regular.
A possible stationary distribution g must fulfil the equations
g0 = q Σ_{j=0}^∞ gj (a convergent series), and gn = p g_{n−1}, n ∈ N.
When we divide the recursion formula by p^n > 0, we get
(1/p^n) gn = (1/p^{n−1}) g_{n−1} = · · · = g0,
hence gn = p^n g0, n ∈ N0, is the only possibility. We see by insertion that the series Σ_{j=0}^∞ gj is in fact convergent, and
1 = Σ_{j=0}^∞ gj = g0 Σ_{n=0}^∞ p^n = g0 · 1/(1 − p) = g0 · 1/q,
from which we conclude that g0 = q. Therefore, the Markov chain has a stationary distribution, which is given by the coordinates
gn = q p^n, n ∈ N0.
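Because the state space is infinite, a direct numerical check has to work with a finite approximation. The sketch below is my own device, not in the text (it assumes numpy): it truncates the chain at M states, folds the escaping mass back into the last state, and iterates p^(n+1) = p^(n) P; the first coordinates of the limit agree with gn = q p^n.

```python
import numpy as np

p, q, M = 0.4, 0.6, 60
P = np.zeros((M, M))
P[:, 0] = q                    # every state jumps back to E0 with probability q
for i in range(M - 1):
    P[i, i + 1] = p            # ... and moves one step to the right otherwise
P[M - 1, M - 1] += p           # mass leaving the truncation stays in the last state

dist = np.zeros(M)
dist[0] = 1.0
for _ in range(500):           # iterate the distribution p^(n+1) = p^(n) P
    dist = dist @ P

print(dist[:5])                       # approx. q * p**n for n = 0, ..., 4
print([q * p**n for n in range(5)])
```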
Example 3.42 A Markov chain has the countably many states E1, E2, E3, . . . , and transition probabilities
pi,i+1 = i/(i + 2), pi,1 = 2/(i + 2), i ∈ N, pij = 0 otherwise.
1) Prove that the Markov chain is regular.
2) Prove that there exists a stationary distribution, and then find it.
3) Assume that the process at time t = 0 is in state E1. Let T denote the random variable which indicates the time of the first return to E1. Find P{T = k}, k ∈ N.
4) Find the mean E{T}.
5) Prove that T does not have a variance.
1) The infinite stochastic matrix is
P =
[ 2/3  1/3  0    0    · · · ]
[ 2/4  0    2/4  0    · · · ]
[ 2/5  0    0    3/5  · · · ]
[ · · ·                     ]
We conclude from
E1 → E2 → E3 → · · · → En → · · · , and En → E1 for all n,
that the Markov chain is irreducible. Since d1 = 1 (because p1,1 = 2/3 > 0), the Markov chain is regular.
2) The equations of a possible invariant probability vector are
gi+1 = i/(i + 2) · gi, i ∈ N.
Then by recursion,
gi = 2/((i + 1)i) · g1, i ∈ N.
Since the partial sums telescope, we conclude from
Σ_{i=1}^∞ 2/((i + 1)i) = 2 Σ_{i=1}^∞ (1/i − 1/(i + 1)) = 2,
that the stationary distribution exists (with g1 = 1/2), and that its coordinates are given by
gi = 1/((i + 1)i), i ∈ N.
3) It follows that
P{T = 1} = 2/3,
P{T = 2} = (1 − 2/3) · 2/4 = 1/6,
P{T = 3} = (1 − 2/3)(1 − 2/4) · 2/5 = 1/15,
P{T = 4} = (1 − 2/3)(1 − 2/4)(1 − 2/5) · 2/6 = 1/30.
We conclude from this pattern that
P{T = k} = Π_{i=1}^{k−1} (1 − 2/(i + 2)) · 2/(k + 2) = Π_{i=1}^{k−1} i/(i + 2) · 2/(k + 2) = (2(k − 1)!/(k + 2)!) · 2 = 4/((k + 2)(k + 1)k).
4) The mean is
E{T} = 4 Σ_{k=1}^∞ 1/((k + 2)(k + 1)) = 4 Σ_{k=1}^∞ (1/(k + 1) − 1/(k + 2)) = 4 · (1/2) = 2.
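A quick numerical check of 3) and 4), my own illustration in plain Python: summing the probabilities P{T = k} = 4/((k + 2)(k + 1)k) over a large range should give approximately 1, and the corresponding partial mean should approach 2.

```python
def prob(k):
    # P{T = k} from Example 3.42 (3)
    return 4.0 / ((k + 2) * (k + 1) * k)

K = 100_000
total = sum(prob(k) for k in range(1, K + 1))
mean  = sum(k * prob(k) for k in range(1, K + 1))
print(total, mean)   # approaches 1 and 2 as K grows
```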
  • 105. Download free ebooks at bookboon.com Stochastic Processes 1 105 3. Markov chains 5) Now k2 · 4 (k + 2)(k + 1)k ∼ 4 k , and the series 4 k = ∞ is divergent, hence the variance does not exist. Example 3.43 Given a Markov chain of the states E1, E2, E3, E4 and E5 and with the stochastic matrix P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 3 4 1 4 0 0 0 0 2 3 1 3 0 0 0 0 1 2 1 2 0 0 0 0 1 a 0 0 0 1 − a ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , where a is a constant in the interval [0, 1]. 1) Find the values of a, for which the Markov chain is irreducible. 2) Find the values of a, for which the Markov chain is regular. 3) Find for every a the invariant probability vector. 4) At time t = 0 the process is in state E1. Let T denote the time when the process for the first time is in state E5. Find the distribution of the random variable T. 5) Find the mean and variance of T. 6) Assume that a = 0. Prove that all the matrices Pn for n ≥ 4 are equal to the same matrix Q, and find Q. 1) If a = 0, then E5 is absorbing, and the Markov chain is not irreducible. If a ∈ ]0, 1], then we get the transitions E5 → E1 → E2 → E3 → E4 → E5, proving that the Markov chain is irreducible for a ∈ ]0, 1]. 2) If a ∈ ]0, 1[, then the diagonal element p55 = 1 − a 0, hence the Markov chain is regular. On the other hand, if a = 1, then P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 3 4 1 4 0 0 0 0 2 3 1 3 0 0 0 0 1 2 1 2 0 0 0 0 1 a 0 0 0 1 − a ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , P2 = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 0 1 2 3 8 1 8 0 0 0 1 3 2 3 1 2 0 0 0 1 2 1 0 0 0 0 0 3 4 1 4 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , and P4 = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 5 8 3 32 1 32 0 1 4 1 3 1 2 1 6 0 0 0 3 8 3 8 3 16 1 16 0 0 1 2 3 8 1 8 1 8 0 0 1 4 5 8 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ .
  • 106. Download free ebooks at bookboon.com Stochastic Processes 1 106 3. Markov chains Since P4 has the transitions E5 → E1 → E2 → E3 → E4 → E5, the corresponding Markov chain is irreducible. Furthermore, all elements of the diagonal are 0, so the Markov chain corresponding to the matrix P4 is regular. This implies that the original Markov chain is also regular. 3) The equations of the invariant probability vector are g1 = ag5, g2 = 3 4 g1, g3 = 1 4 g1 + 2 3 g2, g4 = 1 3 g2 + 1 2 g2, g5 = 1 2 g3 + g4 + (1 − a)g5, thus g1 = ag5, g2 = 3 4 a g5, g3 = 1 4 a g5 + 1 2 a g5 = 3 4 a g5, g4 = 1 4 a g5 + 3 8 a g5 = 5 8 a g5. A check gives a g5 = 3 8 a g5 + 5 8 a g5 = a g5, so it is OK. Furthermore, 1 = g1 + g2 + g3 + g4 + g5 = g5 a + 3 4 a + 3 4 a + 5 8 a + 1 = 1 + 25 8 a g5, from which g5 = 8 8 + 25a , and thus g = 1 8 + 25a (8a, 6a, 6a, 5a, 8). 4) Here a consideration of the graph is the easiest method: E1 3 4 → E2 2 3 → E3 1 2 → E4 1 → E5 1 4 1 3 1 2 E3 1 2 → E4 1 → E5 1 2 E5 t = 0 t = 1 t = 2 t = 3 t = 4
It follows that P{T = 1} = 0 and P{T = 2} = 1/4 · 1/2 = 1/8. At time t = 3 we get the paths
E1 → E2 → E3 → E5, probability: 3/4 · 2/3 · 1/2 = 1/4,
E1 → E2 → E4 → E5, probability: 3/4 · 1/3 · 1 = 1/4,
E1 → E3 → E4 → E5, probability: 1/4 · 1/2 · 1 = 1/8,
hence
P{T = 3} = 1/4 + 1/4 + 1/8 = 5/8.
Finally, P{T = 4} = 3/4 · 2/3 · 1/2 = 1/4, hence, summing up,
P{T = 2} = 1/8, P{T = 3} = 5/8, P{T = 4} = 1/4.
5) The mean is
E{T} = 2 · 1/8 + 3 · 5/8 + 4 · 1/4 = 25/8.
Furthermore,
E{T^2} = 4 · 1/8 + 9 · 5/8 + 16 · 1/4 = 81/8,
so
V{T} = 81/8 − 625/64 = (648 − 625)/64 = 23/64.
6) If a = 0, then
P =
[ 0  3/4  1/4  0    0   ]
[ 0  0    2/3  1/3  0   ]
[ 0  0    0    1/2  1/2 ]
[ 0  0    0    0    1   ]
[ 0  0    0    0    1   ]
P^2 =
[ 0  0  1/2  3/8  1/8 ]
[ 0  0  0    1/3  2/3 ]
[ 0  0  0    0    1   ]
[ 0  0  0    0    1   ]
[ 0  0  0    0    1   ]
P^4 =
[ 0  0  0  0  1 ]
[ 0  0  0  0  1 ]
[ 0  0  0  0  1 ]
[ 0  0  0  0  1 ]
[ 0  0  0  0  1 ]
Then it is obvious that P^5 = P P^4 = P^4, because the sums of the rows of P are 1. We conclude that P^n = P^4 for n ≥ 4.
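The claim in 6) is easy to confirm with a few matrix powers. The sketch below is not part of the original text (it assumes numpy): it computes P^4 and a higher power and checks that they coincide.

```python
import numpy as np

# Example 3.43 with a = 0.
P = np.array([
    [0, 3/4, 1/4, 0,   0  ],
    [0, 0,   2/3, 1/3, 0  ],
    [0, 0,   0,   1/2, 1/2],
    [0, 0,   0,   0,   1  ],
    [0, 0,   0,   0,   1  ],
])

P4 = np.linalg.matrix_power(P, 4)
P7 = np.linalg.matrix_power(P, 7)
print(np.allclose(P4, P7))   # True: P^n = P^4 for n >= 4
print(P4)                    # every row equals (0, 0, 0, 0, 1)
```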
  • 108. Download free ebooks at bookboon.com Stochastic Processes 1 108 3. Markov chains Example 3.44 Given a Markov chain of 5 states E1, E2, E3, E4 and E5, and transition probabilities p1,1 = 1 − 4a, p1,2 = p1,3 = p1,4 = p1,5 = a, p2,1 = p2,2 = p3,2 = p3,3 = p4,3 = p4,4 = p5,4 = p5,5 = 1 2 , Pi,j = 0 otherwise. Here a is a constant in the interval 0, 1 4 # . 1. Find the stochastic matrix P. 2. Find the values of a, for which the Markov chain is irreducible. 3. Find the values of a, for which the Markov chain is regular. 4. Find for every a the invariant probability vector. At time t = 0 the process is in state E2. Let T2 denote the random variable, which indicates the time, when the process for the first time is in state E1. 5. Find P {T2 = k}, k ∈ N, and compute the mean of T2. Then put a = 1 4 and assume that the process at time t = 0 is in state E1, and let T denote the time of its first return to E1. 6. Find the mean of T. 1) The stochastic matrix is P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 1 − 4a a a a a 1 2 1 2 0 0 0 0 1 2 1 2 0 0 0 0 1 2 1 2 0 0 0 0 1 2 1 2 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ . 2) If a = 0, then E1 is clearly an absorbing state, so P is not irreducible for a = 0. When 0 a ≤ 1 4 we notice the oblique diagonal below the main diagonal. All elements of this diagonal are 1 2 , so we have always the flow E5 → E4 → E3 → E2 → E1. Now a 0 implies that also E1 → E5, so we get e.g. E1 → E5 → E4 → E3 → E2 → E1, proving that P is irreducible for 0 a ≤ 1 4 . 3) If a ∈ # 0, 1 4 # , then P is irreducible. Since there exist positive elements in the diagonal (e.g. p2,2 = 1 2 ), it follows that P is regular for every a ∈ # 0, 1 4 # .
  • 109. Download free ebooks at bookboon.com Stochastic Processes 1 109 3. Markov chains 4) The system of equations g P = g is written (1 − 4a)g1 = 1 2 g2 = g1, a g1 = 1 2 g2 = 1 2 g3 = g2, a g1 + 1 2 g3 + 1 2 g4 = g3, a g1 + 1 2 g4 = 1 2 g5 = g4, a g1 + 1 2 g5 = g5, from which clearly g2 = 8a g1 and g5 = 2a g1. Then by insertion of these values, g3 = 6a g1 and g4 = 4a g1. Finally, 1 = 5 i=1 gi = g1(1 + 8a + 6a + 4a + 2a) = (20a + 1)g1, so g = 1 20a + 1 (1, 8a, 6a, 4a, 2a). In particular, g = (1, 0, 0, 0, 0) for a = 0. In Paris or Online International programs taught by professors and professionals from all over the world BBA in Global Business MBA in International Management / International Marketing DBA in International Business / International Management MA in International Education MA in Cross-Cultural Communication MA in Foreign Languages Innovative – Practical – Flexible – Affordable Visit: www.HorizonsUniversity.org Write: Admissions@horizonsuniversity.org Call: 01.42.77.20.66 www.HorizonsUniversity.org Please click the advert
  • 110. Download free ebooks at bookboon.com Stochastic Processes 1 110 3. Markov chains 5) We note that T2 is geometrically distributed, so P {T2 = k} = 1 2 k , k ∈ N, and we have E {T2} = 2. 6) Starting at E1 we reach in the first step one of the states E2, E3, E4 or E5, all of probability 1 4 . From these states it takes in average 2, 4, 6 or 8 steps to get back to E1. Consequently E{T} = 1 + 1 4 {2 + 4 + 6 + 8} = 6. Example 3.45 Given a Markov chain of 5 states E0, E1, E2, E3 and E4, and transition probabilities p0,1 = p4,3 = 1, p3,2 = p2,1 = p1,0 = 2 3 , p1,2 = p3,4 = 1 3 , p2,3 = a 3 , p2,4 = 1 − a 3 , pi,j = 0 otherwise. Here a is a constant in the interval [0, 1] 1. Find the stochastic matrix P. 2. Prove that the Markov chain is irreducible for every a ∈ [0, 1]. 3. Find for a ∈ [0, 1] the invariant probability vector. 4. Prove that the Markov chain is regular for a ∈ [0, 1[, but not for a = 1. We assume in the following that the process at time t = 0 is in state E2. 5. Find for a = 1 the probability that the process gets to the state E4 before the state E0. Hint: One may apply results concerning the ruin problem. 6. Find for a = 0 the probability that the process gets to state E4 before to state E0- 7. Find for every a ∈ [0, 1] the probability that the process reaches state E4 before state E0. 1) The stochastic matrix is P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 1 0 0 0 2 3 0 1 3 0 0 0 2 3 0 a 3 1−a 3 0 0 2 3 0 1 3 0 0 0 1 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , a ∈ [0, 1].
  • 111. Download free ebooks at bookboon.com Stochastic Processes 1 111 3. Markov chains 2) When a ∈ ]0, 1], then we have the transitions E0 ←→ E1 ←→ E2 ←→ E3 ←→ E4, and when a ∈ [0, 1[, then we have the transitions E0 ←→ E1 ←→ E2 ←− E3, E4, and it follows that the chain is irreducible. One might e.g. split into the three cases a = 0 : E0 ←→ E1 ←→ E2 ←− E3, E4, a = 1 : E0 ←→ E1 ←→ E2 ←→ E3, E4, 0 a 1 : E0 ←→ E1 ←→ E2 ←→ E3, E4, from which one also derives the irreducibility. 3) The system of equations g P = g is written ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ 2 3 g1 = g0, g0 + 2 3 g2 = g1, 1 3 g1 + 2 3 g3 = g2, a 3 g2 +g4 = g3, 1−a 3 g3 + 1 3 g3 = g4, thus g1 = 3 2 g0, g2 = 3 2 (g1 − g0) = 3 4 g0, g3 = 3 2 g2 − 1 3 g1 = 3 8 g0, g4 = g3 − a 3 g2 = 3 8 − a 4 g0. Recalling that 1 = 4 i=0 gi = g0 1 − 3 2 + 3 4 + 3 8 + 3 8 − a 4 = g0 ' 4 − a 4 ( = 16 − a 4 g0,
  • 112. Download free ebooks at bookboon.com Stochastic Processes 1 112 3. Markov chains we conclude that g0 = 4 16 − a , and hence g = 4 16 − a 1, 3 2 , 3 4 , 3 8 , 3 8 − a 4 = 1 32 − 2a (8, 12, 6, 3, 3 − 2a). 4) If a = 1, then we have a random walk {E0, E1, E2, E3, E4}. It follows from the diagram E0 ←→ E1 ←→ E2 ←→ E3 ←→ E4, that it is only possible to get from E0 back to E0 through an even number of steps. Hence the Markov chain is periodic of period 2, and it is not regular. If a 1, then it was mentioned previously that we have the diagram E0 ←→ E1 ←→ E2 ←→ E3, E4. The chain E4 −→ E3 −→ E4 shows that p (2) 44 0, and thus p (2n) 44 0. The chain E4 −→ E3 −→ E2 −→ E4 shows that p (3) 44 0. By a composition with the first chain it follows that p (2n+1) 44 0, thus p (n) 44 0 for n 2. Since P is irreducible, we conclude that P is regular. Alternatively we compute Pn , and it is easily seen that all elements of P6 are 0, and the claim is proved- 5) We now return to the diagram for a = 1, i.e. E0 ←→ E1 ←→ E2 ←→ E3 ←→ E4, where E2 is the initial state. This can be interpreted as a ruin problem, where we shall get to E4 before E0. The interpretation gives N = 4, k = 2, p = 1 3 , q = 2 3 , so we are in case b) with q p = 2, hence the probability of reaching E0 before E4 is a2 = 22 − 24 1 − 24 = 12 15 = 4 5 . Thus the wanted probability (of the complementary event) is b2 = 1 − a2 = 1 5 . Alternatively. Let bk denote the probability that we by starting in Ek reaches E4 before E0. When we split the investigation according to what happens after one step, we get bk = 1 3 bk+1 + 2 3 bk−1, k = 1, 2, 3, and b0 = 0, b4 = 1,
thus
(1/3) b_{k+1} − b_k + (2/3) b_{k−1} = 0.
The complete solution is b_k = c1 · 1 + c2 · 2^k. We get
k = 0 : c1 + c2 = b0 = 0,
k = 4 : c1 + 16 c2 = b4 = 1,
hence c1 = −1/15 and c2 = 1/15, and whence
b_k = (1/15)(2^k − 1),
and in particular b2 = 3/15 = 1/5.
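The same boundary value problem can of course be solved numerically instead of by the characteristic-root method. The sketch below is my own illustration (assuming numpy): it sets up b_k = (1/3) b_{k+1} + (2/3) b_{k−1} with b_0 = 0, b_4 = 1 as a linear system and recovers b_2 = 1/5.

```python
import numpy as np

N, p, q = 4, 1/3, 2/3
A = np.zeros((N + 1, N + 1))
rhs = np.zeros(N + 1)
A[0, 0] = 1.0                    # boundary condition b_0 = 0
A[N, N] = 1.0
rhs[N] = 1.0                     # boundary condition b_4 = 1
for k in range(1, N):
    A[k, k - 1] = -q             # b_k - p*b_{k+1} - q*b_{k-1} = 0
    A[k, k] = 1.0
    A[k, k + 1] = -p

b = np.linalg.solve(A, rhs)
print(b)       # (0, 1/15, 3/15, 7/15, 1); in particular b_2 = 0.2
```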
  • 114. Download free ebooks at bookboon.com Stochastic Processes 1 114 3. Markov chains Alternatively. We can reach E4 without passing E0 by either go directly to E4 (in two steps), probability 1 9 , or be back at E2 after two steps, p (2) 22 = 1 3 · 2 3 + 2 3 · 1 3 = 4 9 , and then go directly to E4, or be back at E2 after 2 · 2 steps etc.. Summing up we get the probability 1 9 ∞ n=0 4 9 n = 1 9 · 1 1 − 4 9 = 1 9 · 9 5 = 1 5 . 6) If a = 0, then we have the diagram E0 ←→ E1 ←→ E2 ←→ E4, which again may be interpreted as a ruin problem. The probability of reaching E0 before E4 is for N = 3, k = 2, p = 1 3 , q = 2 3 given by a2 = 22 − 23 1 − 23 = 4 7 . The wanted probability (of the complementary event) is b2 = 1 − 4 7 = 3 7 . Alternatively this question can also be solved by the two alternatives described in 5.. 7) The general case. Let ck denote the probability when starting in Ek to reach E4 before E0. We get by the usual splitting c0 = 0, c4 = 1, c1 = 1 3 c2, c2 = 2 3 c1 + 1 − a 3 + a 3 c3, c3 = 1 3 + 2 3 c2.
  • 115. Download free ebooks at bookboon.com Stochastic Processes 1 115 3. Markov chains When we insert the expressions of c1 and c3 into the equations of c2, we get c2 = 2 3 · 1 3 c2 + 1 − a 3 + a 3 1 3 + 2 3 c2 = 2 9 c2 + 2 9 a c2 + 1 3 − a 3 + a 9 , which is reduced to c2 7 9 − 2 9 a = 1 3 − 2a 9 = 3 − 2a 9 , hence c2 = 3 − 2a 7 − 2a . Check. If a = 0, then c2 = 3 7 , cf. 6., and if a = 1, then c2 = 1 5 , cf. 5. Alternatively we split according to when we last time were in state E2, p (2) 22 = 2 3 · 1 3 + a 3 · 2 3 = 2 9 (1 + a). www.simcorp.com MITIGATE RISK REDUCE COST ENABLE GROWTH The financial industry needs a strong software platform That’s why we need you SimCorp is a leading provider of software solutions for the financial industry. We work together to reach a common goal: to help our clients succeed by providing a strong, scalable IT platform that enables growth, while mitigating risk and reducing cost. At SimCorp, we value commitment and enable you to make the most of your ambitions and potential. Are you among the best qualified in finance, economics, IT or mathematics? Find your next challenge at www.simcorp.com/careers Please click the advert
  • 116. Download free ebooks at bookboon.com Stochastic Processes 1 116 3. Markov chains The probability of going from E2 to E4 in one or two steps is a 3 · 1 3 + 1 − a 3 = 1 3 − 2 9 a = 3 − 2a 9 . The wanted probability is 3 − 2a 9 ∞ n=0 2 9 (1 + a) n = 3 − 2a 9 · 1 1 − 2 9 − 2a 9 = 3 − 2a 7 − 2a . Example 3.46 A Markov chain of the states E1, E2, E3 and E4 has the stochastic matrix P = ⎛ ⎜ ⎜ ⎝ 0 1 2 1 2 0 0 0 1 2 1 2 0 0 1 2 1 2 a 0 0 1 − a ⎞ ⎟ ⎟ ⎠ , where a is a constant in the interval [0, 1]. 1. Find the values of a, for which the Markov chain is irreducible, and the values of a, for which it is regular. 2. Find for every a the invariant probability vector. A particle moves between the states E1, E2, E3 and E4 of the given transition probabilities. At time t = 0 the particle is in state E1. Let T denote the random variable, which indicates the time, when the particle for the first time is in state E4. 3. Find P{T = 2}. Hint: Split the investigation according to whether the particle is passing through state E2 or state E3. 4. Find P{T = n} for n = 2, 3, 4, . . . . 5. Find the mean of T. 6. Explain why we have in the case a = 0 that Pn → ⎛ ⎜ ⎜ ⎝ 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 ⎞ ⎟ ⎟ ⎠ for n → ∞. 1) If a = 0, then E4 is absorbing, and the Markov chain is neither irreducible nor regular for a = 0. If a ∈ ]0, 1], then we have the transitions E1 −→ E2 −→ E3 −→ E4 −→ E1, and the Markov chain is irreducibel. Since p3,3 = 1 2 0, it is also regular for a ∈ ]0, 1].
  • 117. Download free ebooks at bookboon.com Stochastic Processes 1 117 3. Markov chains 2) The equations of the invariant probability vector are g1 = a g4, thus g1 = a g4, g2 = 1 2 g1, thus g2 = 1 2 a g4, g3 = 1 2 g1 + 1 2 g2 + 1 2 g3, thus g3 = g1 + g2 = 3 2 a g4, hence 1 = g1 + g2 + g3 + g4 = g4 a + 1 2 a + 3 2 a + 1 )(1 − 3a)g4. The invariant probability vector is g = 1 1 + 3a a, a 2 , 3a 2 , 1 . 3) We derive from the matrix the tree E1 → E2 → E3 R3 → E4 E3 where all arrows have the weight 1 2 , thus P{T = 2} = 1 2 · 1 2 + 1 2 · 1 2 = 1 2 . 4) We have at step n = 2 only the possibilities E3 and E4. Hence P{T = n} = 1 2 n−1 for n ≥ 2. 5) The mean is E{T} = ∞ n=2 n 1 2 n−1 = ∞ n=1 n 1 2 n−1 − 1 = 1 1 − 1 2 2 − 1 = 3. 6) If a = 0, then P = ⎛ ⎜ ⎜ ⎝ 0 1 2 1 2 0 0 0 1 2 1 2 0 0 1 2 1 2 0 0 0 1 ⎞ ⎟ ⎟ ⎠ , thus ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ p (n+1) i,1 = 0, p (n+1) i,2 = 1 2 p (n) 2,i = 0, p (n+1) i,3 = 1 2 ' p (n) 3,1 + p (n) 3,2 + p (n) 3,3 ( = 1 2 p (n) 3,3 ,
  • 118. Download free ebooks at bookboon.com Stochastic Processes 1 118 3. Markov chains and hence p (n) 1,j = p (n) 2,j = 0, and p (n+1) 3,1 + p (n+1) 3,2 + p (n+1) 3,3 = 1 2 ' p (n) 3,1 + p (n) 3,2 + p (n) 3,3 ( . This shows that p (n) i,j → 0 for n → ∞, ifi = 1, 2, 3, 4 and j = 1, 2, 3, and we get p (n) i,4 → 1 for n → ∞, and the claim is proved. Example 3.47 A Markov chain of the states E1, E2, E3, E4 and E5 has the stochastic matrix P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 1 3 1 3 1 3 0 0 0 1 2 1 2 0 0 0 0 1 2 1 2 0 0 0 0 1 a 0 0 0 1 − a ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , where a is a constant in the interval [0, 1]. 1. Find the values of a, for which the Markov chain is irreducible. 2. Find the values of a, for which the Markov chain is regular. 3. Find for every a the invariant probability vector. Assume in the following that a = 1 2 . At time t = 0 the process is in state E1. Let T denote the random variable, which indicates the time, when the particle for the first time is in state E5, and let U denote the random variable, which indicates the time, when the particle for the first time returns to the state E1. 4. Find P{T = k} for k = 2, 3, 4, and the mean of T. 5. Find P{U = 3} and P{Y = 4}. 6. Find P{U = k} for k = 5, 6, . . . . Hint: Split into the cases T = 2, T = 3 or T = 4. 1) When a = 0, then E5 is absorbing, and the Markov chain is not irreducible. If a ∈ ]0, 1], then we have the transitions E1 −→ E2 −→ E3 −→ E4 −→ E5 −→ E1, and the Markov chain is irreducible for a ∈ ]0, 1].
  • 119. Download free ebooks at bookboon.com Stochastic Processes 1 119 3. Markov chains 2) If a ∈ ]0, 1[, then the element of the diagonal p5,5 = 1 − a 0, and since the Markov chain is irreducible for a ∈ ]0, 1[, it is also regular for a ∈ ]0, 1[. If a = 1, then P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 1 3 1 3 1 3 0 0 0 1 2 1 2 0 0 0 0 1 2 1 2 0 0 0 0 1 1 0 0 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , P2 = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 0 1 6 1 3 1 2 0 0 0 1 4 3 4 1 2 0 0 0 1 2 1 0 0 0 0 0 1 3 1 3 1 3 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , P3 = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 1 2 0 0 1 12 5 12 3 4 0 0 0 1 4 1 2 1 6 1 6 1 6 0 0 1 3 1 3 1 3 0 0 0 1 6 1 3 1 2 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ . The Markov chain corresponding to P3 has the transitions E1 −→ E5 −→ E4 −→ E3 −→ E2 −→ E1, so it is irreducible. Since the element of the diagonal p (3) 1,1 = 1 2 0, it is also regular. Hence the original Markov chain is also regular for a = 1, so the Markov chain is regular for a ∈ ]0, 1]. 3) The equations of the invariant probability vector are g1 = a g5, thus g1 = a g5, g2 = 1 3 g1, thus g2 = 1 3 a g5, g3 = 1 3 g1 + 1 2 g2, thus g3 = 3 2 g2 = 1 2 a g5, g4 = 1 3 g1 + 1 2 g2 + 1 2 g3, thus g4 = 3 2 g3 = 3 4 a g5. Please click the advert
  • 120. Download free ebooks at bookboon.com Stochastic Processes 1 120 3. Markov chains Hence 1 = g1 + g2 + g3 + g4 + g5 = g5 a + 1 3 a + 1 2 a + 3 4 a + 1 = 1 + 31 12 a g5, and the invariant probability vector is g = 12 12 + 31a a, a 3 , a 2 , 3 4 a, 1 . 4) We have the tree t = 0 t = 1 t = 2 t = 3 t = 4 E1 1 3 −→ E2 1 2 −→ E3 1 2 −→ E4 1 −→ E5 1 3 1 2 1 2 1 2 E1 E3 1 2 −→ E4 1 −→ E5 1 3 1 2 1 2 E4 1 −→ E5 When we compute P{T = 2} we have the paths E1 −→ E3 −→ E5 probability: 1 3 · 1 2 = 1 6 , E1 −→ E4 −→ E5 probability: 1 3 · 1 = 1 3 , thus P{T = 2} = 1 6 + 1 3 = 1 2 . When we compute P{T = 3} we have the paths E1 −→ E2 −→ E3 −→ E5, probability: 1 3 · 1 2 · 1 2 = 1 12 , E1 −→ E2 −→ E4 −→ E5, probability: 1 3 · 1 2 · 1 = 1 6 , E1 −→ E3 −→ E4 −→ E5, probability: 1 3 · 1 2 · 1 = 1 6 , hence P{T = 3} = 1 12 + 1 6 + 1 6 = 5 12 . When we compute P{T = 4} we shall only consider the path E1 −→ E2 −→ E3 −→ E4 −→ E5, probability: 1 3 · 1 2 · 1 2 = 1 12 . 5) It is only possible to reach E1 via E5. Since a = 1 2 , we have P{U = 3} = P{T = 2} · P {E5 → E1} = 1 2 · 1 2 = 1 4 , P{U = 4} = P{T = 2} · P {E5 → E5 → E1} + P{T = 3} · P {E5 → E1} = 1 2 · 1 2 · 1 2 + 5 12 · 1 2 = 3 + 5 24 = 1 3 .
  • 121. Download free ebooks at bookboon.com Stochastic Processes 1 121 3. Markov chains 6) When k ≥ 5, we shall find how much “mass”, which is collected in total in E5 at t = 4. This mass of probability is P{T = 2} · 1 2 · 1 2 + P{T = 3} · 1 2 + P{T = 4} = 1 8 + 5 24 + 1 12 = 10 24 = 5 12 . In every one of the following steps, half of it remains in E5, and the other half is transferred to E1, so P{U = k} = 5 12 · 1 2 k−4 for k ≥ 5. Challenging? Not challenging? Try more Try this... www.alloptions.nl/life Please click the advert
  • 122. Download free ebooks at bookboon.com Stochastic Processes 1 122 3. Markov chains Example 3.48 A Markov chain of 2 states E1 and E2 has the stochastic matrix Q = ⎛ ⎜ ⎜ ⎝ 1 5 4 5 3 5 2 5 ⎞ ⎟ ⎟ ⎠ . 1. Prove that Q is regular, and find the invariant probability vector. Another Markov chain of 4 states E1, E2, E3 and E4 has the stochastic matrix P = ⎛ ⎜ ⎜ ⎝ 1 5 4 5 0 0 3 5 2 5 0 0 1 5 0 2 5 2 5 0 1 5 2 5 2 5 ⎞ ⎟ ⎟ ⎠ . 2. Prove that P is not irreducible, and find its closed subsets. 3. Prove for every initial distribution p(0) = p (0) 1 , p (0) 2 , p (0) 3 , p (0) 4 that p (n) 3 + p (n) 4 = 4 5 ' p (n−1) 3 + p (n−1) 4 ( , n ∈ N, and then prove that lim n→∞ p (n) 3 = lim n→∞ p (n) 4 = 0. 4. Show that limn→∞ p(n) exists and find the limit vector. 5. At time t = 0 the process is in state E3. Find for every n ∈ N the probability that the process for t = n for the first time is in state E1 without previously having been in state E2. 1) All elements of Q are 0, so the Markov chain is regular. The equations of the invariant probability vector are g1 = 1 5 g1 + 3 5 g2, thus g2 = 4 3 g1, 1 = g1 + g2 = 1 + 4 3 g1 = 7 3 g1, so g = 3 7 , 4 7 . 2) Clearly, {E1, E2} is a closed subset, and there is no other proper closed subset. However, since there exists a proper closed subset, the Markov chain is not irreducible.
  • 123. Download free ebooks at bookboon.com Stochastic Processes 1 123 3. Markov chains 3) It follows immediately that p (n) 3 = 2 5 p (n−1) 3 + 2 5 p (n−1) 4 , and p (n) 4 = 2 5 p (n−1) 3 + 2 5 p (n−1) 4 = p (n) 3 , hence p (n) 3 + p (n) 4 = 4 5 ' p (n−1) 3 + p (n−1) 4 ( , n ∈ N. Then by iteration, p (n) 3 + p (n) 4 = 4 5 n ' p (0) 3 + p (0) 4 ( → 0 for n → ∞, so 0 = lim n→∞ ' p (n) 3 + p (n) 4 ( = 2 lim n→∞ p (n) 3 = 2 lim n→∞ p (n) 4 . 4) It follows from 3. that the latter two coordinates tend towards 0 for n → ∞. The former two coordinates are governed by the matrix Q, so we conclude from 1. that lim n→∞ p(n) = 3 7 , 4 7 , 0, 0 . 5) The two states E3 and E4 occur in every step of the same weight, thus 1 2 · 1 5 = 1 10 of the total weight goes to E1, and 1 2 · 1 5 = 1 10 of the total weight goes to E2. Then we have the diagram E2 1 10 E3 4 5 → {E3, E4} 4 5 → {E3, E4} →, 1 5 1 10 E1 E1 hence P{T = 1} = 1 5 , P{T = 2} = 4 5 · 1 10 , . . . , P{T = n} = 1 10 4 5 n−1 , which can also be written P{T = n} = 22n−3 5n = 1 8 4 5 n for n ≥ 2, together with P{T = 1} = 1 5 for n = 1.
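The statements in 3) and 4) can be watched directly by iterating the distribution. The sketch below is my own, not part of the text (it assumes numpy): starting in E3, it shows that p3^(n) + p4^(n) shrinks by the factor 4/5 in each step, while the full distribution approaches (3/7, 4/7, 0, 0).

```python
import numpy as np

P = np.array([
    [1/5, 4/5, 0,   0  ],
    [3/5, 2/5, 0,   0  ],
    [1/5, 0,   2/5, 2/5],
    [0,   1/5, 2/5, 2/5],
])

dist = np.array([0.0, 0.0, 1.0, 0.0])   # start in E3
for n in range(1, 61):
    dist = dist @ P
    if n <= 3:
        print(n, dist[2] + dist[3])     # (4/5)^n: 0.8, 0.64, 0.512
print(dist)                             # approx. (3/7, 4/7, 0, 0)
```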
  • 124. Download free ebooks at bookboon.com Stochastic Processes 1 124 3. Markov chains Example 3.49 A Markov chain of the states E1, E2, E3, E4 and E5 has the stochastic matrix P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 1 3 1 3 0 1 3 0 0 1 0 0 0 0 0 1 2 1 2 0 0 0 0 1 a 0 0 0 1 − a ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , where a is a constant in the interval [0, 1]. 1. Find the values of a, for which the Markov chain is irreducible. 2. Find the values of a, for which the Markov chain is regular. 3. Find for every a the invariant probability vector. The process is at time t = 0 in the state E1. Let T denote the random variable, which indicates the time when the process for the first time is in state E5. 4. Find P{T = k} for k = 1, 2, 3, 4, and then the mean of T. Assume that a 0 and that the process at time t = 0 is in state E1. Let U denote the random variable, which indicates the time when the process for the first time returns to E1. 5. Find the mean of U. 1) If a = 0, then E5 is absorbing, and the Markov chain is not irreducible. If a ∈ ]0, 1], then we have the transitions E5 −→ E1 −→ E2 −→ E3 −→ E4 −→ E5, and the Markov chain is irreducible for a ∈ ]0, 1]. 2) We have proved for a ∈ ]0, 1[ that the Markov chain is irreducible, and since the element of the diagonal p5,5 = 1 − a 0, we conclude that the Markov chain is regular for every a ∈ ]0, 1[. If a = 1, and we let denote elements 0, then P2 = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 0 0 0 0 0 0 0 0 0 0 0 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ . The Markov chain corresponding to P2 has the transitions E1 −→ E3 −→ E5 −→ E2 −→ E4 −→ E1, from which follows that it is irreducible. Furthermore, P2 has elements in its diagonal which are 0, hence it is also regular. This implies again that the original Markov chain is regular for a ∈ ]0, 1].
  • 125. Download free ebooks at bookboon.com Stochastic Processes 1 125 3. Markov chains 3) The equations of the invariant probability vector are g1 = a g5, thus g1 = a g5, g2 = 1 3 g1, thus g2 = 1 3 a g5 g3 = 1 3 g1 + g2, thus g3 = 2 3 a g5, g4 = 1 2 g3, thus g4 = 1 3 a g5, where the latter equation is used as a check: g5 = 1 3 g1 + 1 2 g3 + g4 + (1 − a)g5. We have furthermore the condition 1 = g1 + g2 + g3 + g4 + g5 = g5 a + 1 3 a + 2 3 a + 1 3 a + 1 = g5 7 3 a + 1 , hence g5 = 3 7a + 3 , and g = 1 7a + 3 (3a, a, 2a, a, 3). Stand out from the crowd Designed for graduates with less than one year of full-time postgraduate work experience, London Business School’s Masters in Management will expand your thinking and provide you with the foundations for a successful career in business. The programme is developed in consultation with recruiters to provide you with the key skills that top employers demand. Through 11 months of full-time study, you will gain the business knowledge and capabilities to increase your career choices and stand out from the crowd. Applications are now open for entry in September 2011. For more information visit www.london.edu/mim/ email mim@london.edu or call +44 (0)20 7000 7573 Masters in Management London Business School Regent’s Park London NW1 4SA United Kingdom Tel +44 (0)20 7000 7573 Email mim@london.edu www.london.edu/mim/ Fast-track your career Please click the advert
  • 126. Download free ebooks at bookboon.com Stochastic Processes 1 126 3. Markov chains 4) If to t = 0 we start in E1, then we have the tree E1 1 3 → E2 1 → E3 1 2 → E4 1 → E5 1 3 1 2 E1 E3 1 2 → E4 1 → E5 1 3 1 2 E5 E5 t = 1 t = 2 t = 3 t = 4 From this we infer that P{T = 1} = 1 3 , P{T = 2} = 1 3 · 1 2 = 1 6 , P{T = 3} = 1 3 · 1 2 · 1 + 1 3 · 1 · 1 2 = 1 3 , P{T = 4} = 1 3 · 1 · 1 2 = 1 6 . The mean of T is E{T} = 1 · 1 3 + 2 · 1 6 + 3 · 1 3 + 4 · 1 6 = 1 3 + 1 3 + 1 + 2 3 = 7 3 . 5) We first notice that we can only reach E1 via E5. If the process is in E5, then we have the probability a of in the next step to be in E1, and probability 1 − a of to remain in E5. This gives a geometric distribution Pas(1, a) of mean 1 a . Then finally, E{U} = E{T} + 1 a = 7 3 + 1 a .
Example 3.50 A Markov chain of states E1, E2, E3 and E4 has the stochastic matrix
P =
[ 0    1  0    0     ]
[ 1/2  0  1/4  1/4   ]
[ 0    0  0    1     ]
[ 0    a  0    1 − a ]
where a is a constant in the interval [0, 1].
1. Find the values of a, for which the Markov chain is irreducible.
2. Find the values of a, for which the Markov chain is regular.
3. Find for every a the invariant probability vector.
At t = 0 the process is in state E2. Let T denote the random variable which indicates the time when the process for the first time is in state E4.
4. Find the probabilities P{T = 1} and P{T = 2}.
5. Prove that for every k ∈ N0, P{T = 2k + 1} = P{T = 2k + 2}, and find these probabilities.
6. Find the mean of the random variable T.
1) By analyzing the stochastic matrix we obtain the diagram
E1 ←→ E2 −→ E3 −→ E4,
where E4 returns to E2 with probability a and stays in E4 with probability 1 − a.
If a = 0, then E4 is an absorbing state, and the Markov chain is not irreducible. If a > 0, then clearly the Markov chain is irreducible.
2) For a = 0 the Markov chain is not irreducible, and therefore not regular either. If a > 0, then p^(2)_{2,2} > 0 and p^(3)_{2,2} > 0, and the Markov chain is regular.
Alternatively one may prove that all elements of P^5 are > 0.
Alternatively, we have for 0 < a < 1 that p4,4 > 0, and only the case a = 1 needs to be investigated separately.
3) The equations g P = g of the invariant probability vector are written
(1/2) g2 = g1, thus g2 = 2 g1,
g1 + a g4 = g2, thus a g4 = g1,
(1/4) g2 = g3, thus g3 = (1/2) g1,
(1/4) g2 + g3 + (1 − a) g4 = g4.
If a = 0, then g1 = 0, hence g = (0, 0, 0, 1). If a ≠ 0, then g4 = (1/a) g1, hence
1 = Σ_{i=1}^4 gi = g1 (1 + 2 + 1/2 + 1/a) = ((7a + 2)/(2a)) g1,
from which
g1 = 2a/(7a + 2), and g = 1/(7a + 2) · (2a, 4a, a, 2).
4) The event {T = 1} can only occur by the transition E2 −→ E4, so P{T = 1} = 1/4. The event {T = 2} can only occur by the path E2 −→ E3 −→ E4, thus P{T = 2} = 1/4.
5) We can only reach E4 at time 2k + 1 by repeating E2 −→ E1 −→ E2 in total k times, followed by E2 −→ E4. Analogously for 2k + 2, with the modification that the final E2 −→ E4 is replaced by E2 −→ E3 −→ E4. It follows (cf. 4.) that
P{T = 2k + 1} = (1/4)(1/2)^k = P{T = 2k + 2}, k ∈ N.
When we compare with 4., we see that this is also true for k = 0.
6) Using the results of 5. it follows by straightforward computation that
E{T} = Σ_{n=1}^∞ n P{T = n} = Σ_{k=0}^∞ ((2k + 1) P{T = 2k + 1} + (2k + 2) P{T = 2k + 2})
= Σ_{k=0}^∞ ((2k + 1) + (2k + 2)) · (1/4)(1/2)^k = Σ_{k=0}^∞ (1/4)(4k + 3)(1/2)^k
= Σ_{k=0}^∞ ((k + 1) − 1/4)(1/2)^k = Σ_{ℓ=1}^∞ ℓ (1/2)^{ℓ−1} − (1/4) Σ_{k=0}^∞ (1/2)^k
= [d/dx Σ_{ℓ=0}^∞ x^ℓ]_{x=1/2} − (1/4) · 2 = [1/(1 − x)^2]_{x=1/2} − 1/2 = 4 − 1/2 = 7/2.
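The value E{T} = 7/2 can also be obtained with the machinery of Example 3.39: treat E4 as absorbing, so that the expected absorption times are N·1 with N = (I − Q)^{−1}. The sketch below is my own illustration (assuming numpy) for the transient states E1, E2, E3.

```python
import numpy as np

# Transient part of the chain of Example 3.50 (E1, E2, E3), with E4 absorbing.
Q = np.array([[0,   1,   0  ],
              [1/2, 0,   1/4],
              [0,   0,   0  ]])

N = np.linalg.inv(np.eye(3) - Q)
t = N @ np.ones(3)     # expected number of steps until absorption in E4
print(t)               # (4.5, 3.5, 1.0); starting in E2 gives E{T} = 7/2
```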
  • 130. Download free ebooks at bookboon.com Stochastic Processes 1 130 3. Markov chains Example 3.51 A Markov chain of states E1, E2, E3, E4 and E5 has the stochastic matrix P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ a 0 0 0 1 − a 1 0 0 0 0 1 2 1 2 0 0 0 0 0 1 0 0 0 1 2 0 1 2 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , where a is a constant in the interval [0, 1]. 1. Find the values of a, for which the Markov chain is irreducible. 2. Find the values of a, for which the Markov chain is regular. 3. Find for every a the invariant probability vector. At time t = 0 the process is in state E5. Let T denote the random variable, which indicates the time, when the process for the first time is in state E1 4. Find P{T = k} for k = 2, 3, 4, and then the mean and variance of T. Then assume that a 1, and that the process at t = 0 is in state E5. Let U denote the random variable, which indicates the time when the process for the first time returns to E5. 5. Find the mean of U. 1) If a = 1, then E1 is absorbing, and the Markov chain is not irreducible. If a ∈ [0, 1[, then we have the transitions E1 −→ E5 −→ E4 −→ E3 −→ E2 −→ E1, and the Markov chain is irreducible for a ∈ [0, 1[. 2) If a ∈ [0, 1[, then e.g. E1 −→ E5 −→ E4 −→ E3 −→ E2 −→ E1, 5 steps, and E1 −→ E5 −→ E4 −→ E3 −→ E1, 4 steps. The largest common divisor for 4 and 5 is 1, so the Markov chain is regular. Alternatively, let denote that pi,j 0, and let A denote that a is a factor (so A = 0 for a = 0, and A = for a ∈ ]0, 1[). Then successively, P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ A 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , P2 = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ A 0 A A 0 0 0 0 0 0 0 0 0 0 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ ,
  • 131. Download free ebooks at bookboon.com Stochastic Processes 1 131 3. Markov chains P4 = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ A A A A A A A 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , P8 = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ A ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ , and we see that all elements of e.g. P16 are 0, so the Markov chain is regular for a ∈ [0, 1[. 3) The equations of the invariant probability vector are p1 = a p1 + p2 + 1 2 p3, p2 = 1 2 p3 + 1 2 p5, p3 = p4, p4 = 1 2 p5, p5 = (1 − a)p1. Then we get, expressed by p1, p5 = (1 − a)p1, p3 = p4 = 1 2 p5 = 1 2 (1 − a)p1, p2 = 1 2 p3 + 1 2 p5 = 3 4 (1 − a)p1. thus 1 = p1 + p2 + p3 + p4 + p5 = p1 + 3 4 + 1 2 + 1 2 + 1 (1 − a)p1 = p1 1 + 11 4 (1 − a) , hence p1 = 4 15 − 11a , p2 = 3(1 − a) 15 − 11a , p3 = p4 = 2(1 − a) 15 − 11a , p5 = 4(1 − a) 15 − 11a , and the invariant probability vector is p = 1 15 − 11a (4, 3(1 − a), 2(1 − a), 2(1 − a), 4(1 − a)). 4) We have here the tree E5 1 2 −→ E2 1 −→ E1 E2 1 −→ E1, 1 2 1 2 E4 1 −→ E3 1 2 −→ E1
from which we conclude that
P{T = 2} = 1/2 · 1 = 1/2,
P{T = 3} = 1/2 · 1 · 1/2 = 1/4,
P{T = 4} = 1/2 · 1 · 1/2 · 1 = 1/4.
Notice that the sum is 1, so there is no other possibility. The mean is
E{T} = 2 · 1/2 + 3 · 1/4 + 4 · 1/4 = 11/4.
It follows from
E{T^2} = 4 · 1/2 + 9 · 1/4 + 16 · 1/4 = 33/4,
that the variance is
V{T} = 33/4 − (11/4)^2 = (132 − 121)/16 = 11/16.
5) We can only reach E5 via E1, and since
P{E1 −→ E5 in k steps} = P{E1 −→ E1 −→ · · · −→ E1 −→ E5} = a^{k−1}(1 − a),
we get
E{U} = E{T} + Σ_{k=1}^∞ k a^{k−1}(1 − a) = 11/4 + (1 − a) · 1/(1 − a)^2 = 11/4 + 1/(1 − a) = (15 − 11a)/(4(1 − a)) = 1/p5.

Example 3.52 Given a Markov chain of 5 states E1, E2, E3, E4 and E5 and transition probabilities
p1,1 = p2,2 = p3,3 = 2/3, p1,2 = p2,3 = 1/3, p3,4 = a/3, p3,5 = (1 − a)/3, p4,5 = p5,1 = 1, and pi,j = 0 otherwise.
Here a is a constant in the interval [0, 1].
1. Find the stochastic matrix P.
2. Find the values of a, for which the Markov chain is irreducible.
3. Find the values of a, for which the Markov chain is regular.
4. Find for every a the invariant probability vector.
We assume that the process at time t = 0 is in state E1. Let T denote the random variable which indicates the time when the process for the first time is in state E2.
5. Find the probabilities P{T = k}, k ∈ N, and then the mean E{T}.
Then assume instead that the process at time t = 0 is in state E3. Let U denote the random variable which indicates the time when the process for the first time is in state E5.
6. Find the mean E{U}.
1) If a ∈ [0, 1], then
P =
[ 2/3  1/3  0    0    0         ]
[ 0    2/3  1/3  0    0         ]
[ 0    0    2/3  a/3  (1 − a)/3 ]
[ 0    0    0    0    1         ]
[ 1    0    0    0    0         ]
2) We shall consider three cases: (a) a = 0, (b) a = 1, (c) 0 < a < 1.
  • 134. Download free ebooks at bookboon.com Stochastic Processes 1 134 3. Markov chains a) If a = 0, then we get the matrix P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 2 3 1 3 0 0 0 0 2 3 1 3 0 0 0 0 2 3 0 1 3 0 0 0 0 1 1 0 0 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ . We notice that the fourth column is the zero column, so we can never get to E4 from any Ei, hence the Markov chain is not irreducible for a = 0. b) If a = 1, then we get the matrix P = ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ 2 3 1 3 0 0 0 0 2 3 1 3 0 0 0 0 2 3 1 3 0 0 0 0 0 1 1 0 0 0 0 ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ . Then we have in particular the diagram E1 −→ E2 −→ E3 −→ E4 −→ E5 −→ E1, and the Markov chain is irreducible for a = 1. c) If 0 a 1, then we have in particular the diagram E1 −→ E2 −→ E3 −→ E4 −→ E5 −→ E1, and the Markov chain is irreducible for 0 a 1. Summing up, the Markov chain is irreducible for 0 a ≤ 1. 3) Since the Markov chain is irreducible for 0 a ≤ 1, and p1,1 = 2 3 0, it follows that the Markov chain is also regular for 0 a ≤ 1. 4) The equation of the invariant probability vector g is g P = g, which we expand as ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ p1 = 2 3 p1 + p5, p2 = 1 3 p1 + 2 3 p2, p3 = 1 3 p2 + 2 3 p3, p4 = a 3 p3, p5 = 1−a 3 p3 + p4, thus ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ p1 = 3p5, p2 = p1, p3 = p2, p4 = a 3 p3, p5 = 1−a 3 p3 + p4. When p1, . . . , p4 are expressed by p5, then p1 = p2 = p3 = 3p5 and p4 = a 3 p3 = a p5, thus g = p5 (3, 3, 3, a, 1)
  • 135. Download free ebooks at bookboon.com Stochastic Processes 1 135 3. Markov chains where we have the condition 1 = p1 + p2 + p3 + p4 + p5 = p5 (3 + 3 + 3 + a + 1) = (10 + a)p5, so p5 = 1 10 + a , and g = 1 10 + a (3, 3, 3, a, 1). 5) We start for t = 0 at E1, corresponding to the diagram E1 2 3 −→ E1 2 3 −→ E1 2 3 −→ E1 2 3 −→ E1 −→ · · · 1 3 1 3 1 3 1 3 E2 E2 E2 E2 · · · t = 0 t = 1 t = 2 t = 3 t = 4 · · · It follows that P{T = k} = 1 3 2 3 k−1 , k ∈ N, thus T is geometrically distributed, T ∈ Pas 1, 1 3 , p = 1 3 . The mean is E{T} = 1 p = 3. Alternatively, E{T} = ∞ k=1 k P{T = k} = 1 3 ∞ k=1 k 2 3 k−1 = 1 3 · 1 1 − 2 3 2 = 3. 6) Here we get the diagram E5 E4 1 −→ E5 E4 −→ 1−a 3 a 3 1−a 3 a 3 E3 2 3 −→ E3 2 3 −→ E3 2 3 −→ E3 2 3 −→ E3 −→ 2 3 1−a 3 a 3 1−a 3 E4 1 −→ E5 E4 1 −→ E5 A simple counting gives P{U = 1} = 1 − a 3 , P{U = 2} = P {E3 → E3} · P {E3 → E5} + P {E3 → E4} · P {E4 → E5} = 2 3 · 1 − a 3 + a 3 · 1 = 2 − 2a 9 + a 3 = 2 + a 9 .
  • 136. Download free ebooks at bookboon.com Stochastic Processes 1 136 3. Markov chains Now P{U = 3} is obtained by the paths E3 2 3 −→ E3 2 3 −→ E3 1 − a 3 −→ E5, probability 2 3 · 2 3 · 1 − a 3 = 2 3 · 2 − 2a 9 , E3 2 3 −→ E3 a 3 −→ E4 1 −→ E5, probability 2 3 · a 3 = 2 3 · 3a 9 , so P{U = 3} = 2 3 · 2 + a 9 = 2 3 1 · P{U = 2}. Then repeat the pattern P{U = 4} = P {E3 → E3} · P{U = 3} = 2 3 2 P{U = 2}, and in general P{U = k} = 2 3 k−2 P{U = 2} = 2 + a 9 2 3 k−2 for k ≥ 2. The mean is E{U} = ∞ k=1 k P{U = k} = 1 − a 3 · 1 + 2 + a 9 ∞ k=2 k 2 3 k−2 = 1 − a 3 + 2 + a 9 ∞ k=1 (k + 1) 2 3 k−1 = 1 − a 3 + 2 + a 9 ∞ k=1 2 3 k−1 + 2 + a 9 ∞ k=1 k 2 3 k+1 = 1 − a 3 + 2 + a 9 · 1 1 − 2 3 + 2 + a 9 · 1 1 − 2 3 2 = 1 − a 3 + 2 + a 3 + (2 + a) = 3 + a.
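As a final cross-check (my own illustration, not part of the text; it assumes sympy is available), the invariant probability vector of Example 3.52 can be computed symbolically as a function of a, confirming g = (3, 3, 3, a, 1)/(10 + a).

```python
import sympy as sp

a = sp.symbols('a', nonnegative=True)
P = sp.Matrix([
    [sp.Rational(2, 3), sp.Rational(1, 3), 0, 0, 0],
    [0, sp.Rational(2, 3), sp.Rational(1, 3), 0, 0],
    [0, 0, sp.Rational(2, 3), a/3, (1 - a)/3],
    [0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0],
])

g = sp.symbols('g1:6')                       # the unknowns g1, ..., g5
eqs = list(sp.Matrix([g]) * P - sp.Matrix([g])) + [sum(g) - 1]
sol = sp.solve(eqs, g, dict=True)[0]

print([sp.simplify(sol[s]) for s in g])      # (3, 3, 3, a, 1) / (a + 10)
```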
  • 137. Download free ebooks at bookboon.com Stochastic Processes 1 137 Index Index absorbing state, 13, 25 Arcus sinus law, 10 closed subset of states, 13 convergence in probability, 28 cycle, 22 discrete Arcus sinus distribution, 10 distribution function of a stochastic process, 4 double stochastic matrix, 22, 39 drunkard’s walk, 5 Ehrenfest’s model, 32 geometric distribution, 124, 133 initial distribution, 11 invariant probability vector, 11, 22, 23, 25, 26, 28, 30, 32, 36, 39 irreducible Markov chain, 12, 18–23, 32, 36, 39, 41, 43, 45, 47, 50, 53, 62, 65, 67, 70, 73, 75, 78, 80, 86, 88, 91, 93, 98, 103, 106, 108, 114, 116, 122, 125, 128, 131 irreducible stochastic matrix, 83, 120 limit matrix, 13 Markov chain, 10, 18 Markov chain of countably many states, 101 Markov process, 5 outcome, 5 periodic Markov chain, 14 probability of state, 11 probability vector, 11 random walk, 5, 14, 15 random walk of reflecting barriers, 14 random walk with absorbing barriers, 14 regular Markov chain, 12, 18–23, 36, 39, 43, 47, 50, 53, 56, 62, 65, 67, 70, 73, 75, 78, 80, 83, 86, 88, 91, 100, 101, 103, 106, 108, 114, 116, 122, 125, 128, 131 regular stochastic matrix, 26, 30, 120 ruin problem, 7 sample function, 4 state of a process, 4 stationary distribution, 11, 43, 50 stationary Markov chain, 10 stochastic limit matrix, 13 stochastic matrix, 10 stochastic process, 4 symmetric random walk, 5, 9 transition probability, 10, 11 vector of state, 11