CS-438
COMPUTER SYSTEMS MODELING
Spring Semester 2024
Batch: 2020
(LECTURE # 17-19)
FAKHRA AFTAB
LECTURER
DEPARTMENT OF COMPUTER & INFORMATION SYSTEMS ENGINEERING
NED UNIVERSITY OF ENGINEERING & TECHNOLOGY
MARKOV CHAINS
Chapter # 4
Prepared by: Ms. Fakhra Aftab (Lecturer, CISD, NEDUET)
STOCHASTIC PROCESSES
Processes that evolve over time in a probabilistic manner.
Mathematically, a stochastic process is defined to be an indexed
collection of random variables {Xt}, where the index t runs through a
given set T.
▪ Often T is taken to be the set of non-negative integers, and Xt
represents a measurable characteristic of interest at time t.
▪ e.g., Xt might represent the inventory level of a particular product at
the end of week t.
▪ Stochastic processes are of interest for describing the behavior of a
system operating over some period of time.
▪ State: at any time, the process occupies one of M + 1 mutually exclusive categories or states. For notational convenience, states are labeled 0, 1, …, M.
STRUCTURE OF STOCHASTIC PROCESSES
• Let {Xt, t = 0, 1, 2, . . . , } be a stochastic process that takes on a
finite or countable number of possible values.
• This set of possible values of the process will be denoted by
the set of nonnegative integers {0, 1, 2, . . .}
• If Xt = i, then the process is said to be in state i at time t.
• We suppose that whenever the process is in state i, there is a
fixed probability Pij that it will next be in state j.
P{Xt+1 = j | Xt = i, Xt−1 = it−1, . . . , X1 = i1, X0 = i0} = Pij
for all states i0, i1, . . . , it−1, i, j and all t ≥ 0
STOCHASTIC PROCESSES - Example
▪ The weather in the town of Centerville can change rather quickly from
day to day.
▪ However, the chances of being dry (no rain) tomorrow are somewhat
larger if it is dry today than if it rains today.
▪ In particular, the probability of
o being dry tomorrow is 0.8 if it is dry today,
o but is only 0.6 if it rains today.
▪ These probabilities do not change if information about the weather
before today is also taken into account.
▪ The evolution of the weather from day to day in Centerville is a
stochastic process.
▪ Starting on some initial day (labeled as day 0), the weather is observed
on each day t, for t = 0, 1, 2, ….
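The day-to-day evolution described above is easy to simulate. The sketch below is ours, not from the slides (the helper name `simulate` and the seed are illustrative); it encodes state 0 = dry, state 1 = rain with the probabilities given above:

```python
import random

# Transition probabilities from the example:
# P(dry tomorrow | dry today) = 0.8, P(dry tomorrow | rain today) = 0.6
P = {0: {0: 0.8, 1: 0.2},   # state 0 = dry
     1: {0: 0.6, 1: 0.4}}   # state 1 = rain

def simulate(n_days, start=0, seed=42):
    """Simulate the weather chain for n_days, returning the list of daily states."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n_days):
        cur = states[-1]
        # Tomorrow is dry (state 0) with probability P[cur][0], otherwise rainy
        states.append(0 if rng.random() < P[cur][0] else 1)
    return states

path = simulate(100_000)
frac_dry = path.count(0) / len(path)
print(f"fraction of dry days ~ {frac_dry:.2f}")
```

Over a long run the dry fraction settles near 0.6 / (0.2 + 0.6) = 0.75, a preview of the steady-state probabilities treated later in these lectures.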
▪ The state of the system on day t can be either dry (no rain) or rainy.
▪ Thus, for t = 0, 1, 2, …, the random variable Xt takes on the value 0 if day t is dry and the value 1 if day t has rain.
MARKOV CHAINS
Markov chain: A stochastic process {Xt} (t = 0, 1, …) with the Markovian property.
The Markovian property says that the conditional probability of any future “event,” given any past “events” and the present state Xt = i, is independent of the past events and depends only upon the present state.
The conditional probabilities P{Xt+1 = j|Xt = i} for a Markov chain are called
one-step transition probabilities.
▪ If, for each i and j,
P{Xt+1 = j | Xt = i} = P{X1 = j | X0 = i} for all t = 0, 1, 2, …,
o then the (one-step) transition probabilities are said to be stationary,
o which implies that the transition probabilities do not change over time.
MARKOV CHAINS (Cont’d)
• The existence of stationary (one-step) transition probabilities also implies that, for each i, j, and n (n = 0, 1, 2, …),
P{Xt+n = j | Xt = i} = P{Xn = j | X0 = i}
for all t = 0, 1, ….
▪ These conditional probabilities are called n-step transition probabilities.
▪ To simplify notation with stationary transition probabilities, let
o pij = P{Xt+1 = j | Xt = i}
o pij(n) = P{Xt+n = j | Xt = i}
▪ Thus, the n-step transition probability pij(n) is just the conditional probability that the system will be in state j after exactly n steps (time units), given that it starts in state i at any time t.
▪ When n = 1, note that pij(1) = pij.
MARKOV CHAINS (Cont’d)
▪ Because the pij(n) are conditional probabilities, they must be nonnegative, and since the process must make a transition into some state, they must satisfy the properties:
pij(n) ≥ 0, for all i and j; n = 0, 1, 2, …,
and
Σj=0…M pij(n) = 1, for all i; n = 0, 1, 2, ….
▪ A convenient way of showing all the n-step transition probabilities is the n-step
transition matrix.
MARKOV CHAINS (Cont’d)
Structure of Transition Matrix
• Note that the transition probability in a
particular row and column is for the
transition from the row state to the column
state.
▪ When n = 1, we drop the superscript n and
simply refer to this as the transition matrix.
▪ The Markov chains to be considered have the following properties:
1. A finite number of states.
2. Stationary transition probabilities.
▪ We also will assume that we know the initial
probabilities P{X0 = i} for all i.
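Because each row of a transition matrix is itself a probability distribution (nonnegative entries summing to 1), a quick programmatic sanity check can catch formulation mistakes. A minimal sketch; the helper name `is_stochastic` is our own, not from the text:

```python
def is_stochastic(P, tol=1e-9):
    """Return True if P is a valid (row-)stochastic transition matrix:
    every entry nonnegative and every row summing to 1."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

# The weather example's transition matrix (state 0 = dry, state 1 = rain)
P = [[0.8, 0.2],
     [0.6, 0.4]]
print(is_stochastic(P))                          # True
print(is_stochastic([[0.5, 0.6], [0.2, 0.8]]))   # False: first row sums to 1.1
```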
Example 1 (Forecasting the weather)
Suppose that the chance of rain tomorrow depends on previous
weather conditions only through whether or not it is raining today and
not on past weather conditions. Suppose also that if it rains today, then
it will rain tomorrow with probability α; and if it does not rain today,
then it will rain tomorrow with probability β.
If we say that the process is in state 0 when it rains and state 1 when it
does not rain, then the preceding is a two-state Markov chain whose
transition probabilities are given by the matrix
P =
| α 1−α |
| β 1−β |
Example 2 (Gary’s Mood)
On any given day Gary is either cheerful (C), so-so (S), or glum (G). If he is
cheerful today, then he will be C, S, or G tomorrow with respective
probabilities 0.5, 0.4, 0.1. If he is feeling so-so today, then he will be C, S,
or G tomorrow with probabilities 0.3, 0.4, 0.3. If he is glum today, then he
will be C, S, or G tomorrow with probabilities 0.2, 0.3, 0.5.
Letting Xn denote Gary’s mood on the nth day, then {Xn, n ≥ 0} is a three-state Markov chain (state 0 = C, state 1 = S, state 2 = G) with transition probability matrix:
P =
| 0.5 0.4 0.1 |
| 0.3 0.4 0.3 |
| 0.2 0.3 0.5 |
Formulating the Weather Example as a Markov Chain
▪ Recall that the evolution of the weather in Centerville from day to day has been formulated as a stochastic process {Xt} (t = 0, 1, 2, …) where
Xt = 0 if day t is dry, and Xt = 1 if day t has rain.
▪ Furthermore, because these probabilities do not change if information about the weather before today (day t) is also taken into account,
P{Xt+1 = 0 | X0 = k0, X1 = k1, …, Xt−1 = kt−1, Xt = 0} = P{Xt+1 = 0 | Xt = 0} = 0.8
P{Xt+1 = 0 | X0 = k0, X1 = k1, …, Xt−1 = kt−1, Xt = 1} = P{Xt+1 = 0 | Xt = 1} = 0.6
for t = 0, 1, ... and every sequence k0, k1, ... , kt−1.
▪ These equations also must hold if Xt+1 = 0 is replaced by Xt+1 = 1.
Weather Example as a Markov Chain
▪ (The reason is that states 0 and 1 are mutually exclusive and the only possible
states, so the probabilities of the two states must sum to 1.)
▪ Therefore, the stochastic process has the Markovian property, so the process is a
Markov chain.
▪ Using the notation introduced in this section, the (one-step) transition probabilities are
p00 = P{Xt+1 = 0 | Xt = 0} = 0.8 and p10 = P{Xt+1 = 0 | Xt = 1} = 0.6.
▪ Furthermore, p01 = 1 − p00 = 0.2 and p11 = 1 − p10 = 0.4, so the transition matrix is
P =
| 0.8 0.2 |
| 0.6 0.4 |
▪ where these transition probabilities are for the transition from the row state to the column state.
Weather Example as a Markov Chain
▪ Keep in mind that state 0 means that the day is dry, whereas state 1 signifies that
the day has rain, so these transition probabilities give the probability of the state
the weather will be in tomorrow, given the state of the weather today.
▪ The state transition diagram in Fig 1 graphically depicts the same information
provided by the transition matrix.
Fig 1: The state transition diagram for the weather example.
Matrix Revision
The n-step Transition Probabilities
• Let {X0,X1,X2, . . .} be a Markov chain with state space S = {1, 2, . . . ,N}.
• Recall that the elements of the transition matrix P are defined as:
(P)ij = pij = P(X1 = j |X0 = i) = P(Xt+1 = j |Xt = i) for any t
• pij is the probability of making a transition FROM state i TO state j in a
SINGLE step.
Question: what is the probability of making a transition from state i to
state j over two steps? i.e. what is P(X2 = j |X0 = i)?
The Partition Theorem
If B1, …, BN form a partition of the sample space, then for any event A,
P(A) = Σi P(A | Bi) P(Bi).
DERIVATION OF 2-STEP TRANSITION PROBABILITY
Partitioning on the intermediate state X1 = k:
P(X2 = j | X0 = i) = Σk P(X2 = j | X1 = k, X0 = i) P(X1 = k | X0 = i) = Σk pik pkj = (P2)ij
The two-step transition probabilities are therefore given by the matrix P2:
P(X2 = j | X0 = i) = P(Xt+2 = j | Xt = i) = (P2)ij for any t.
This is an instance of the Chapman–Kolmogorov equations, which provide a
method for computing these n-step transition probabilities.
Can you now derive the formula for the 3-step transition probabilities?
General Case: n-Step Transition Matrix
• The above working extends to show that the n-step transition probabilities
are given by the matrix Pn for any n:
P(Xn = j | X0 = i) = P(Xt+n = j | Xt = i) = (Pn)ij for any t
• Thus, the n-step transition probability matrix Pn can be obtained by
computing the nth power of the one-step transition matrix P.
• A Markov chain is said to be homogeneous if all transition probabilities are
independent of t.
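This result can be checked numerically with plain Python (no libraries); `mat_mul` and `mat_pow` below are illustrative helpers, applied here to the Centerville weather matrix from earlier:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-th power of P, i.e. the n-step transition matrix P^n."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]  # identity
    for _ in range(n):
        result = mat_mul(result, P)
    return result

P = [[0.8, 0.2],   # weather example: state 0 = dry, state 1 = rain
     [0.6, 0.4]]
P2 = mat_pow(P, 2)
print([[round(x, 4) for x in row] for row in P2])  # [[0.76, 0.24], [0.72, 0.28]]
```

For instance, (P2)01 = 0.24 is the probability of rain two days after a dry day.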
Example Problem
In some town, each day is either sunny or rainy. A sunny day is followed by
another sunny day with probability 0.7, whereas a rainy day is followed by a
sunny day with probability 0.4.
It rains on Monday. Make a forecast for Tuesday, Wednesday & Thursday.
Solution:
A two-state homogeneous Markov chain is given by the following state transition
diagram, with state S for a sunny day and R for a rainy day:
Forecast for Tuesday: pRR = 1 – 0.4 = 0.6
pRS = 1 – 0.6 = 0.4
[State transition diagram: S ⇄ R with pSS = 0.7, pSR = 0.3, pRS = 0.4, pRR = 0.6]
Forecast for Wednesday:
1st Method: Calculating the 2-step transition probability:
p(2)RS = P[X2 = S, X1 = S | X0 = R] + P[X2 = S, X1 = R | X0 = R]
= pRS pSS + pRR pRS
= 0.4 × 0.7 + 0.6 × 0.4
= 0.52
p(2)RR = 1 – 0.52 = 0.48
2nd Method: Calculating the 2-step transition probability matrix (rows and columns ordered S, R):
P(2) = | 0.7 0.3 | × | 0.7 0.3 | = | 0.61 0.39 |
       | 0.4 0.6 |   | 0.4 0.6 |   | 0.52 0.48 |
Forecast for Thursday: p(3)RS & p(3)RR?
Answers:
p(3)RS = 0.556
p(3)RR = 0.444
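These answers can be verified by squaring and cubing the transition matrix. The short check below is ours, not part of the slides, with states ordered (S, R):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.7, 0.3],   # states ordered (S, R)
     [0.4, 0.6]]
P2 = mat_mul(P, P)    # two-step matrix: Wednesday, given Monday's weather
P3 = mat_mul(P2, P)   # three-step matrix: Thursday

# Row R (index 1) holds the forecasts given that it rains on Monday
print(round(P2[1][0], 3), round(P2[1][1], 3))   # 0.52 0.48
print(round(P3[1][0], 3), round(P3[1][1], 3))   # 0.556 0.444
```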
Task
A computer system operates in one of two modes: restricted and
unrestricted. The mode is observed every hour. The probability that it
will remain in the same mode during the next hour is 0.3.
(a) Modeling it as a Markov chain, draw its state transition diagram.
(b) Write down the transition probability matrix.
(c) Compute the 3-step transition probability matrix.
(d) If the system is operating in restricted mode at 3pm, what is the
probability that it will be in the unrestricted mode at 6pm on the same
day?
Distribution of Xt
• Let {X0, X1, X2, . . .} be a Markov chain with state space S = {1, 2, . . . , N}. Now
each Xt is a random variable, so it has a probability distribution.
• We can write the probability distribution of Xt as an N × 1 vector. For
example, consider X0. Let 𝜋 be an N × 1 vector denoting the probability
distribution of X0, with 𝜋i = P(X0 = i).
• We will write X0 ~ 𝜋T to denote that the row vector of probabilities is given
by the row vector 𝜋T.
Probability distribution of X1
Use the Partition Rule, conditioning on X0:
P(X1 = j) = Σi P(X1 = j | X0 = i) P(X0 = i) = Σi 𝜋i pij = (𝜋TP)j
so X0 ~ 𝜋T implies X1 ~ 𝜋TP.
• Let {X0, X1, X2, . . .} be a Markov chain with N × N transition matrix P.
• If the probability distribution of X0 is given by the 1 × N row vector 𝜋T,
then the probability distribution of Xt is given by the 1 × N row vector
𝜋TPt. That is,
X0 ~ 𝜋T ⟹ Xt ~ 𝜋TPt
Note:
• The distribution of Xt: Xt ~ 𝜋TPt
• The distribution of Xt+1: Xt+1 ~ 𝜋TPt+1
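A sketch of propagating a distribution this way (the helper names `vec_mat` and `distribution_at` are illustrative, not from the text), using the sunny/rainy chain from the earlier example with a rainy Monday as the initial condition:

```python
def vec_mat(pi, P):
    """Row vector times matrix: the distribution after one more step."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

def distribution_at(pi0, P, t):
    """Distribution of X_t given X0 ~ pi0, i.e. pi0 P^t."""
    pi = list(pi0)
    for _ in range(t):
        pi = vec_mat(pi, P)
    return pi

P = [[0.7, 0.3],    # states ordered (S, R)
     [0.4, 0.6]]
pi0 = [0.0, 1.0]    # it rains on Monday: all mass on state R
wed = distribution_at(pi0, P, 2)
print([round(p, 3) for p in wed])   # [0.52, 0.48]
```

This reproduces the Wednesday forecast obtained earlier from the second row of P2.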
Example Problem
Purpose-flea zooms around the vertices of the given transition
diagram. Let Xt be Purpose-flea’s state at time t (t = 0, 1, . . .).
a) Find the transition matrix, P.
P =
| 0.6 0.2 0.2 |
| 0.4 0 0.6 |
| 0 0.8 0.2 |
b) Find P(X2 = 3 |X0 = 1)
P(X2 = 3 | X0 = 1) = (P2)13
P2 = P × P =
| 0.44 0.28 0.28 |
| 0.24 0.56 0.20 |
| 0.32 0.16 0.52 |
(P2)13 = 0.28
c) Suppose that Purpose-flea is equally likely to start on any vertex at
time 0. Find the probability distribution of X1.
From this info, the distribution of X0 is 𝜋T = (1/3, 1/3, 1/3).
X1 ~ 𝜋TP = (1/3, 1/3, 1/3) × P = (1/3, 1/3, 1/3)
Therefore X1 is also equally likely to be in state 1, 2, or 3.
d) Suppose that Purpose-flea begins at vertex 1 at time 0. Find the
probability distribution of X2.
The distribution of X0 is now 𝜋T = (1, 0, 0).
X2 ~ 𝜋TP2 = (1, 0, 0) × P × P = (0.44, 0.28, 0.28)
P(X2 = 1) = 0.44, P(X2 = 2) = 0.28, P(X2 = 3) = 0.28
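Answers (b)–(d) for Purpose-flea can be reproduced mechanically; the helpers below are our own sketch, not from the slides:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def vec_mat(pi, P):
    """Row vector times matrix: one step of the chain's distribution."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.6, 0.2, 0.2],
     [0.4, 0.0, 0.6],
     [0.0, 0.8, 0.2]]

# (b) two-step probability from state 1 to state 3
P2 = mat_mul(P, P)
print(round(P2[0][2], 2))             # 0.28

# (c) a uniform start stays uniform (each column of P happens to sum to 1)
pi1 = vec_mat([1/3, 1/3, 1/3], P)
print([round(p, 4) for p in pi1])     # [0.3333, 0.3333, 0.3333]

# (d) start at vertex 1: the distribution of X2 is the first row of P^2
pi2 = vec_mat(vec_mat([1.0, 0.0, 0.0], P), P)
print([round(p, 2) for p in pi2])     # [0.44, 0.28, 0.28]
```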
STEADY-STATE PROBABILITIES
▪ The long-run behavior of finite-state Markov chains, as reflected by the steady-state probabilities, shows that:
o there is a limiting probability that the system will be in each state j after a large
number of transitions, and
o that this probability is independent of the initial state.
• These properties are summarized as follows: for such chains, lim n→∞ pij(n) = 𝜋j > 0 exists and is independent of i, where the 𝜋j uniquely satisfy the steady-state equations 𝜋j = Σi 𝜋i pij and Σj 𝜋j = 1.
• The 𝜋j are called the steady-state probabilities of the Markov chain.
STEADY-STATE PROBABILITIES
▪ The term steady-state probability means that the probability of
finding the process in a certain state, say j, after a large number of
transitions tends to the value 𝜋j, independent of the probability
distribution of the initial state.
▪ It is important to note that the steady-state probability does not
imply that the process settles down into one state.
▪ On the contrary, the process continues to make transitions from
state to state, and at any step n the transition probability from state i
to state j is still pij.
▪ The j can also be interpreted as stationary probabilities.
Example
Consider the weather forecast
model. Determine its steady-
state probabilities:
P =
| 0.7 0.3 |
| 0.4 0.6 |
𝜋 = 𝜋P
(𝜋S 𝜋R) = (𝜋S 𝜋R) | 0.7 0.3 |
                   | 0.4 0.6 |
⟹ 0.7𝜋S + 0.4𝜋R = 𝜋S ⟹ 0.3𝜋S = 0.4𝜋R
   0.3𝜋S + 0.6𝜋R = 𝜋R ⟹ 𝜋S = (4/3)𝜋R
𝜋S + 𝜋R = 1
⟹ 𝜋R = 3/7 & 𝜋S = 4/7
⟹ In the long history of this city, 43% of days are rainy and 57% of days are sunny!
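The same answer can be reached numerically by power iteration, applying P repeatedly until the distribution stops changing. `steady_state` below is an illustrative helper of ours; this simple scheme converges for regular finite chains like this one:

```python
def vec_mat(pi, P):
    """Row vector times matrix: one step of the chain's distribution."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

def steady_state(P, iters=200):
    """Approximate the steady-state distribution by power iteration:
    start from a uniform distribution and apply P many times."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = vec_mat(pi, P)
    return pi

P = [[0.7, 0.3],    # states ordered (S, R)
     [0.4, 0.6]]
pi = steady_state(P)
print(round(pi[0], 4), round(pi[1], 4))   # 0.5714 0.4286, i.e. 4/7 and 3/7
```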
Task
A Markov chain has the following transition probability matrix:
a) Fill in the blanks.
b) Compute the steady-state probabilities.
Answers: