Markov Chains
Description Sometimes we are interested in how a random variable changes over time. The study of how a random variable evolves over time is the subject of stochastic processes.
What is a Stochastic Process? Suppose we observe some characteristic of a system at discrete points in time. Let X_t be the value of the system characteristic at time t. In most situations, X_t is not known with certainty before time t and may be viewed as a random variable. A discrete-time stochastic process is simply a description of the relation between the random variables X_0, X_1, X_2, …
A continuous-time stochastic process is simply a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time. For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.
The Gambler’s Ruin Problem At time 0, I have Rs. 2. At times 1, 2, …, I play a game in which I bet Rs. 1; with probability p I win the game, and with probability 1 − p I lose it. My goal is to increase my capital to Rs. 4, and as soon as I do, the game is over. The game is also over if my capital is reduced to 0. Let X_t represent my capital position after the time-t game (if any) is played. Then X_0, X_1, X_2, … may be viewed as a discrete-time stochastic process.
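As a quick illustration (not part of the original slides), here is a minimal Python sketch that simulates one realization of this process; the win probability p = 0.5 and the seed are illustrative choices, since the slide leaves p as a parameter.

```python
# Simulate one sample path of the Gambler's Ruin process described above.
import random

def gamblers_ruin(start=2, target=4, p=0.5, rng=random.Random(42)):
    """Play Rs. 1 bets until capital hits 0 or the target; return the path X_0, X_1, ..."""
    path = [start]
    capital = start
    while 0 < capital < target:
        capital += 1 if rng.random() < p else -1
        path.append(capital)
    return path

# Each run is one realization of the discrete-time stochastic process X_0, X_1, ...
print(gamblers_ruin())   # e.g. [2, 1, 2, 3, 4]
```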
What is a Markov Chain? One special type of discrete-time stochastic process is called a Markov chain. Definition: A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states,

P(X_{t+1} = i_{t+1} | X_t = i_t, X_{t-1} = i_{t-1}, …, X_1 = i_1, X_0 = i_0) = P(X_{t+1} = i_{t+1} | X_t = i_t)

Essentially this says that the probability distribution of the state at time t+1 depends only on the state at time t (i_t) and does not depend on the states the chain passed through on the way to i_t at time t.
In our study of Markov chains, we make the further assumption that for all states i and j and all t, P(X_{t+1} = j | X_t = i) is independent of t. This assumption allows us to write P(X_{t+1} = j | X_t = i) = p_ij, where p_ij is the probability that, given the system is in state i at time t, it will be in state j at time t+1. If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred.
The p_ij's are often referred to as the transition probabilities for the Markov chain. This assumption implies that the probability law relating the next period's state to the current state does not change over time. It is often called the Stationary Assumption, and any Markov chain that satisfies it is called a stationary Markov chain. We must also define q_i to be the probability that the chain is in state i at time 0; in other words, P(X_0 = i) = q_i.
We call the vector q = [q_1, q_2, …, q_s] the initial probability distribution for the Markov chain. In most applications, the transition probabilities are displayed as an s × s transition probability matrix P. The transition probability matrix P may be written as

P = [ p_11  p_12  …  p_1s ]
    [ p_21  p_22  …  p_2s ]
    [   ⋮     ⋮          ⋮ ]
    [ p_s1  p_s2  …  p_ss ]
TPM: The Gambler’s Ruin Problem With states Rs. 0, Rs. 1, Rs. 2, Rs. 3, Rs. 4, the transition probability matrix is

          Rs. 0  Rs. 1  Rs. 2  Rs. 3  Rs. 4
Rs. 0 [     1      0      0      0      0  ]
Rs. 1 [   1−p      0      p      0      0  ]
Rs. 2 [     0    1−p      0      p      0  ]
Rs. 3 [     0      0    1−p      0      p  ]
Rs. 4 [     0      0      0      0      1  ]

States Rs. 0 and Rs. 4 are absorbing: once the capital reaches 0 or 4, the game is over.
For each i, Σ_j p_ij = 1. We also know that each entry in the P matrix must be nonnegative. Hence, all entries in the transition probability matrix are nonnegative, and the entries in each row must sum to 1.
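The Gambler’s Ruin matrix above can be written down directly in code. A minimal sketch, again assuming p = 0.5 for illustration, which also checks the two row properties just stated:

```python
# The Gambler's Ruin transition matrix for states Rs. 0..4: states 0 and 4
# are absorbing; otherwise the capital moves up w.p. p and down w.p. 1 - p.
import numpy as np

p = 0.5  # illustrative win probability
P = np.array([
    [1,     0,     0,     0,   0],   # Rs. 0: ruined (absorbing)
    [1 - p, 0,     p,     0,   0],   # Rs. 1
    [0,     1 - p, 0,     p,   0],   # Rs. 2
    [0,     0,     1 - p, 0,   p],   # Rs. 3
    [0,     0,     0,     0,   1],   # Rs. 4: goal reached (absorbing)
])

# Every entry is nonnegative and every row sums to 1:
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)
```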
Question A company has two machines. During any day, each machine that is working at the beginning of the day has a 1/3 chance of breaking down. If a machine breaks down during the day, it is sent to a repair facility and will be working two days after it breaks down. (Thus, if a machine breaks down during day 3, it will be working at the beginning of day 5.) Letting the state of the system be the number of machines working at the beginning of the day, formulate a transition probability matrix for this situation.
n-Step Transition Probabilities A question of interest when studying a Markov chain is: if a Markov chain is in state i at time m, what is the probability that n periods later the Markov chain will be in state j? This probability is independent of m, so we may write

P(X_{m+n} = j | X_m = i) = P(X_n = j | X_0 = i) = P_ij(n)

where P_ij(n) is called the n-step probability of a transition from state i to state j. For n > 1, P_ij(n) = the ij-th element of P^n.
The Cola Example Suppose the entire cola industry produces only two colas. Given that a person last purchased cola 1, there is a 90% chance that her next purchase will be cola 1. Given that a person last purchased cola 2, there is an 80% chance that her next purchase will be cola 2.
1. If a person is currently a cola 2 purchaser, what is the probability that she will purchase cola 1 two purchases from now?
2. If a person is currently a cola 1 purchaser, what is the probability that she will purchase cola 1 three purchases from now?
The Cola Example We view each person’s purchases as a Markov chain with the state at any given time being the type of cola the person last purchased. Hence, each person’s cola purchases may be represented by a two-state Markov chain, where State 1 = person has last purchased cola 1, and State 2 = person has last purchased cola 2. If we define X_n to be the type of cola purchased by a person on her n-th future cola purchase, then X_0, X_1, … may be described as the Markov chain with the following transition matrix:

P = [ .90  .10 ]
    [ .20  .80 ]
The Cola Example We can now answer questions 1 and 2. For question 1, we seek P(X_2 = 1 | X_0 = 2) = P_21(2) = element (2,1) of P^2:

P^2 = [ .90  .10 ] [ .90  .10 ] = [ .83  .17 ]
      [ .20  .80 ] [ .20  .80 ]   [ .34  .66 ]
The Cola Example Hence, P_21(2) = .34. This means that the probability is .34 that two purchases from now, a cola 2 drinker will purchase cola 1. (By using basic probability theory, we may obtain this answer in a different way.) For question 2, we seek P_11(3) = element (1,1) of P^3:

P^3 = P^2 P = [ .781  .219 ]
              [ .438  .562 ]

Therefore, P_11(3) = .781.
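Both answers are easy to check numerically. A minimal sketch using NumPy, with the transition matrix as defined in the slides:

```python
# Verify the Cola example: P21(2) and P11(3) are elements of P^2 and P^3.
import numpy as np

P = np.array([[0.9, 0.1],    # state 1: last purchased cola 1
              [0.2, 0.8]])   # state 2: last purchased cola 2

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

print(P2[1, 0])  # P21(2) = 0.34  (indices are 0-based in code)
print(P3[0, 0])  # P11(3) = 0.781
```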
Many times we do not know the state of the Markov chain at time 0. Then we can determine the probability that the system is in state j at time n by the following reasoning:

Probability of being in state j at time n = Σ_i q_i P_ij(n) = (q P^n)_j

where q = [q_1, q_2, …, q_s]. (This is an unconditional probability.)
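A short sketch of this computation for the cola chain. The initial distribution q = [0.6, 0.4] is an assumed illustrative split of purchasers, not a value from the slides:

```python
# Unconditional distribution at time n: q P^n.
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])
q = np.array([0.6, 0.4])          # assumed initial distribution

n = 2
dist_n = q @ np.linalg.matrix_power(P, n)   # P(X_n = j) for each state j
print(dist_n)                                # [0.634, 0.366]
```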
Limiting Probabilities To illustrate the behavior of the n-step transition probabilities for large values of n, we have computed several of the n-step transition probabilities for the cola example. Both rows of P^n approach [.67 .33]: for large n, no matter what the initial state, there is about a .67 chance that a person will be a cola 1 purchaser. We can easily multiply matrices on a spreadsheet using the MMULT command.
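The convergence is easy to see by printing successive powers of P; a small sketch that reproduces the .67 figure:

```python
# The n-step matrix for the cola chain settles down as n grows:
# every row approaches the same limiting vector, roughly [0.67, 0.33].
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])
for n in (5, 10, 20, 40):
    print(n, np.linalg.matrix_power(P, n).round(4))
# By n = 40 both rows are (to 4 decimals) [0.6667, 0.3333].
```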
Question Find the equilibrium market shares of two firms whose transition probability matrix is as follows (rows give the current brand, columns the next brand):

       A    B
  A   .7   .3
  B   .5   .5
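A sketch of one way to answer this, assuming the flattened table above reads row A = [.7, .3] and row B = [.5, .5]; if the intended table differs, swap in the correct rows:

```python
# Long-run market shares from the limiting behavior of P^n.
import numpy as np

P = np.array([[0.7, 0.3],   # row A
              [0.5, 0.5]])  # row B

shares = np.linalg.matrix_power(P, 50)[0]   # any row of P^n for large n
print(shares)   # approx [0.625, 0.375]: A holds 62.5%, B holds 37.5%
```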
Example Suppose we have a Markov transition matrix (states 1, 2, 3):

       1    2    3
  1    0   1/2  1/2
  2   1/2  1/2   0
  3    0    1    0
All states communicate with each other. Starting from 1, the chain can return to 1 in three steps via two possible routes: Route 1: 1 to 3 to 2 to 1, with probability .5 × 1 × .5 = 1/4. Route 2: 1 to 2 to 2 to 1, with probability .5 × .5 × .5 = 1/8. Hence the required probability is 1/4 + 1/8 = 3/8.
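The same answer falls out of the third matrix power; a quick check:

```python
# The three-step return probability from state 1 is element (1,1) of P^3.
import numpy as np

P = np.array([[0,   0.5, 0.5],   # state 1
              [0.5, 0.5, 0  ],   # state 2
              [0,   1,   0  ]])  # state 3

P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 0])   # 0.375 = 3/8
```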
Steady-State Probabilities Steady-state probabilities are used to describe the long-run behavior of a Markov chain. Theorem 1: Let P be the transition matrix for an s-state ergodic chain. Then there exists a vector π = [π_1 π_2 … π_s] such that

lim_{n→∞} P^n = [ π_1  π_2  …  π_s ]
                [ π_1  π_2  …  π_s ]
                [  ⋮    ⋮        ⋮ ]
                [ π_1  π_2  …  π_s ]
Theorem 1 tells us that for any initial state i, lim_{n→∞} P_ij(n) = π_j. The vector π = [π_1 π_2 … π_s] is often called the steady-state distribution, or equilibrium distribution, for the Markov chain.
An Example A supermarket stocks 3 brands of coffee, A, B, and C, and it has been observed that customers switch from brand to brand according to the following transition matrix:

       A    B    C
  A   3/4  1/4   0
  B    0   2/3  1/3
  C   1/4  1/4  1/2

In the long run, what fraction of the customers purchase the respective brands?
Solution Since the chain is ergodic (all states communicate, and each state is recurrent and aperiodic), the steady-state distribution exists. Solving π = πP gives

π_1 = (3/4)π_1 + (1/4)π_3
π_2 = (1/4)π_1 + (2/3)π_2 + (1/4)π_3
π_3 = (1/3)π_2 + (1/2)π_3

subject to π_1 + π_2 + π_3 = 1. Solving the equations gives π_1 = 2/7, π_2 = 3/7, π_3 = 2/7.
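The same solution can be obtained numerically by replacing one balance equation with the normalization condition; a sketch (the matrix is the one reconstructed from the balance equations above):

```python
# Solve pi = pi P for the coffee chain (rows A, B, C).
import numpy as np

P = np.array([[3/4, 1/4, 0  ],
              [0,   2/3, 1/3],
              [1/4, 1/4, 1/2]])

# Drop one redundant balance equation and append sum(pi) = 1.
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
b = np.array([0, 0, 1])
pi = np.linalg.solve(A, b)
print(pi)   # [2/7, 3/7, 2/7] ≈ [0.2857, 0.4286, 0.2857]
```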
Inventory Example A camera store stocks a particular model camera that can be ordered weekly. Let D_1, D_2, … represent the demand for this camera (the number of units that would be sold if the inventory is not depleted) during the first week, second week, …, respectively. It is assumed that the D_i's are independent and identically distributed random variables having a Poisson distribution with a mean of 1. Let X_0 represent the number of cameras on hand at the outset, X_1 the number of cameras on hand at the end of week 1, X_2 the number of cameras on hand at the end of week 2, and so on. Assume that X_0 = 3. On Saturday night the store places an order that is delivered in time for the next opening of the store on Monday. The store uses the following order policy: if there are no cameras in stock, 3 cameras are ordered; otherwise, no order is placed. Sales are lost when demand exceeds the inventory on hand.
Inventory Example X_t is the number of cameras in stock at the end of week t (as defined earlier), where X_t represents the state of the system at time t. Given that X_t = i, X_{t+1} depends only on D_{t+1} and X_t (Markovian property). D_t has a Poisson distribution with mean equal to one. This means that P(D_{t+1} = n) = e^{-1} 1^n / n! for n = 0, 1, …

P(D_t = 0) = e^{-1} = 0.368
P(D_t = 1) = e^{-1} = 0.368
P(D_t = 2) = (1/2)e^{-1} = 0.184
P(D_t ≥ 3) = 1 − P(D_t ≤ 2) = 1 − (.368 + .368 + .184) = 0.080

X_{t+1} = max(3 − D_{t+1}, 0) if X_t = 0, and X_{t+1} = max(X_t − D_{t+1}, 0) if X_t ≥ 1, for t = 0, 1, 2, ….
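These probabilities and the state-update rule translate directly into code; a small sketch using only the standard library:

```python
# Poisson(1) demand probabilities and the inventory update rule above.
import math

def p_demand(n):
    """P(D = n) for Poisson with mean 1: e^-1 / n!."""
    return math.exp(-1) / math.factorial(n)

print(round(p_demand(0), 3), round(p_demand(1), 3), round(p_demand(2), 3))
# 0.368 0.368 0.184; P(D >= 3) = 1 - sum of the above = 0.080

def next_state(x, d):
    """Stock at the end of next week, given stock x now and demand d."""
    return max(3 - d, 0) if x == 0 else max(x - d, 0)
```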
Inventory Example: (One-Step) Transition Matrix For state 0 (the order brings the stock up to 3 cameras):

P_03 = P(D_{t+1} = 0) = 0.368
P_02 = P(D_{t+1} = 1) = 0.368
P_01 = P(D_{t+1} = 2) = 0.184
P_00 = P(D_{t+1} ≥ 3) = 0.080
Inventory Example: Transition Diagram (states 0, 1, 2, 3)
Inventory Example: (One-Step) Transition Matrix Applying the same reasoning to every state gives

         0      1      2      3
  0 [ 0.080  0.184  0.368  0.368 ]
  1 [ 0.632  0.368    0      0   ]
  2 [ 0.264  0.368  0.368    0   ]
  3 [ 0.080  0.184  0.368  0.368 ]
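The full matrix can be rebuilt programmatically from the demand distribution and the order-up-to-3 policy; a sketch (the loop structure is one possible way to organize it):

```python
# Reconstruct the one-step matrix for states 0..3 from demand + policy.
import numpy as np

d = [0.368, 0.368, 0.184]            # P(D = 0), P(D = 1), P(D = 2)
tail = lambda k: 1 - sum(d[:k])      # P(D >= k), e.g. tail(3) = 0.080

P = np.zeros((4, 4))
for i in range(4):
    stock = 3 if i == 0 else i       # an order restocks state 0 up to 3
    for demand in range(stock):
        P[i, stock - demand] = d[demand]
    P[i, 0] = tail(stock)            # demand >= stock empties the shelf
print(P.round(3))
```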
Transition Matrix: Two-Step P^(2) = P P
Transition Matrix: Four-Step P^(4) = P^(2) P^(2)
Transition Matrix: Eight-Step P^(8) = P^(4) P^(4)
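A sketch of this successive squaring, starting from the one-step matrix built above:

```python
# Higher-step matrices for the inventory chain by repeated squaring.
import numpy as np

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0,     0    ],
              [0.264, 0.368, 0.368, 0    ],
              [0.080, 0.184, 0.368, 0.368]])

P2 = P @ P            # two-step
P4 = P2 @ P2          # four-step
P8 = P4 @ P4          # eight-step
print(P8.round(3))    # all four rows are nearly identical by n = 8
```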
Steady-State Probabilities The steady-state probabilities uniquely satisfy the following steady-state equations:

π_0 = π_0 p_00 + π_1 p_10 + π_2 p_20 + π_3 p_30
π_1 = π_0 p_01 + π_1 p_11 + π_2 p_21 + π_3 p_31
π_2 = π_0 p_02 + π_1 p_12 + π_2 p_22 + π_3 p_32
π_3 = π_0 p_03 + π_1 p_13 + π_2 p_23 + π_3 p_33
1 = π_0 + π_1 + π_2 + π_3
Steady-State Probabilities: Inventory Example

π_0 = .080π_0 + .632π_1 + .264π_2 + .080π_3
π_1 = .184π_0 + .368π_1 + .368π_2 + .184π_3
π_2 = .368π_0 + .368π_2 + .368π_3
π_3 = .368π_0 + .368π_3
1 = π_0 + π_1 + π_2 + π_3

Solving gives π_0 = .286, π_1 = .285, π_2 = .263, π_3 = .166. The numbers in each row of the matrix P^(8) match the corresponding steady-state probabilities.
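A sketch that solves this 4-state system the same way as the coffee example:

```python
# Steady-state probabilities for the inventory chain.
import numpy as np

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0,     0    ],
              [0.264, 0.368, 0.368, 0    ],
              [0.080, 0.184, 0.368, 0.368]])

# Replace one redundant balance equation with the normalization sum(pi) = 1.
A = np.vstack([(P.T - np.eye(4))[:-1], np.ones(4)])
pi = np.linalg.solve(A, np.array([0, 0, 0, 1]))
print(pi.round(3))   # [0.286, 0.285, 0.263, 0.166]
```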
Mean First Passage Times For an ergodic chain, let m_ij = expected number of transitions before we first reach state j, given that we are currently in state i; m_ij is called the mean first passage time from state i to state j. To compute it, assume we are currently in state i. Then with probability p_ij, it will take one transition to go from state i to state j. For k ≠ j, we instead go with probability p_ik to state k; in this case, it will take an average of 1 + m_kj transitions to go from i to j.
This reasoning implies

m_ij = 1 + Σ_{k ≠ j} p_ik m_kj

By solving these linear equations, we find all the mean first passage times. It can be shown that

m_ii = 1 / π_i
For the cola example, π_1 = 2/3 and π_2 = 1/3. Hence, m_11 = 1/(2/3) = 1.5 and m_22 = 1/(1/3) = 3.

m_12 = 1 + p_11 m_12 = 1 + .9 m_12
m_21 = 1 + p_22 m_21 = 1 + .8 m_21

Solving these two equations yields m_12 = 10 and m_21 = 5.
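For a two-state chain each of these equations involves a single unknown, so they can be solved in closed form; a sketch that reproduces all four numbers:

```python
# Mean first passage times for the cola chain: off-diagonal m_ij solve
# m_ij = 1 + sum over k != j of p_ik m_kj; diagonals are m_ii = 1 / pi_i.
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([2/3, 1/3])

m12 = 1 / (1 - P[0, 0])   # m12 = 1 + p11 m12  =>  m12 = 10
m21 = 1 / (1 - P[1, 1])   # m21 = 1 + p22 m21  =>  m21 = 5
m11, m22 = 1 / pi         # 1.5 and 3
print(m12, m21, m11, m22)
```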