International Journal of Technical Research and Applications e-ISSN: 2320-8163,
www.ijtra.com Volume 3, Issue 2 (Mar-Apr 2015), PP. 123-125
123 | P a g e
A STUDY ON MARKOV CHAIN WITH
TRANSITION DIAGRAM
M.Reni Sagayaraj1, A.Michael Raj2, G.Sathiyavani3
1,2,3
Sacred Heart College (Autonomous), Tirupattur,
Vellore, Tamilnadu, India.
reni.sagaya@gmail.com
Abstract—Stochastic processes have many useful applications and are taught in several university programmes. In this paper we use the concept of a Markov chain, in which a transition matrix is used to plot a transition diagram, and we give several examples that explain various types of transition diagram. The concepts behind this topic are simple and easy to understand.
Keywords— Transition Diagram, Transition Matrix, Markov
Chain, First Passage Time, Persistent State, Transient State,
Periodic State, Inter-Communicating States.
I. INTRODUCTION
Stochastic processes are important for modelling many natural and social phenomena and have useful applications in computer science, physics, biology, economics and finance. There are many textbooks on stochastic processes, from introductory to advanced ones [2], [4]. Discrete Markov processes are the simplest and most important class of stochastic processes.
There are only a few publications on teaching stochastic processes and Markov chains, such as [1] and [5]. "More research needs to be carried out on how to teach stochastic processes to researchers" [5]. "While trying to understand Markov chains models, students usually encounter many obstacles and difficulties" [6]. Many lecturers use visual displays such as sample paths and transition diagrams to illustrate Markov chains. In this article we utilise transition diagrams further for teaching several important concepts of Markov chains. We explain in detail how these concepts can be defined in terms of transition diagrams (treated as directed weighted graphs) and we accompany this with worked examples. Transition diagrams provide a good technique for solving some problems about Markov chains, especially for students with a weak mathematical background.
II. TRANSITION DIAGRAM OF A MARKOV CHAIN:
DEFINITIONS
A homogeneous finite Markov chain is entirely defined
by its initial state distribution and its transition matrix S =
[pij], where pij = P(X1 = i|X0 = j) is the transition probability
from state j to state i.
The graphical representation of a Markov chain is a transition diagram, which is equivalent to its transition matrix.
The transition diagram of a Markov chain X is a weighted directed graph, where each vertex represents a state of the Markov chain and there is a directed edge from vertex j to vertex i if the transition probability pij > 0; this edge has the weight/probability pij.
Fig. 1. The transition diagram of the Markov chain in Example 1.
Example 1
A Markov chain has states 1, 2, 3, 4, 5, 6 and the following
transition matrix:

    S = [ 0     0     0     0     0     0.5
          0.85  0     0     0.1   0     0
          0     0.3   0.9   0     0     0
          0     0.2   0.3   0     0     0
          0.6   0.1   0     0.7   0     0
          0     0     0     0     0.8   0   ]

Its transition diagram is shown in Fig. 1.
In the diagram in Fig. 1 the probability of each edge is shown next to it. For example, the loop from state 3 to state 3 has probability 0.9 = p33 = P(X1 = 3 | X0 = 3) and the edge from state 2 to state 3 has probability 0.3 = p32 = P(X1 = 3 | X0 = 2).
In graph terminology, an edge sequence of length n is an ordered sequence of edges e1, e2, …, en, where ei and ei+1 are adjacent edges for all i = 1, 2, …, n−1.
A path is an edge sequence where all edges are distinct. A simple path is a path where all vertices are distinct (except possibly the start and end vertices). A cycle is a simple path where the start vertex and the end vertex are the same.
In a transition diagram the probability of an edge sequence equals the product of the probabilities of its edges.
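These definitions can be sketched in code. The following is an illustrative sketch (ours, not part of the paper): it stores the matrix of Example 1 using the paper's convention that the entry in row i, column j is pij (the probability of the edge from state j to state i), lists the edges of the transition diagram, and multiplies edge probabilities along an edge sequence. The function names are our own.

```python
# Transition matrix of Example 1; entry S[i-1][j-1] holds pij,
# the probability of the edge from state j to state i.
S = [
    [0,    0,   0,   0,   0,   0.5],
    [0.85, 0,   0,   0.1, 0,   0  ],
    [0,    0.3, 0.9, 0,   0,   0  ],
    [0,    0.2, 0.3, 0,   0,   0  ],
    [0.6,  0.1, 0,   0.7, 0,   0  ],
    [0,    0,   0,   0,   0.8, 0  ],
]

def edges(S):
    """All weighted directed edges (j, i, pij) of the transition diagram."""
    n = len(S)
    return [(j + 1, i + 1, S[i][j])
            for i in range(n) for j in range(n) if S[i][j] > 0]

def sequence_probability(S, states):
    """Probability of an edge sequence, given as the list of visited states:
    the product of the probabilities of its edges."""
    p = 1.0
    for j, i in zip(states, states[1:]):
        p *= S[i - 1][j - 1]
    return p

print(edges(S))                               # e.g. (5, 6, 0.8) is the edge 5 -> 6
print(sequence_probability(S, [2, 3, 3, 3]))  # 0.3 * 0.9 * 0.9 = 0.243
```

Each vertex of the diagram is a state, and `edges` simply reads the positive entries of the matrix, which is exactly the equivalence between the transition matrix and the transition diagram described above.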
III. PROPERTIES OF A MARKOV CHAIN IN TERMS OF
TRANSITION DIAGRAMS
A. N-Step Transition Probability:
An n-step transition probability is:
pij^(n) = P(Xn = i | X0 = j).
It equals the probability of getting from state j to state i in exactly n steps. It can be calculated as the corresponding element of the matrix power S^n, but it is usually easier to find it from the transition diagram as a sum of the probabilities of all edge sequences of length n from j to i.
Example 2
In the chain from Example 1, the 3-step transition probability from 2 to 3 equals:
p32^(3) = a1 + a2
where:
a1 = the probability of the path 2423 = 0.2 × 0.1 × 0.3 = 0.006;
a2 = the probability of the edge sequence 2333 = 0.3 × 0.9² = 0.243.
These probabilities are easy to find from the diagram. So:
p32^(3) = 0.006 + 0.243 = 0.249.
B. Probability of Visiting a State for the First Time:
Let us consider a random variable:
Ti = min {n ≥1 : Xn = i}.
It represents the number of steps to visit a state i for the
first time. It is called the first passage time of the state i. Related
probabilities are:
fij^(m) = P(Ti = m | X0 = j) and fij = P(Ti < ∞ | X0 = j).
Clearly:
fij = Σ (m = 1 to ∞) fij^(m).
These probabilities can be interpreted as follows:
fij^(m) = the probability to visit i on step m for the first time, starting from j;
fij = the probability to visit i in a finite number of steps, starting from j.
In terms of transition diagrams, fij equals the sum of the probabilities of all edge sequences from j to i that do not include the vertex i between the start and end vertices; fij^(m) equals a similar sum for the edge sequences of length m only.
For finite Markov chains these probabilities are easier to find
from their transition diagrams than with other methods.
Example 3
From the transition diagram in Fig. 1 we can calculate the following probabilities:
• f61^(2) = 0.6 × 0.8 = 0.48 as the probability of the path 156; f61^(n) = 0 for n ≠ 2, so f61 = 0.48.
• For vertices 2 and 3 we have:
f23^(1) = p23 = 0;
f23^(2) = 0.3 × 0.1 = 0.03 as the probability of the path 342;
f23^(3) = 0.9 × 0.03 = 0.027 as the probability of the path 3342;
and in general, for n ≥ 1, f23^(n+1) = 0.9^(n−1) × 0.03 as the probability of the edge sequence 3…342 with n−1 loops around 3.
So:
f23 = 0.03 + 0.9 × 0.03 + 0.9² × 0.03 + … = 0.03 / (1 − 0.9);
f23 = 0.3.
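The geometric series for f23 can be summed numerically as a quick check (our sketch; the term 0.9^(n−1) × 0.03 is the probability, read off the diagram, of the sequence with n − 1 loops around 3):

```python
# Partial sum of f23 = sum over n >= 1 of 0.9**(n-1) * 0.03,
# the first-passage probabilities from state 3 to state 2.
partial = sum(0.9 ** (n - 1) * 0.03 for n in range(1, 201))

closed_form = 0.03 / (1 - 0.9)  # geometric series: f23 = 0.3
print(partial, closed_form)
```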
C. Persistent and Transient States
A state i of a Markov chain is called persistent if fii = 1 and
transient otherwise.
Thus, if the chain starts at a persistent state, it will return to
this state almost surely. If the chain starts at a transient state, there is
a positive probability of never returning to this state. From the
transition diagram we can evaluate the probability fii and therefore
determine whether the state i is persistent or transient.
Example 4
For each of the states 3 and 6 of the Markov chain in
Example 1 determine whether the state is persistent or transient.
Solution
• f66 = f66^(3) = 0.5 × 0.6 × 0.8 = 0.24 as the probability of the cycle 615. So the state 6 is persistent.
• f33^(1) = 0.9 as the probability of the loop around 3;
f33^(2) = 0;
f33^(3) = 0.3 × 0.1 × 0.3 = 0.009 as the probability of the cycle 3423.
More generally, for any n ≥ 1, f33^(2n) = 0 and f33^(2n+1) = 0.009 × 0.02^(n−1) as the probability of the edge sequence 342…423 with 42 repeated n times.
So:
f33 = 0.9 + 0.009 + 0.009 × 0.02 + … = 0.9 + 0.009 / (1 − 0.02) ≈ 0.909.
Since f33 ≈ 0.909 < 1, the state 3 is transient.
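The first few first-return probabilities f33^(m) can also be checked mechanically (our sketch, not the paper's method) via the first-return decomposition p33^(n) = Σ (k = 1 to n) f33^(k) p33^(n−k), using matrix powers of S:

```python
# Matrix of Example 1: S[i-1][j-1] = pij (from state j to state i).
S = [
    [0,    0,   0,   0,   0,   0.5],
    [0.85, 0,   0,   0.1, 0,   0  ],
    [0,    0.3, 0.9, 0,   0,   0  ],
    [0,    0.2, 0.3, 0,   0,   0  ],
    [0.6,  0.1, 0,   0.7, 0,   0  ],
    [0,    0,   0,   0,   0.8, 0  ],
]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = 10
p33 = [1.0]                                                 # p33^(0) = 1
P = [[float(i == j) for j in range(6)] for i in range(6)]   # S^0 = I
for _ in range(N):
    P = matmul(P, S)
    p33.append(P[2][2])                # p33^(n) = entry (3, 3) of S^n

# First-return probabilities: f^(n) = p^(n) - sum_{k<n} f^(k) p^(n-k).
f33 = [0.0]
for n in range(1, N + 1):
    f33.append(p33[n] - sum(f33[k] * p33[n - k] for k in range(1, n)))

print(f33[1], f33[2], f33[3])          # 0.9, 0.0, 0.009, as in Example 4
```

The recursion simply removes, from the n-step return probability, the returns whose first visit to 3 happened earlier than step n; it reproduces the values read off the diagram.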
Lemma 1
Suppose i and j are two different states of a Markov chain.
If pji > 0 and fij = 0, then the state i is transient. This lemma is
easily derived from the definition of fij. The lemma can be rephrased
in terms of transition diagrams: if the chain can reach state j from
state i in one step (pji > 0) but cannot come back (fij = 0), then the
state i is transient.
Lemma 1 gives a method of finding transient states from a transition diagram without any calculations. For example, from Fig. 1 we can see that p52 = 0.1 > 0 and f25 = 0 because the chain cannot return from state 5 to state 2. Therefore by Lemma 1 the state 2 is transient. Since the states 2 and 3 inter-communicate, this is consistent with the result of Example 4.
D. Mean Recurrence Time:
The mean recurrence time of a persistent state i is defined as
μi = Σ (m = 1 to ∞) m fii^(m).
If i is a transient state, μi = ∞ by definition.
Thus μi is the expected time of returning to the state i if the chain starts at i.
Example 5
For each of the states 3 and 6 of the Markov chain in Example 1 find
its mean recurrence time.
Solution
• Since the state 3 is transient, μ3 = ∞.
• For the state 6, f66^(3) = 0.24 and f66^(n) = 0 for any n ≠ 3.
So μ6 = 3 × f66^(3) = 3 × 0.24 = 0.72.
E. Periodic States:
The period of a state i is the greatest common divisor of all n ≥ 1 with pii^(n) > 0.
The state i is periodic if its period is greater than 1; otherwise it is aperiodic.
In terms of transition diagrams, a state i has period d if every edge sequence from i to i has a length that is a multiple of d.
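The period can also be found mechanically from the matrix powers (our illustrative sketch): collect the first few lengths n with pii^(n) > 0 and take their greatest common divisor. For state 3 of Example 1 the loop already gives p33^(1) > 0, so the period is 1:

```python
from math import gcd

# Matrix of Example 1: S[i-1][j-1] = pij (from state j to state i).
S = [
    [0,    0,   0,   0,   0,   0.5],
    [0.85, 0,   0,   0.1, 0,   0  ],
    [0,    0.3, 0.9, 0,   0,   0  ],
    [0,    0.2, 0.3, 0,   0,   0  ],
    [0.6,  0.1, 0,   0.7, 0,   0  ],
    [0,    0,   0,   0,   0.8, 0  ],
]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(S, state, max_len=20):
    """Gcd of the lengths n <= max_len with p_ii^(n) > 0 (0 if none)."""
    i = state - 1
    d = 0
    P = [[float(r == c) for c in range(len(S))] for r in range(len(S))]
    for n in range(1, max_len + 1):
        P = matmul(P, S)
        if P[i][i] > 0:
            d = gcd(d, n)
    return d

print(period(S, 3))  # the loop at 3 gives length 1, so the period is 1
```

Truncating at `max_len` is an approximation: it gives the gcd of the return lengths seen so far, which here already settles the answer for state 3 because a length-1 loop forces the gcd to 1.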
Example 6
For each of the states 3 and 6 of the Markov chain in Example 1 find
its period and determine whether the state is periodic.
Solution
The transition diagram in Fig. 1 has a loop around state 3 of length 1 and a cycle 3423 of length 3. The greatest common divisor of 1 and 3 equals 1. Therefore the period of the state 3 equals 1 and the state is aperiodic.
Any edge sequence from 6 to 6 is the cycle 615 or its repetition, so its length is a multiple of 3. Hence the state 6 is periodic with period 3.
F. Communicating States
State i communicates with state j (notation i→j) if pji^(n) > 0 for some n ≥ 0.
In terms of transition diagrams, a state i communicates with state j if there is a path from i to j.
State i inter-communicates with state j (notation i↔j) if the two states communicate with each other.
Theorem 1 (Grimmett and Stirzaker, 2001)
Suppose i and j are two states of a Markov chain and i↔j. Then:
• i and j have the same period;
• i is persistent ⇔ j is persistent;
• i is transient ⇔ j is transient.
Inter-communication is an equivalence relation on the set Q of all states of a Markov chain. So the set Q can be partitioned into equivalence classes; all states in one equivalence class share the same properties, according to Theorem 1.
Example 7
Let us consider the Markov chain from Example 1 and its transition diagram in Fig. 1.
Clearly, the states 2 and 4 inter-communicate. Also 2→3, since p32 > 0, and 3→2, since there is a path 342 from 3 to 2.
Next, 2→6 (there is a path 256 from 2 to 6) but not 6→2 (there is no path from 6 back to 2). States 1, 5 and 6 all inter-communicate.
Therefore, the equivalence class of 3 is:
[3] = {2, 3, 4}
and the equivalence class of 6 is:
[6] = {1, 5, 6}.
According to Theorem 1 and Examples 4 and 6, the states 2, 3 and 4 are all transient and aperiodic; the states 1, 5 and 6 are all persistent and periodic with period 3.
IV. CONCLUSION
In this paper we have given some elementary definitions for Markov chains. These definitions are utilized in plotting transition diagrams. Finally, we have related the two components, Markov chains and transition matrices, through transition diagrams.
REFERENCES
[1] Chang-Xing, L., 2009. Probe into the teaching of probability theory and stochastic processes. Proceedings of the International Conference on Computational Intelligence and Software Engineering, Dec. 11-13, IEEE Xplore Press, Wuhan, pp: 1-4. DOI: 10.1109/CISE.2009.5366432.
[2] Cinlar, E., 2013. Introduction to Stochastic Processes. 1st Edn.,
Elsevier, ISBN-10: 0486497976, pp: 416.
[3] Kachapova, F., 2013. Representing Markov chains with transition diagrams. Journal of Mathematics and Statistics, 9(3): 149-154. ISSN: 1549-3644.
[4] Grimmett, G. and D. Stirzaker, 2001. Probability and Random
Processes. 3rd Edn., Oxford University Press, New York, ISBN-
10: 0198572220, pp: 596.
[5] Wang, A.L. and S.H. Kon, 2003. Should simple Markov processes be taught by mathematics teachers? International Statistical Institute, 54th Session, Berlin, pp: 1-4.
[6] Wang, A.L., 2001b. Introducing Markov chains models to
undergraduates. International Statistical Institute, 53rd Session,
Seoul, pp:1-4.