Alexander N. Dudin
Valentina I. Klimenok
Vladimir M. Vishnevsky

The Theory of Queuing Systems with Correlated Flows
Foreword
It is my pleasure and honor to write this Foreword. As is known, every product
or process we face in day-to-day activities involves either a manufacturing aspect
or a service aspect (and possibly both). Queuing theory plays a major role in
both these aspects. There are a number of excellent books written on queuing
theory over the last several decades, starting with the classical book by Takacs in
the 1960s, to Kleinrock's two volumes in the 1970s, and to Neuts' ground-breaking ones
on matrix-analytic methods in the 1980s. Along the way a number of notable
books (undergraduate and graduate level) were published by Alfa, Bhat, Cohen,
Gross and Harris, He, Latouche and Ramaswami, and Trivedi, et al., among others.
This book, written by well-known researchers on queueing and related areas, is
unique in the sense that the entire book is devoted to queues with correlated
arrivals. Since the introduction of Markovian arrival processes by Neuts, possible
correlation present in the inter-arrival times as well as correlation in the service
times has been getting attention. A number of published research articles have
pointed out how correlation cannot be ignored when analyzing queuing systems.
Further, the input to any system comes from different sources and unless one
assumes every single source to be Poisson, it is incorrect to assume that the inter-
arrival times of customers to the system are independent and identically distributed.
Thus, it is time that there is a book devoted to presenting a thorough analysis of
correlated input in queuing systems. After setting up the basic tools needed in
the study of queuing theory, the authors set out to cover a wide range of topics
dealing with batch Markovian arrival processes, quasi-birth-and-death processes,
GI/M/1-type and M/G/1-type queues, asymptotically quasi-Toeplitz Markov chains,
multidimensional asymptotically quasi-Toeplitz Markov chains, and tandem queues.
There is a chapter discussing the characteristics of hybrid communication systems
based on laser and radio technologies. This book will be an excellent addition to the
collection of books on queuing theory.
Srinivas R. Chakravarthy
Professor of Industrial Engineering & Statistics
Departments of Industrial and Manufacturing Engineering & Mathematics
Kettering University, Flint, MI, USA
Introduction
Mathematical models of networks and queuing systems (QS) are widely used
for study and optimization of various technical, physical, economic, industrial,
administrative, medical, military, and other systems. The objects of study in the
queuing theory are situations when there is a certain resource and customers needing
to obtain that resource. Since the resource is restricted, and the customers arrive
at random moments, some customers are rejected, or their processing is delayed.
The desire to reduce the rejection rate and delay duration was the driving force
behind the development of QS theory. The QS theory is especially important in
the telecommunications industry for solving the problems of optimal distribution of
telecommunication resources among numerous users who generate requests at
random times.
This development began in the works of a Danish mathematician and engineer
A.K. Erlang published in 1909–1922. These works resulted from his efforts to solve
design problems for telephone networks. In these works, the flow of customers
arriving at a network node was modeled as a Poisson flow. This flow is defined
as a stationary ordinary flow without aftereffect, or as a recurrent flow with
exponential distribution of interval durations between arrival moments. In addition,
Erlang presumed that the servicing time for a claim also has an exponential
distribution. Under these assumptions, since the exponential distribution has no
memory, the number of customers in the systems considered by Erlang is defined
by a one-dimensional Markov process, which makes it easy to find its stationary
distribution. The assumption of an exponential form of the distribution of interval
durations between arrival moments appears rather strong and artificial. Nevertheless,
the theoretical results of Erlang agreed very well with the results of practical
measurements in real telephone networks. Later this phenomenon was explained
in the works of A.Ya. Khinchine, G.G. Ososkov, and B.I. Grigelionis who proved
that under the assumption of uniform smallness of flow intensities a superposition
of a large number of arbitrary recurrent flows converges to a stationary Poisson flow.
This result is a counterpart of the central limit theorem in probability theory, which
states that under uniform smallness of random variables their normalized sum in the
limit (as the number of terms tends to infinity) converges in distribution to a random
value with standard normal distribution. Flows of customers arriving to a telephone
station node represent a superposition of a large number of flows with small intensity
arriving from individual network users. Therefore, the stationary Poisson flow
model describes well enough the flows in real world telephone networks.
The introduction of computer data transmission networks, whose packet-
switching approach was more efficient than the channel switching used in telephone
networks, motivated the need for a new mathematical formalism for their optimal
design. Like Erlang’s work, initial studies in the field of computer networks (L.
Kleinrock [67], M. Schwartz [101], G.P. Basharin et al. [5], V.M. Vishnevsky et
al. [107, 111], and others) were based on simplified assumptions regarding the
character of information flows (Poisson flows) and exponential distributions of the
packet transmission time. This let them use the queuing theory formalism, which was
well developed by the time computer networks appeared, to evaluate performance,
design topological structures, control routing and the network as a whole, choose
optimal parameters for network protocols, and so on.
Further development of theoretical studies was related to the introduction of
integrated service digital networks (ISDN), which represent an improvement on
earlier packet switching networks. A characteristic feature of these networks is that
the same hardware and software is used for joint transmission of various multimedia
information: speech, interactive information, large volumes of data, faxes, video,
and so on. Since the flows are highly non-uniform, the traffic in these networks is
significantly non-stationary (the so-called “bursty traffic”) and correlated. Numbers
of customers arriving over non-intersecting time intervals can be dependent, and
the dependence can be preserved even for intervals spread far apart. It is especially
important to take these factors into account in the modeling of modern wideband
4G networks [81, 112] and the new generation 5G networks that are under active
development now and expected to be introduced by 2020 (see, e.g., [83, 95, 115,
118]).
The complex character of flows in modern telecommunication networks can be
captured with so-called self-similar flows (see, e.g., [82, 106]). The main drawback
of models with self-similar flows from the point of view of their use for analytic
modeling of information transmission processes in telecommunication networks is
the following. The model of an input flow is just one of a QS’s structural elements.
Therefore, keeping in mind the prospect of an analytic study of the entire QS,
we need to aim to have as simple a flow model as possible. A self-similar flow
is a complex object by itself, and even the most successful methods of defining
it, e.g., via a superposition of a large number of on/off sources with heavy-tailed
distributions of on and off periods, presume the use of asymptotics. Therefore, an
analytic study not only of the flow itself but also of a QS to which it arrives appears
to be virtually impossible.
The simplest flow model is a stationary Poisson flow. The simplicity of a
stationary Poisson flow is in that it is defined with a single parameter, which has
the meaning of flow intensity (expectation of the number of customers arriving
per unit of time) or the parameter of the exponential distribution of intervals
between arrival moments of the customers. However, the presence of only one
parameter also causes obvious drawbacks of the stationary Poisson flow model. In
particular, the variation coefficient of the distribution of interval durations between
arrival moments equals 1, and the correlation coefficient between the lengths of
adjacent intervals equals zero. Therefore, if in the modeling of some real world
object the results of statistical data processing about the input flow show that the
value of variation coefficient is different from 1, and the correlation coefficient
is significantly different from 0, it is clear that a stationary Poisson flow cannot
represent an acceptable model for a real flow. Thus, it was obviously important to
introduce models of the input flow that would let one describe flows with variation
coefficient significantly different from 1 and correlation coefficient significantly
different from 0. At the same time, this model should allow the result of QS studies
with correlated flows to be relatively easily obtained as transparent counterparts of
the results of studying the corresponding QS with stationary Poisson flows. Studies
on the development of such models were conducted independently in the research
group of G.P. Basharin in the USSR (the corresponding flows were called MC-
flows, or flows controlled by a Markov chain) and the research group of M. Neuts in
the USA. The name of these flows in the USA changed with time from versatile
flows [92] to N-flows (Neuts flows) [98] and later Markovian arrival processes
(MAPs) and their generalization—batch Markovian arrival processes (BMAPs).
The shift from stationary Poisson flows to BMAP-flows was gradual, first via the
so-called Interrupted Poisson Process (IPP) flows, where intervals between arrivals
of a stationary Poisson flow alternate with intervals when there are no claims, and
the distribution of these intervals is exponential, and then Switched Poisson Process
(SPP) flows, where intervals between arrivals from a stationary Poisson flow of
one intensity alternate with intervals between arrivals of a stationary Poisson flow
with a different intensity. Finally, assuming that there are not two but rather a finite
number of intensities, researchers arrived at the MMPP (Markov Modulated Poisson
Process) flows, and then generalized it to the BMAP-flow model.
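To make the contrast with a stationary Poisson flow tangible, here is a minimal simulation sketch (added for illustration and not part of the original text; it assumes Python with NumPy, a two-state MMPP, and arbitrarily chosen rates). It estimates the variation coefficient and the lag-1 correlation coefficient of the inter-arrival times: for a bursty MMPP the variation coefficient exceeds 1 and the correlation is positive, whereas for a stationary Poisson flow they are 1 and 0, respectively.

import numpy as np

rng = np.random.default_rng(1)

def mmpp_interarrival_times(Q, lam, n_arrivals):
    # Simulate an MMPP: arrivals occur at rate lam[state] while the modulating
    # Markov chain with generator Q stays in the current state.
    m, state, t, arrivals = len(lam), 0, 0.0, []
    while len(arrivals) < n_arrivals:
        exit_rate = -Q[state, state]
        t_jump = rng.exponential(1.0 / exit_rate) if exit_rate > 0 else np.inf
        t_arr = rng.exponential(1.0 / lam[state]) if lam[state] > 0 else np.inf
        if t_arr < t_jump:                       # next event is an arrival
            t += t_arr
            arrivals.append(t)
        else:                                    # next event is a phase change
            t += t_jump
            p = Q[state].copy(); p[state] = 0.0; p /= exit_rate
            state = rng.choice(m, p=p)
    return np.diff(np.array(arrivals))

Q = np.array([[-0.1, 0.1],                       # slow switching between the two phases
              [0.5, -0.5]])
lam = np.array([10.0, 0.5])                      # "bursty": high rate in phase 0, low in phase 1
tau = mmpp_interarrival_times(Q, lam, 200_000)
print("MMPP   : c_var = %.2f, lag-1 corr = %.2f"
      % (tau.std() / tau.mean(), np.corrcoef(tau[:-1], tau[1:])[0, 1]))

tau_poisson = rng.exponential(1.0, 200_000)      # stationary Poisson flow for comparison
print("Poisson: c_var = %.2f, lag-1 corr = %.2f"
      % (tau_poisson.std() / tau_poisson.mean(),
         np.corrcoef(tau_poisson[:-1], tau_poisson[1:])[0, 1]))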
It is important to account for correlations in the input flow because both
measurement results in real networks and computation results based on QS models
considered in this book show that the presence of a positive correlation significantly
deteriorates the characteristics of a QS (increases average waiting time, rejection
probability, and so on). For example, for the same average rate of customer arrival,
same distribution of servicing time and same buffer capacity the probability of
losing a claim may differ by several orders of magnitude [31]. Thus, if the results of
studying a real flow of customers indicate that it is a correlated flow then it is a bad
idea to apply the well-studied models with recurrent flows and, of course, the stationary
Poisson flow. For this reason, the theory of QS with correlated arrival processes
developed in the works of M. Neuts, D. Lucantoni, V. Ramaswami, S. Chakravarthy,
A.N. Dudin, V.I. Klimenok and others has had numerous applications in the studies
of telecommunication networks, in particular wideband wireless networks operating
under the control of IEEE 802.11 and 802.16 protocols (see, e.g., [85, 86, 94]),
hybrid high-speed communication systems based on laser and radio technologies
(see, e.g., [17, 87, 100]), cellular networks of the European standard UMTS and its
latest version LTE (Long Term Evolution) (see, e.g., [18–20, 27, 66, 108]), wireless
networks with linear topology (see, e.g., [13–15, 21, 36, 37]).
The works of Lucantoni [71, 79] describe in detail the history of the problem
and main results of studying single-server QS with BMAP-flows obtained before
1991. Further progress in this area up to the end of the twentieth century is shown
in a detailed survey by Chakravarthy [78]. In the present monograph, whose purpose is
to describe and systematize new results of studying complex QS with incoming
BMAP-flows and their applications to telecommunication networks, we mostly
mention the papers published after the survey [78] appeared in 2001.
Active research in the field of queueing systems with correlated input, conducted
for over a quarter of a century, is described in numerous papers by A.S. Alfa, S.
Asmussen, A.D. Banik, D. Baum, L. Breuer, H. Bruneel, S.R. Chakravarthy, M.L.
Chaudhry, B.D. Choi, A.N. Dudin, D.V. Efrosinin, U.S. Gupta, Q.M. He, C.S. Kim,
A. Krishnamoorthy, V.I. Klimenok, G. Latouche, H.W. Lee, Q-L. Li, L. Lakatos, F.
Machihara, M. Neuts, S. Nishimura, V. Ramaswami, Y. Takahashi, T. Takine, M.
Telek, Y.Q. Zhao, V.M. Vishnevsky. A brief description of some issues on queuing
theory with correlated input is given in separate sections of the monographs on
queueing theory and its applications [6, 9, 12, 43, 45, 110].
However, there is a lack of a systematic description of the theory of QS
with correlated arrivals in the world literature. The sufficient completeness of
the mathematical results obtained in recent years, and the practical needs of
developers of modern telecommunications networks have made it expedient to write
the present monograph to close this gap.
The content of the book is based on original results obtained by the authors; links
to their recent publications are listed in the “Bibliography” section. Here we also
use the materials of lecture courses delivered to students and post-graduates of the
Belarusian State University and the Moscow Institute of Physics and Technology.
The book includes an introduction, six chapters and an appendix.
Chapter 1 sets out the fundamentals of the queuing theory. The basic concepts of
this theory are given. The following notions which play an important role in study
of queuing systems are described: Markov and semi-Markovian (SM) stochastic
processes, Laplace-Stieltjes transforms, and generating functions. We describe the
classical methods for evaluation of characteristics of one-server, multi-server and
multi-stage QS with limited or unlimited buffers, different service disciplines and
different service time distributions.
Chapter 2 (Sect. 2.1) describes the BMAP (Batch Markovian Arrival Process),
discusses its properties in detail, and relates it to some of the simpler types
of input flows known in the literature. We briefly define the so-called Marked
Markovian flow, which extends the BMAP concept to the case of heterogeneous
customers, and the SM input flow, in which the time intervals between customer
arrivals can be dependent and distributed according to an arbitrary law.
Further we describe the phase-type (PH) distributions.
Section 2.2 describes multidimensional (or vector) Quasi-Birth-and-Death Pro-
cesses which are the simplest and most well-studied class of multidimensional
Markov chains. An ergodicity criterion and a formula to calculate vectors of
stationary probabilities of the system states are obtained in the so-called matrix-
geometric form. As an example of application of vector Birth-and-Death processes,
a queue of the type MAP/PH/1 is analyzed, i.e., a single-server queue with an
infinite buffer, a Markovian arrival process (which is a special case of BMAP),
and a phase-type distribution of the service time. Hereafter it is assumed that the
reader is familiar with D. Kendall's notation for describing queuing models. For the
sake of completeness, we also derive (in terms of the
Laplace-Stieltjes transform) the distribution of the waiting time in the system for a
query. A spectral approach to the analysis of vector Birth-and-Death Processes is
described.
In Sect. 2.3 we give brief results for multidimensional Markov chains of type
G/M/1. As an example of their application, we analyze the Markov chain embedded
at the arrival moments for the queueing system of type G/PH/1. To make the
description more complete, we also obtain the system state probability distribution
at an arbitrary moment of time and the distribution of the waiting time of an
arbitrary query in the system.
In Sect. 2.4 we give results for multidimensional Markov chains of type M/G/1
(or quasi-Toeplitz upper-Hessenberg Markov chains). A necessary and sufficient
condition for ergodicity of the chain is proved. Two methods are described for
obtaining the stationary probability distribution of the chain states: a method that
utilizes vector generating functions and considerations of their analyticity on a
unit circle of the complex plane, and the method of M. Neuts and its modification,
obtained on the basis of the censored Markov chains. The pros and cons of both
methods are discussed.
Section 2.5 briefly describes the procedure for obtaining the stationary probabil-
ity distribution of states of multidimensional Markov chains of type M/G/1 with a
finite state space.
In Sect. 2.6 we describe the obtained results for discrete-time asymptotically
quasi-Toeplitz Markov chains (AQTMC), and in Sect. 2.7—the results for
continuous-time asymptotically quasi-Toeplitz Markov chains. We prove the
sufficient conditions for ergodicity and nonergodicity of an AQTMC. An algorithm
for calculating the stationary probabilities of an AQTMC is derived on the basis of
the theory of censored Markov chains.
As an example of applying the results presented for multidimensional Markov
chains of type M/G/1, in Chap. 3 we study BMAP/G/1 and BMAP/SM/1
queues. For the BMAP/G/1 system in Sect. 3.1 an example is given of calculating
the stationary state probability distribution of the embedded Markov chain with
service completion moments as embedding points. The derivation of equations for
the stationary state probability distribution of the system at an arbitrary instant of
time is briefly described on the basis of the results in the theory of the Markovian
renewal processes. The distribution of the virtual and real waiting time in the
system and queueing time of a request is obtained in terms of the Laplace-Stieltjes
transform. For a more general system BMAP/SM/1 with semi-Markovian service,
in Sect. 3.2 we have derived in a simple form the ergodicity condition for the
embedded Markov chain with service completion moments as embedding points,
and discussed the problem of finding block matrices of the transition probabilities
for this chain. A connection is established between the stationary distribution of
the system state probabilities at an arbitrary instant of time with the stationary
distribution of the state probabilities of the embedded Markov chain with moments
of termination of service as embedding points. Section 3.3 is devoted to the analysis
of the BMAP/SM/1/N system with different disciplines for accepting a batch of
requests in a situation where the batch size exceeds the number of vacant spaces in
the buffer at the moment of the batch’s arrival.
In Sect. 3.4, a multi-server system of type BMAP/PH/N is investigated, and in
Sect. 3.5 we consider a multi-server loss system of type BMAP/PH/N/0, which
is a generalization of Erlang’s model M/M/N/0, with the study of which queuing
theory has started its history. Different disciplines for accepting a batch of requests
for service are examined in a situation when the batch size exceeds the number of
vacant servers at the moment of the batch’s arrival. It is numerically illustrated that
the known property of the invariance of the steady state probability distribution with
respect to the service time distribution (under a fixed mean service time) inherent
in the system M/G/N/0 is not satisfied when the arrival flow is not a stationary
Poisson one, but a more general BMAP flow.
In Chap. 4 (Sect. 4.1), using the results obtained for asymptotically quasi-Toeplitz
Markov chains, we consider a BMAP/SM/1 system with retrial requests. The
classical retrial strategy (and its generalization—a linear strategy) and a strategy
with a constant retrial rate are considered. Conditions for the existence of a
stationary probability distribution of system states at moments of termination of
service are given. A differential-functional equation is obtained for the vector
generating function of this distribution. A connection is established between the
stationary distribution of the system state probabilities at an arbitrary instant
of time and the stationary distribution of the state probabilities of the Markov
chain embedded at the moments of service termination. In Sect. 4.2, the
BMAP/PH/N system with retrial calls is considered. We consider the retrial
strategy in which the retrial rate from the orbit grows without bound as the number
of requests in the orbit increases, and a strategy with a constant retrial rate.
The behavior of the system is described by a multidimensional Markov chain,
the components of which are the number of requests in the orbit, the number of
busy servers, the state of the BMAP flow control process, and the state of the PH
service process in each of the busy servers. The conditions for the existence of
a stationary probability distribution of the system states are given. Formulas for
computing the most important characteristics of system performance are obtained.
The case of non-persistent (impatient) requests is also considered, in which, after an
unsuccessful attempt to get service, a request from the orbit permanently
leaves the system. We provide the results of numerical calculations
illustrating the operability of the proposed algorithms and the dependence of the
basic system performance characteristics on its parameters. The possibility of
another description of the dynamics of the system functioning is illustrated in terms
of the multidimensional Markov chain, the blocks of the infinitesimal generator of
which, in the case of a large number of servers, have a much smaller dimension than
that of the original chain.
Chapter 5 provides an analysis of the characteristics of hybrid communication
systems based on laser and radio technologies in terms of two-server or three-
server queueing systems with unreliable servers. We investigate the basic architectures
of wireless systems of this class, which provide high-speed and reliable
access to information resources in existing and prospective next-generation
networks. Sections 5.1–5.3 contain descriptions of models and methods for analyzing
the characteristics of two-server queuing systems with unreliable servers and BMAP
arrival flows, adequately describing the functioning of hybrid communication
systems where the laser channel is backed up by a broadband radio channel (cold
and hot standby). In Sect. 5.4, methods and algorithms for assessing performance
characteristics of the generalized architecture of a hybrid system are given: an
atmospheric laser channel and a millimeter-wave (71–76 GHz, 81–86 GHz) radio
channel operating in parallel are backed up by a broadband IEEE 802.11 centimeter-
wave radio channel. For all the queuing models considered in this chapter, we give a
description of the multidimensional Markov chain, the conditions for the existence
of a stationary regime, and the algorithms for calculation of stationary distributions,
and basic performance characteristics.
Chapter 6 discusses multiphase queuing systems with correlated arrivals and their
application to assessing the performance of wireless networks with a linear topology.
In Sect. 6.1 a brief overview of studies on multi-stage QSs with correlated arrivals
is given. Section 6.2 is devoted to the study of a tandem queue whose first station is
a BMAP/G/1 system and whose second station is a multi-server system without a
buffer and with exponentially distributed service times. Section 6.3 generalizes these
results to the case when the service times at the first station are dependent random
variables described by an SM service process.
In Sects. 6.4 and 6.5, we consider the methods of analysis of the stationary
characteristics of the tandem queues BMAP/G/1 → ·/M/N/R and BMAP/G/1 →
·/M/N/0 with retrials and group occupation of servers at the second station.
In Sect. 6.6 a tandem queue with MAP arrivals is considered which consists
of an arbitrary finite number of stations, defined as multi-server QSs without
buffers. The service times at the tandem’s stations are PH-distributed, which
allows us to adequately describe real-life service processes. The theorem is proved
stating that the output flow of customers at each station of the tandem queue
belongs to the MAP-type. A method is described for exact calculation of marginal
stationary probabilities for the tandem’s parts, as well as for the tandem itself, and
the corresponding loss probabilities. With the use of the Ramaswami-Lucantoni
approach, an algorithm is proposed for computation of the main characteristics of
a multi-stage QS for large numbers of servers at the stages and a
large state space of the PH service control process.
Section 6.7 considers a more general QS than that described in the previous
section, in which correlated cross-traffic enters each station in addition to
the traffic arriving from the previous station of the tandem queue.
Section 6.8 is devoted to the study of a tandem queue with an arbitrary finite
number of stations, represented by single-server QS with limited buffers, which
adequately describes the behavior of a broadband wireless network with linear
topology.
In the appendix we give some information from the theory of matrices and
the theory of functions of matrices, which are used extensively in presenting the
mathematical material of the book.
The authors are grateful to the colleagues from the Belarusian State University
and the V.A. Trapeznikov Institute of Control Sciences of the Russian Academy of
Sciences, who have helped in the preparation of the book. Special thanks should be
given to Prof. S. Chakravarthy, who wrote the Foreword to this book,
as well as to the Russian Foundation for Basic Research and the Belarusian
Republican Foundation for Fundamental Research for financial support in the
framework of grants № 18-57-00002 and № F18R-136.
Contents
1 Mathematical Methods to Study Classical Queuing Systems ........... 1
1.1 Introduction ............................................................. 1
1.2 Input Flow, Service Time ............................................... 2
1.3 Markov Random Processes ............................................. 7
1.3.1 Birth-and-Death Processes ..................................... 8
1.3.2 Method of Transition Intensity Diagrams...................... 11
1.3.3 Discrete-Time Markov Chains ................................. 12
1.4 Probability Generating Function, Laplace and Laplace-Stieltjes
Transforms .............................................................. 13
1.5 Single-Server Markovian Queuing Systems ........................... 15
1.5.1 M/M/1 Queue.................................................. 15
1.5.2 M/M/1/n Queue ............................................... 18
1.5.3 System with a Finite Number of Sources ...................... 19
1.6 Semi-Markovian Queuing Systems and Their Investigation
Methods ................................................................. 20
1.6.1 Embedded Markov Chain Method to Study an M/G/1 ...... 20
1.6.2 The Method of Embedded Markov Chains
for a G/M/1 Queue ............................................ 28
1.6.3 Method of Supplementary Variables ........................... 31
1.6.4 Method of Supplementary Events.............................. 35
1.7 Multi-Server Queues .................................................... 40
1.7.1 M/M/n and M/M/n/n + m Queues ......................... 41
1.7.2 M/M/n/n and M/G/n/n Queues ............................ 43
1.7.3 M/M/∞ Queue ................................................ 45
1.7.4 M/G/1 with Processor Sharing and Preemptive LIFO
Disciplines ...................................................... 49
1.8 Priority Queues.......................................................... 50
1.9 Multiphase Queues...................................................... 56
2 Methods to Study Queuing Systems with Correlated Arrivals ......... 63
2.1 Batch Markovian Arrival Process (BMAP): Phase-Type
Distribution .............................................................. 63
2.1.1 Definition of the Batch Markovian Arrival Process ........... 63
2.1.2 The Flow Matrix Counting Function........................... 64
2.1.3 Some Properties and Integral Characteristics of BMAP ..... 67
2.1.4 Special Cases of BMAP ....................................... 73
2.1.5 Phase-Type Distribution ........................................ 74
2.1.6 Calculating the Probabilities of a Fixed Number
of BMAP Arrivals During a Random Time .................. 79
2.1.7 Superposition and Sifting of BMAPs ......................... 81
2.1.8 Batch Marked Markovian Arrival Process (BMMAP)......... 82
2.1.9 Semi-Markovian Arrival Process (SM)........................ 84
2.2 Multidimensional Birth-and-Death Processes ......................... 85
2.2.1 Definition of the Quasi-Birth-and-Death Process
and Its Stationary State Distribution ........................... 86
2.2.2 Application of the Results Obtained for QBD
Processes to the MAP/PH/1 Queue Analysis ............... 89
2.2.3 Spectral Approach for the Analysis
of Quasi-Birth-and-Death Processes ........................... 93
2.3 G/M/1-Type Markov Chains .......................................... 94
2.3.1 Definition of a G/M/1-Type MC and its Stationary
State Distribution ............................................... 94
2.3.2 Application of Results to Analysis of a G/PH/1-Type
Queue............................................................ 98
2.4 M/G/1-Type Markov Chains .......................................... 105
2.4.1 Definition of an M/G/1-Type Markov Chain
and Its Ergodicity Criteria ...................................... 105
2.4.2 The Method of Generating Functions to Find
the Stationary State Distribution of an M/G/1-Type MC .... 109
2.4.3 Calculation of Factorial Moments.............................. 114
2.4.4 A Matrix-Analytic Method to Find the Stationary
State Probability Distribution of an M/G/1-Type MC
(the Method of M. Neuts) ...................................... 117
2.5 M/G/1-Type Markov Chains with Finite State Space................ 125
2.6 Asymptotically Quasi-Toeplitz Discrete-Time Markov Chains....... 126
2.6.1 The Ergodicity and Nonergodicity Conditions
for an Asymptotically Quasi-Toeplitz MC..................... 127
2.6.2 Algorithm to Calculate the Stationary State
Probabilities of an Asymptotically Quasi-Toeplitz
Markov Chain ................................................... 133
2.7 Asymptotically Quasi-Toeplitz Continuous-Time Markov
Chains ................................................................... 139
2.7.1 Definition of a Continuous-Time AQTMC .................... 139
2.7.2 Ergodicity Conditions for a Continuous-Time
Asymptotically Quasi-Toeplitz MC............................ 141
2.7.3 Algorithm to Calculate the Stationary State Distribution ..... 145
3 Queuing Systems with Waiting Space and Correlated Arrivals
and Their Application to Evaluation of Network Structure
Performance .................................................................. 147
3.1 BMAP/G/1 Queue .................................................... 147
3.1.1 Stationary Distribution of an Embedded Markov Chain ...... 148
3.1.2 The Stationary Distribution of the System
at an Arbitrary Time ............................................ 156
3.1.3 The Virtual and Real Waiting Time Distributions ............. 161
3.2 BMAP/SM/1 Queue .................................................. 168
3.2.1 The Stationary Probability Distribution
of an Embedded Markov Chain ................................ 169
3.2.2 The Stationary Probability Distribution of the System
States at an Arbitrary Time ..................................... 172
3.3 BMAP/SM/1/N Queue .............................................. 175
3.3.1 Analysis of the System with the Partial Admission
Discipline........................................................ 176
3.3.2 Analysis of the System with the Complete Admission
and Complete Rejection Disciplines ........................... 183
3.4 BMAP/PH/N Queue................................................. 189
3.5 BMAP/PH/N/N Queue ............................................. 194
3.5.1 The Stationary Distribution of the System States
for the PA Discipline........................................... 195
3.5.2 The Stationary Distribution of the System States
for the CR Discipline........................................... 199
3.5.3 The Stationary Distribution of the System States
for the CA Discipline ........................................... 200
4 Retrial Queuing Systems with Correlated Input Flows
and Their Application for Network Structures Performance
Evaluation .................................................................... 203
4.1 BMAP/SM/1 Retrial System ......................................... 203
4.1.1 Stationary Distribution of Embedded Markov Chain ......... 204
4.1.2 Stationary Distribution of the System at an Arbitrary
Time ............................................................. 209
4.1.3 Performance Measures ......................................... 211
4.2 BMAP/PH/N Retrial System........................................ 212
4.2.1 System Description ............................................. 213
4.2.2 Markov Chain Describing the Operation of the System ...... 213
4.2.3 Ergodicity Condition............................................ 217
4.2.4 Performance Measures ......................................... 221
4.2.5 Case of Impatient Customers................................... 223
4.2.6 Numerical Results .............................................. 224
4.3 BMAP/PH/N Retrial System in the Case of a Phase
Distribution of Service Time and a Large Number of Servers ........ 233
4.3.1 Choice of Markov Chain for Analysis of System ............. 233
4.3.2 Infinitesimal Generator and Ergodicity Condition ............ 235
4.3.3 Numerical Examples............................................ 236
4.3.4 Algorithm for Calculation of the Matrices Pn(β),
An(N, S), and LN−n(N, S̃) .................................... 238
5 Mathematical Models and Methods of Investigation of Hybrid
Communication Networks Based on Laser and Radio Technologies... 241
5.1 Analysis of Characteristics of the Data Transmission Process
Under Hot-Standby Architecture of a High-Speed FSO
Channel Backed Up by a Wireless Broadband Radio Channel ....... 243
5.1.1 Model Description .............................................. 243
5.1.2 Process of the System States ................................... 244
5.1.3 Condition for Stable Operation of the System:
Stationary Distribution ......................................... 246
5.1.4 Vector Generating Function of the Stationary
Distribution: Performance Measures........................... 249
5.1.5 Sojourn Time Distribution...................................... 251
5.2 Analysis of Characteristics of the Data Transmission Process
Under Cold-Standby Architecture of a High-Speed FSO
Channel Backed Up by a Wireless Broadband Radio Channel ....... 253
5.2.1 Model Description .............................................. 254
5.2.2 Process of the System States ................................... 255
5.2.3 Stationary Distribution: Performance Measures............... 257
5.2.4 Numerical Experiments......................................... 261
5.2.5 The Case of an Exponential System ........................... 267
5.3 Analysis of Characteristics of the Data Transmission Process
Under Hot-Standby Architecture of a High-Speed FSO
Channel Backed Up by a Millimeter Radio Channel.................. 273
5.3.1 Model Description .............................................. 273
5.3.2 Process of the System States ................................... 274
5.3.3 Stationary Distribution: Performance Measures............... 276
5.4 Analysis of Characteristics of the Data Transmission Process
Under Cold-Standby Architecture of a High-Speed FSO
Channel and Millimeter Channel Backed Up by a Wireless
Broadband Radio Channel.............................................. 281
5.4.1 Model Description .............................................. 281
5.4.2 Process of the System States ................................... 282
5.4.3 Stationary Distribution.......................................... 287
5.4.4 Vector Generating Function of the Stationary
Distribution: System Performance Measures .................. 293
5.5 Numerical Experiments................................................. 296
6 Tandem Queues with Correlated Arrivals and Their Application
to System Structure Performance Evaluation ............................ 307
6.1 Brief Review of Tandem Queues with Correlated Arrivals ........... 307
6.2 BMAP/G/1 → ·/M/N/0 Queue with a Group Occupation
of Servers of the Second Station ....................................... 310
6.2.1 Mathematical Model ............................................ 310
6.2.2 Stationary State Distribution of the Embedded Markov
Chain ............................................................ 311
6.2.3 Stationary State Distribution at an Arbitrary Time ............ 316
6.2.4 Performance Characteristics of the System .................... 322
6.2.5 Stationary Sojourn Time Distribution.......................... 323
6.2.6 Sojourn Time Moments......................................... 331
6.2.7 Numerical Examples............................................ 335
6.3 BMAP/SM/1 → ·/M/N/0 Queue with a Group Occupation
of Servers at the Second Station ....................................... 340
6.3.1 Mathematical Model ............................................ 340
6.3.2 The Stationary State Distribution of the Embedded
Markov Chain ................................................... 341
6.3.3 Stationary State Distribution at an Arbitrary Time ............ 343
6.3.4 Performance Characteristics of the System .................... 345
6.3.5 Numerical Examples............................................ 345
6.4 BMAP/G/1 → ·/M/N/R Queue.................................... 347
6.4.1 Mathematical Model ............................................ 347
6.4.2 Stationary State Distribution of the Embedded Markov
Chain ............................................................ 347
6.4.3 Stationary State Distribution at an Arbitrary Time ............ 349
6.4.4 Performance Characteristics of the System .................... 350
6.4.5 Numerical Examples............................................ 351
6.5 BMAP/G/1 → ·/M/N/0 Retrial Queue with a Group
Occupation of Servers at the Second Station .......................... 353
6.5.1 Mathematical Model ............................................ 353
6.5.2 Stationary State Distribution of the Embedded Markov
Chain ............................................................ 354
6.5.3 Stationary State Distribution at an Arbitrary Time ............ 357
6.5.4 Performance Characteristics of the System .................... 359
6.5.5 Numerical Examples............................................ 360
6.6 Tandem of Multi-Server Queues Without Buffers..................... 364
6.6.1 The System Description ........................................ 364
6.6.2 Output Flows from the Stations ................................ 365
6.6.3 Stationary State Distribution of a MAP/PH/N/N
Queue............................................................ 367
6.6.4 Stationary State Distribution of the Tandem
and Its Fragments ............................................... 369
6.6.5 Customer Loss Probability ..................................... 370
6.6.6 Investigation of the System Based on the Construction
of a Markov Chain Using the Ramaswami-Lucantoni
Approach ........................................................ 371
6.7 Tandem of Multi-Server Queues with Cross-Traffic
and No Buffers .......................................................... 374
6.7.1 The System Description ........................................ 374
6.7.2 Output Flows from the Stations: Stationary State
Distribution of the Tandem and Its Fragments ................ 375
6.7.3 Loss Probabilities ............................................... 377
6.7.4 Investigation of the System Based on the Construction
of the Markov Chain Using the Ramaswami-Lucantoni
Approach ........................................................ 380
6.8 Tandem of Single-Server Queues with Finite Buffers
and Cross-Traffic ........................................................ 382
6.8.1 The System Description ........................................ 382
6.8.2 Output Flows from the Stations ................................ 383
6.8.3 Stationary State Distribution and the Customer Sojourn
Time Distribution for a MAP/PH/1/N Queue.............. 385
6.8.4 The Stationary Distribution of the Tandem and Its
Fragments ....................................................... 387
6.8.5 Loss Probabilities ............................................... 388
6.8.6 Stationary Distribution of Customer Sojourn Time
in the Tandem Stations and in the Whole Tandem ............ 390
A Some Information from the Theory of Matrices and Functions
of Matrices .................................................................... 393
A.1 Stochastic and Sub-stochastic Matrices: Generators
and Subgenerators....................................................... 393
A.2 Functions of Matrices ................................................... 398
A.3 Norms of Matrices ...................................................... 401
A.4 The Kronecker Product and Sum of the Matrices ..................... 402
References......................................................................... 405
Notation
AQTMC Asymptotically quasi-Toeplitz Markov chain
QTMC Quasi-Toeplitz Markov chain
MC Markov chain
LST Laplace-Stieltjes Transform
PGF Probability generating function
MAP Markovian Arrival Process
MMAP Marked Markovian Arrival Process
MMPP Markov Modulated Poisson Process
BMAP Batch Markovian Arrival Process
PH Phase Type
QBD Quasi-Birth-and-Death Process
SM Semi-Markovian Process
An Square matrix A of size n × n
detA Determinant of matrix A
(A)i,k = aik The entry in row i and column k of matrix A
Aik The (i, k) cofactor of matrix A = (aik)
AdjA = (Aki) Adjugate matrix of the matrix A
diag{a1, . . . , aM} Order M diagonal matrix with diagonal entries {a1, . . . , aM}
diag+{a1, . . . , aM} Square matrix of order M + 1 with superdiagonal entries {a1, . . . , aM}, the rest of the entries being zero
diag−{a1, . . . , aM} Square matrix of order M + 1 with subdiagonal entries {a1, . . . , aM}, the rest of the entries being zero
δi,j Kronecker delta, which is 1 if i = j, and 0 otherwise
e (0) Column vector (row vector) consisting of 1's (0's). If necessary, the vector size is determined by the subscript
ê Row vector (1, 0, . . . , 0)
E Symbol of expected value of a random variable
I Identity matrix. If necessary, the matrix size is determined by
the subscript
O Zero matrix. If necessary, the matrix size is determined by the
subscript
Ĩ Matrix diag{0, 1, . . . , 1}
T Matrix transposition symbol
⊗ Kronecker product of matrices
⊕ Kronecker sum of matrices
W̄ = W + 1
f (j)(x) The order j derivative of function f (x), j ≥ 0
f ′(x) Derivative of function f (x)
□ End of proof
Bold lower-case letters denote row vectors, unless otherwise defined (e.g.,
column vector e).
1 Mathematical Methods to Study Classical Queuing Systems
in the review [105] by the well-known specialist R. Syski, who notes the danger
of a possible split of unified queuing theory into theoretical and engineering parts.
The direct consequence of this problem when writing an article or book is usually
the question of choosing the language and the corresponding level of rigor in the
presentation of the results. This book is aimed both at specialists in the field of
queuing theory and specialists in the field of its application to the study of real
objects (first of all, telecommunications networks). Therefore, in this chapter, we
give a brief overview of methods for analyzing queuing systems at an average
(compromise) level of rigor. It is assumed that the reader is familiar with probability
theory within the framework of a technical college course. Where necessary, some
information is provided directly in the text.
In queuing theory applications, an important stage in studying a real object is a
formal description of the object’s operation in terms of a particular queuing system.
The queuing system is considered preset if the following components are fully
described:
• input of customers (requests, messages, calls, etc.);
• number and types of servers;
• buffer size, or number of waiting places (waiting space) where the customers
arriving at the system when all servers are busy wait before their service starts;
• service time of customers;
• queue discipline, which determines the order in which customers are served in
the queue.
A commonly used shorthand notation for a queuing system, called Kendall
notation and introduced in 1953, is a set of four symbols separated by slashes:
A/B/n/m. The symbol n, n ≥ 1, specifies the number of identical parallel servers.
The symbol m, m ≥ 0, denotes the number of waiting places in the buffer in the
Russian-language literature and the maximum number of customers in the system
in the English-language literature. If m = ∞, then the fourth position in the queuing
system description is usually omitted. Symbol A describes the incoming flow of
customers, and symbol B characterizes the service time distribution; for example,
M/D/1 denotes a single-server queue with Poisson arrivals, deterministic service
times, and an unlimited buffer. Some possible values for these symbols
will be given and explained in the next section.
1.2 Input Flow, Service Time
The incoming flow of customers significantly determines the performance charac-
teristics of queuing systems. Therefore, a correct description of the input flow of
customers arriving at random times into the real system and the identification of its
parameters are very important tasks. A rigorous solution of this problem lies in the
mainstream of the theory of point random processes and is out of the scope of our
book. Here we give only brief information from the theory of homogeneous random
flows that is necessary to understand the subsequent results.
Customers arrive at the queue at random time points t1, t2, . . . , tn, . . . . Let τk =
tk − tk−1 be the length of the time interval between the arrivals of the (k − 1)th and
kth customers, k ≥ 1 (t0 is supposed to be 0) and xt be the number of moments tk
lying on the time axis to the left of point t, t ≥ 0.
The stochastic arrival process (input flow) is assumed to be determined if we
know the joint distribution of random values τk, k = 1, . . . , n for any n, n ≥ 1 or
the joint distribution of the random values xt for all t, t ≥ 0.
Definition 1.1 A stochastic arrival process is called stationary if for any integer m
and any non-negative numbers u1, . . . , um, the joint distribution of random variables
(xt+uk − xt), k = 1, . . ., m is independent of t.
On the intuitive level, this means that the distribution of the number of customers
arriving during a certain time interval depends on the length of this interval and does
not depend on the location of this interval on the time axis.
Definition 1.2 A stochastic arrival process is called ordinary if for any t it holds
that
$$\lim_{\Delta \to 0} \frac{P\{x_{t+\Delta} - x_t > 1\}}{\Delta} = 0.$$
On the intuitive level, this means that the probability of more than one arrival
during a short time interval is of higher order of vanishing compared to the
interval length. Roughly speaking, it means that simultaneous arrival of two or more
customers is almost impossible.
Definition 1.3 A stochastic arrival process is said to be a flow without aftereffect
if the numbers of customers arriving in non-overlapping time intervals are mutually
independent random variables.
Definition 1.4 A stochastic arrival process is said to be a flow with limited
aftereffect if the random variables τk, k ≥ 1 are mutually independent.
Definition 1.5 A stochastic arrival process is said to be recurrent (or renewal) if it
is a flow with limited aftereffect and the random values τk, k ≥ 1 are identically
distributed.
Let $A(t) = P\{\tau_k < t\}$ be the distribution function, which completely determines
the renewal arrival process.
If A(t) is the exponential distribution, $A(t) = 1 - e^{-\lambda t}$, then the first symbol
in Kendall's notation is M.
If distribution A(t) is degenerate, i.e., the customers arrive at regular intervals,
then the first symbol is D in Kendall’s notation.
If A(t) is hyperexponential,
$$A(t) = \sum_{k=1}^{n} q_k \left(1 - e^{-\lambda_k t}\right),$$
where $\lambda_k \ge 0$, $q_k \ge 0$, $k = 1, \dots, n$, and $\sum_{l=1}^{n} q_l = 1$, then in Kendall's
notation the first symbol is $HM_n$.
If A(t) is the Erlang distribution with parameters $(\lambda, k)$,
$$A(t) = \int_0^t \frac{\lambda (\lambda u)^{k-1}}{(k-1)!}\, e^{-\lambda u}\, du,$$
then the first symbol is $E_k$; here k is called the order of the Erlang distribution.
A more general class of distributions including hyperexponential and Erlang as
special cases is the so-called phase-type distribution. In Kendall’s notation, it is
denoted by PH. See [9] and the next chapter for detailed information about the
PH distribution, its properties and the probabilistic interpretation.
If no assumptions are made about the form of the distribution function A(t) then
the symbols G (General) or GI (General Independent) are used as the first symbol
in Kendall’s notation. Strictly speaking, the symbol G usage does not require an
input flow to be recurrent, while the symbol GI means exactly a recurrent flow. However,
in the literature these two symbols are sometimes used interchangeably.
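As a small illustration (added here, not from the book; Python with NumPy is assumed, and the rate λ = 2, the HM2 weights and rates, and the Erlang order 3 are arbitrary choices), the following sketch samples inter-arrival times corresponding to the first Kendall symbols introduced above and prints their variation coefficients.

import numpy as np

rng = np.random.default_rng(0)
n, lam = 100_000, 2.0

tau_M = rng.exponential(1.0 / lam, n)                    # M : exponential inter-arrival times
tau_D = np.full(n, 1.0 / lam)                            # D : deterministic (regular) arrivals
mix = rng.choice([5.0, 1.0], size=n, p=[0.3, 0.7])       # HM2: mixture of two exponential rates
tau_H = rng.exponential(1.0 / mix)
tau_E = rng.gamma(shape=3, scale=1.0 / lam, size=n)      # E3 : Erlang of order 3

for name, tau in [("M", tau_M), ("D", tau_D), ("HM2", tau_H), ("E3", tau_E)]:
    print("%3s: mean = %.3f, variation coefficient = %.2f"
          % (name, tau.mean(), tau.std() / tau.mean()))
# Expected: variation coefficient 1 for M, 0 for D, above 1 for HM2, below 1 for E3.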
Definition 1.6 The intensity (the rate) λ of the stationary stochastic arrival process
is the expectation (the mean number) of customers arriving per time unit,
$$\lambda = M\{x_{t+1} - x_t\} = \frac{M\{x_{t+T} - x_t\}}{T}, \quad T > 0.$$
Definition 1.7 The parameter α of a stationary stochastic arrival process is the
positive value determined as
$$\alpha = \lim_{\Delta \to 0} \frac{P\{x_{t+\Delta} - x_t \ge 1\}}{\Delta}.$$
Definition 1.8 A stationary ordinary input flow without aftereffect is called simple.
The following propositions are valid.
Proposition 1.1 The input flow is simple if and only if it is stationary Poisson, i.e.,
$$P\{x_{t+u} - x_t = k\} = \frac{(\lambda u)^k}{k!}\, e^{-\lambda u}, \quad k \ge 0.$$
Proposition 1.2 The input flow is simple if and only if it is recurrent with exponential
distribution of the time intervals between customer arrivals, $A(t) = 1 - e^{-\lambda t}$.
Proposition 1.3 If n customers of a simple input flow arrived during a time interval
of length T then the probability that a tagged customer arrived during time interval
τ inside the interval of length T does not depend on when the other customers
arrived and how the time interval τ is located inside T. This probability is τ/T.
Proposition 1.4 The intensity and the parameter of a simple flow are equal. The
mean number of customers arriving during the time interval T is equal to λT.
Proposition 1.5 Superposition of two independent simple flows having parameters
λ1 and λ2 is again a simple flow with parameter λ1 + λ2.
Proposition 1.6 A flow obtained from a simple flow of rate λ as a result of applying
the simplest procedure of recurrent sifting (an arbitrary customer is accepted to the
sifted flow with probability p and ignored with probability 1 − p) is a simple flow
of rate pλ.
Proposition 1.7 The flow obtained as a result of the superposition of n independent
recurrent flows having a uniformly small rate converges to a simple flow as n
increases.
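The following simulation sketch (an added illustration, not the authors' code; Python with NumPy is assumed, and the rates 1.5 and 0.7 and the sifting probability 0.3 are arbitrary) checks Propositions 1.1, 1.5, and 1.6 numerically.

import numpy as np
from math import exp, factorial

rng = np.random.default_rng(42)
T = 200_000.0

def simple_flow(lam, horizon):
    # Arrival epochs of a simple (stationary Poisson) flow of rate lam on [0, horizon).
    gaps = rng.exponential(1.0 / lam, int(2 * lam * horizon))
    epochs = np.cumsum(gaps)
    return epochs[epochs < horizon]

# Proposition 1.1: the number of arrivals in a unit interval is Poisson distributed.
lam = 1.5
arr = simple_flow(lam, T)
counts = np.bincount(arr.astype(int), minlength=int(T))
for k in range(4):
    print("P{k arrivals in a unit interval}, k=%d: empirical %.4f vs Poisson %.4f"
          % (k, (counts == k).mean(), lam ** k / factorial(k) * exp(-lam)))

# Proposition 1.5: superposition of two simple flows is simple with the summed rate.
merged = np.sort(np.concatenate([arr, simple_flow(0.7, T)]))
print("superposition rate: %.3f (expected 2.2)" % (len(merged) / T))

# Proposition 1.6: recurrent sifting with probability p gives a simple flow of rate p*lam.
p = 0.3
sifted = arr[rng.random(len(arr)) < p]
print("sifted-flow rate: %.3f (expected %.3f)" % (len(sifted) / T, p * lam))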
In the literature, a simple flow is often referred to as Poisson arrivals. The most
well-studied queuing systems are the ones with the Poisson input flow of customers.
This is largely explained by Proposition 1.2 and the well-known memoryless
property of the exponential distribution. For the exponentially distributed random
variable ν, this property is expressed in terms of conditional probabilities as follows:
$$P\{\nu \ge t + \tau \mid \nu \ge t\} = P\{\nu \ge \tau\}.$$
Let us explain this equation. The definition of conditional probability and the fact
that $P\{\nu < x\} = 1 - e^{-\lambda x}$ yield
$$P\{\nu \ge t + \tau \mid \nu \ge t\} = \frac{P\{\nu \ge t+\tau,\ \nu \ge t\}}{P\{\nu \ge t\}}
= \frac{P\{\nu \ge t+\tau\}}{P\{\nu \ge t\}}
= \frac{e^{-\lambda(t+\tau)}}{e^{-\lambda t}} = e^{-\lambda \tau} = P\{\nu \ge \tau\}.$$
From this property, it follows that the distribution of time intervals from an arbitrary
moment to the moment when the next customer of the Poisson input arrives is
independent of when the previous customer arrived. The property considerably
simplifies the analysis of queues with Poisson arrivals.
Proposition 1.7 explains why the Poisson input is often realistic in practical systems
(for example, the flow of customers arriving at an automatic telephone exchange is
the sum of a large number of independent small flows coming from individual
subscribers of the telephone network, and is therefore close to a simple flow); hence
the use of a Poisson input for modeling a real flow not only facilitates the
investigation of queuing systems but is also justified.
Below we present useful information about recurrent flows. Let an arbitrary
moment of time be fixed. We are interested in the distribution function F1(t) of
the time passed to an arbitrary moment from the previous customer arrival of the
recurrent flow, and the distribution function F2(t) of the time interval between the
given moment and the next customer arrival. It can be shown that
$$F_1(t) = F_2(t) = F(t) = a_1^{-1} \int_0^t (1 - A(u))\, du$$
where $a_1$ is the mean inter-arrival time, $a_1 = \int_0^{\infty} (1 - A(t))\, dt$. The value $a_1$ and the input rate λ are related as $a_1 \lambda = 1$.
Function F(t) is identical to A(t) if and only if the input flow is Poisson.
Definition 1.9 The instantaneous intensity μ(t), μ(t) ≥ 0, of a recurrent input flow is the value defined by
$$P\{x_{t+\Delta} - x_t \ge 1 \mid E_t\} = \mu(t)\Delta + o(\Delta)$$
where $E_t$ is the random event that a customer arrived at time 0 and no one arrived during (0, t]. Here o(Δ) denotes a value of higher order of vanishing than Δ (as Δ → 0).
The instantaneous intensity μ(t) is related to the distribution function A(t) as
$$A(t) = 1 - e^{-\int_0^t \mu(u)\, du}, \qquad \mu(t) = \frac{dA(t)/dt}{1 - A(t)}.$$
In the case of Poisson arrivals, the instantaneous intensity coincides with the
intensity.
In modern integrated digital communication networks (unlike traditional tele-
phone networks), data traffic no longer represents a superposition of a large number
of uniformly small independent recurrent flows. As a result, these flows are often neither simple nor even recurrent. To describe such flows, D. Lucantoni proposed the
formalism of batch Markov flows, see [85]. To denote them in Kendall’s symbols,
the abbreviation BMAP (Batch Markovian Arrival Process) is used. For more
information on BMAP flows and related queuing systems see [26, 85].
Note that the so-called input from a finite source (primitive flow, second-order
Poisson flow) is also popular in telephony. It is defined as follows.
Let there be a finite number n of objects, called request sources, generating requests independently of each other.
time during which it is not able to generate requests. After the busy state finishes,
the source goes into a free state. In this state, the source can generate a new request
during time exponentially distributed with parameter λ. After generating the request,
the source immediately goes into a busy state.
A primitive input flow of n request sources is a flow with a limited aftereffect
whose intensity λi is directly proportional to the number of currently available
sources at a given time: λi = λ(n − i) where i is the number of busy sources.
Concerning the customer service process in the system, it is usually assumed
that the service is recurrent, i.e., the consecutive service times of customers
are independent and identically distributed random variables. Let us denote their
distribution function by B(t). It is usually assumed that B(+0) = 0, i.e., customer
service cannot be instantaneous. It is also usually assumed that a random variable
having the distribution function B(t) has the required number of finite initial
moments.
To determine the type of service time distribution in Kendall’s classification, the
same symbols are used as for the type of arrival process. So, the symbol G means
either the absence of any assumptions about the service process, or it is identified
with the symbol GI meaning a recurrent service process, the symbol M means the
assumption that the service time is exponentially distributed, that is, $B(t) = 1 - e^{-\mu t}$, μ > 0, t ≥ 0; the symbol D denotes deterministic service times, etc.
The symbol SM (Semi-Markovian), see e.g., [94], means the service process
that is more general than a recurrent one, i.e., consecutive service times are the
consecutive times a semi-Markovian process stays in its states. An SM process is
supposed to have a finite state space and a fixed kernel.
1.3 Markov Random Processes
Markov random processes play an important role in analysis of queuing systems.
Therefore, we give some information from the theory of such processes that will be
used below.
Definition 1.10 A random process $y_t$, t ≥ 0, defined on a probability space and taking values in a certain numerical set Y is called Markov if for any positive integer n, any $y, u, u_n, \ldots, u_1 \in Y$ and any $\tau, t, t_n, \ldots, t_1$ ordered such that $\tau > t > t_n > \cdots > t_1$, the following relation holds:
$$P\{y_\tau < y \mid y_t = u,\ y_{t_n} = u_n, \ldots, y_{t_1} = u_1\} = P\{y_\tau < y \mid y_t = u\}. \qquad (1.1)$$
Process parameter t is considered to be the time. If τ is a future time point, t is
a present time point, and tk, k = 1, . . . , n are past points, the condition (1.1) can
be interpreted as follows: the future behavior of a Markov process is completely
determined by its state at the present time. For a non-Markov process, its future
behavior depends also on states of the process in the past.
In case a state space Y of a Markov process yt, t ≥ 0 is finite or countable, the
process is called a Markov chain (MC). If parameter t takes values in a discrete
set, the MC is called a discrete-time MC. If the parameter t takes values in some
continuous set then the MC is called a continuous-time MC.
An important special case of the continuous-time MC is a so-called birth-and-
death process.
1.3.1 Birth-and-Death Processes
Definition 1.11 A random process it, t ≥ 0, is called a birth-and-death process if
it satisfies the following conditions:
• the process state space is a set (or a subset) of non-negative integers;
• the sojourn time of the process in state i is exponentially distributed with parameter $\gamma_i$, $\gamma_i > 0$, and is independent of the past states of the process;
• after the process finishes its stay in state i, it transits to state i − 1 with probability $q_i$, $0 \le q_i \le 1$, or to state i + 1 with probability $p_i = 1 - q_i$. The probability $p_0$ is supposed to be 1.
A state of the process it, t ≥ 0 at time t is considered to be the size of some
population at this point in time. Transition from state i to state i + 1 is assumed to
be the birth of a new population member, and transition to state i − 1 is considered
to be population member death. Thus, this interpretation of the process explains its
name.
Denote by Pi(t) the probability that the process it is in state i at time t.
Proposition 1.8 The probabilities $P_i(t) = P\{i_t = i\}$, i ≥ 0 satisfy the following system of linear differential equations:
$$P_0'(t) = -\lambda_0 P_0(t) + \mu_1 P_1(t), \qquad (1.2)$$
$$P_i'(t) = \lambda_{i-1} P_{i-1}(t) - (\lambda_i + \mu_i) P_i(t) + \mu_{i+1} P_{i+1}(t), \quad i \ge 1, \qquad (1.3)$$
where $\lambda_i = \gamma_i p_i$, $\mu_i = \gamma_i q_i$, i ≥ 0.
Proof We apply the so-called Δt-method. This method is widely used in the analysis of continuous-time MCs and Markov processes.
The main point of this method is as follows. We fix the time t and a small time increment Δt. The probability state distribution of the Markov process at time point t + Δt is expressed via its probability state distribution at an arbitrary time point t and the probabilities of the possible process transitions during time Δt. The result is a system of difference equations for the probabilities $P_i(t)$. By dividing both sides of these equations by Δt and taking the limit as Δt tends to 0, we obtain a system of differential equations for the process state probabilities.
Below we apply the Δt-method to derive (1.3). Denote by $R(i, j, t, \Delta t)$ the probability that the process $i_t$ transits from state i to state j during the time interval $(t, t + \Delta t)$. Since the time the process $i_t$ spends in state i has an exponential distribution possessing the property of no aftereffect, the time passing from the time point t to the completion of the process sojourn time in state i is also exponentially distributed with parameter $\gamma_i$. The probability that the process $i_t$ exits state i during the time interval $(t, t + \Delta t)$ is the probability that a time exponentially distributed with parameter $\gamma_i$ expires during time Δt. Taking into account the definition of a distribution function, this probability is
$$1 - e^{-\gamma_i \Delta t} = \gamma_i \Delta t + o(\Delta t). \qquad (1.4)$$
That is, the probability that the process changes its state during time Δt is a quantity of order Δt. Hence, the probability that the process $i_t$ makes two or more transitions during time Δt is of order $o(\Delta t)$. Taking into account that the birth-and-death process makes one-step transitions only to the neighboring states, we obtain $R(i, j, t, \Delta t) = o(\Delta t)$ for $|i - j| > 1$.
Using the above arguments and the formula of total probability, we obtain the relations
$$P_i(t + \Delta t) = P_{i-1}(t) R(i-1, i, t, \Delta t) + P_i(t) R(i, i, t, \Delta t) + P_{i+1}(t) R(i+1, i, t, \Delta t) + o(\Delta t). \qquad (1.5)$$
From the description of the process and formula (1.4) it follows that
$$R(i-1, i, t, \Delta t) = \gamma_{i-1} \Delta t\, p_{i-1} + o(\Delta t),$$
$$R(i, i, t, \Delta t) = 1 - \gamma_i \Delta t + o(\Delta t), \qquad (1.6)$$
$$R(i+1, i, t, \Delta t) = \gamma_{i+1} \Delta t\, q_{i+1} + o(\Delta t).$$
Substituting relations (1.6) into (1.5) and using the notation $\lambda_i$ and $\mu_i$, we rewrite (1.5) in the form
$$P_i(t + \Delta t) - P_i(t) = P_{i-1}(t)\lambda_{i-1}\Delta t - P_i(t)(\lambda_i + \mu_i)\Delta t + P_{i+1}(t)\mu_{i+1}\Delta t + o(\Delta t). \qquad (1.7)$$
Dividing both sides of this equation by Δt and letting Δt tend to 0, we obtain relation (1.3). Formula (1.2) is derived similarly.
Proposition 1.8 is proved.
To solve an infinite system of differential equations (1.2) and (1.3) by means of
Laplace transforms pi(s) of probabilities Pi(t) (see below), the system is reduced
to an infinite system of linear algebraic equations. However, this system can also
be solved explicitly only for special cases, e.g., when the tridiagonal matrix of this
system has additional specific properties (for example, λi = λ, i ≥ 0, μi =
μ, i ≥ 1). Situations when the resulting system of equations for the probabilities
Pi(t) cannot be explicitly solved are rather typical in queuing theory. Therefore, in
spite of the fact that sometimes these probabilities depending on t and characterizing
the process dynamics (for the known initial state i0 of the process or the known
probability distribution of i0) are of significant practical interest, usually we have to
deal with the so-called stationary probabilities of the process
$$\pi_i = \lim_{t \to \infty} P_i(t), \quad i \ge 0. \qquad (1.8)$$
Positive limit (stationary) probabilities πi may not always exist, and the condi-
tions for their existence are usually established by so-called ergodic theorems.
For the birth-and-death process we are considering, the following result can be
proved.
Proposition 1.9 The stationary probability distribution (1.8) of the birth-and-death
process exists if the following series converges:
$$\sum_{i=1}^{\infty} \rho_i < \infty, \qquad (1.9)$$
where
$$\rho_i = \prod_{l=1}^{i} \frac{\lambda_{l-1}}{\mu_l}, \quad i \ge 1, \qquad \rho_0 = 1,$$
and the following series diverges:
$$\sum_{i=1}^{\infty} \prod_{l=1}^{i} \frac{\mu_l}{\lambda_l} = \infty. \qquad (1.10)$$
Note that the stationary probabilities $\pi_i$, i ≥ 0 are calculated as
$$\pi_i = \pi_0 \rho_i, \quad i \ge 1, \qquad \pi_0 = \left( \sum_{i=0}^{\infty} \rho_i \right)^{-1}. \qquad (1.11)$$
Proof The last part of the Proposition is elementary to prove. We assume that
conditions (1.9) and (1.10) are valid, and the limits (1.8) exist. Let t tend to
infinity in (1.2), (1.3). The derivatives $P_i'(t)$ tend to zero. The existence of these
derivative limits follows from the existence of limits on the right-hand side of
system (1.2), (1.3). The zero limits of the derivatives follow from the fact that the
assumption that the limits are non-zero contradicts the boundedness of probabilities:
0 ≤ Pi(t) ≤ 1.
As a result, from (1.2), (1.3) we obtain the system of linear algebraic equations
for the distribution πi, i ≥ 0:
$$-\lambda_0 \pi_0 + \mu_1 \pi_1 = 0, \qquad (1.12)$$
$$\lambda_{i-1}\pi_{i-1} - (\lambda_i + \mu_i)\pi_i + \mu_{i+1}\pi_{i+1} = 0, \quad i \ge 1. \qquad (1.13)$$
Using the notation xi = λi−1πi−1 − μiπi, i ≥ 1, the system (1.12), (1.13) can be
rewritten in the form
x1 = 0, xi − xi+1 = 0
resulting in xi = 0, i ≥ 1, which in turn implies the validity of relations
λi−1πi−1 = μiπi, i ≥ 1. (1.14)
It follows that πi = ρiπ0, i ≥ 1. The formula for the probability π0 follows from
the normalization condition.
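As an illustration of formula (1.11), the following sketch (not part of the original text) computes the stationary probabilities of a birth-and-death process numerically; the truncation level N and the constant rates used in the example are arbitrary choices.

```python
import numpy as np

def bd_stationary(lam, mu, N):
    """Stationary probabilities pi_0..pi_N of a birth-and-death process
    via formula (1.11), truncated at level N (N must be large enough
    for the tail of the series to be negligible).
    lam(i): birth rate in state i (i >= 0); mu(i): death rate in state i (i >= 1)."""
    rho = np.ones(N + 1)
    for i in range(1, N + 1):
        rho[i] = rho[i - 1] * lam(i - 1) / mu(i)
    return rho / rho.sum()

# Example: constant rates lam_i = 1.0, mu_i = 2.0, i.e. load rho = 0.5
pi = bd_stationary(lambda i: 1.0, lambda i: 2.0, N=200)
print(pi[:4])          # ~ (1 - 0.5) * 0.5**i = 0.5, 0.25, 0.125, ...
```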
1.3.2 Method of Transition Intensity Diagrams
Note that there is an effective method to obtain equations of type (1.12), (1.13)
(so-called equilibrium equations) for continuous-time MCs (and the birth-and-death
processes, in particular) without using equations for non-stationary probabilities.
This alternative method is called the method of transition intensity diagrams.
The main idea of the method is as follows. The behavior of a continuous-time
MC is described by an oriented graph. The nodes of the graph correspond to the
possible states of the chain. Arcs of the graph correspond to possible one-step
transitions between chain states. Each arc is supplied with a number equal to the
intensity of the corresponding transition. To obtain a system of linear algebraic
equations for stationary probabilities, the so-called flow conservation principle is
used. This principle is as follows. If we make a cross-section of the graph, that
is we remove some arcs in such a way that a disjoint graph is obtained, then the
flow out of one part of the cut graph to the other is equal to the flow in the opposite
direction. The flow is assumed to be the sum (over all removed arcs) of the stationary
probabilities of the nodes which are the origins of the removed arcs multiplied by
the corresponding transition intensities. Equating the flows over all cross-sections
selected in an appropriate way, we obtain the required system of equations.
Note that we get exactly the equilibrium equations obtained by applying the Δt-method (called the equations of global equilibrium) if the cross-section in the graph
is done by cutting all arcs around the chosen graph node. Sometimes, due to a lucky
choice of cross-section in the graph, it is possible to obtain a much simpler system
of equations (called local equilibrium equations). In particular, if we depict the
behavior of a birth-and-death process considered in the form of a string graph and
make a cut not around the node corresponding to the state i (in this case we obtain
Eqs. (1.12) and (1.13)) but between nodes corresponding to the states i and i + 1
then we immediately obtain simpler equations (1.14) from which formulas (1.11)
automatically follow.
1.3.3 Discrete-Time Markov Chains
Below we give brief information from the theory of discrete-time MCs. More
detailed information can be found, for example, in [40].
Without loss of generality, we assume that the MC state space is the set of non-
negative integers (or some subset of it).
Definition 1.12 A homogeneous discrete-time MC $i_k$, k ≥ 1 is determined if
• the initial probability distribution of the chain states is determined:
ri = P{i0 = i}, i ≥ 0;
• the matrix P of the one-step transition probabilities pi,j defined as follows:
pi,j = P{ik+1 = j|ik = i}, i, j ≥ 0,
is determined.
The matrix P of the one-step transition probabilities pi,j is stochastic, that is, its
elements are non-negative numbers, and the sum of the elements of any row is equal
to one. The transition probability matrix for m steps is $P^m$.
Denote by Pi(k) = P{ik = i}, k ≥ 1, i ≥ 0 the probability that the MC is
in state i after the kth step. Potentially, the probabilities Pi(k) characterizing the
non-stationary behavior of the MC can be very interesting in solving the problem
of finding the characteristics of an object described by a given MC. However,
the problem of finding such probabilities is complicated. Therefore, the so-called
stationary probabilities of states of the MC are usually analyzed:
$$\pi_i = \lim_{k \to \infty} P_i(k), \quad i \ge 0. \qquad (1.15)$$
These probabilities are also called limiting, final, or ergodic. We shall only deal
with irreducible non-periodic chains for which the positive limits (1.15) exist, and
the listed alternative names of stationary probabilities express practically the same
properties of chains: the existence of the limits (1.15), the independence of the limits
of the initial probability distribution of the states of the MC, and the existence of
the unique positive solution of the following system of linear algebraic equations
(equilibrium equations) for the stationary state probabilities:
$$\pi_j = \sum_{i=0}^{\infty} \pi_i p_{i,j}, \qquad (1.16)$$
$$\sum_{i=0}^{\infty} \pi_i = 1. \qquad (1.17)$$
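For a chain with a finite state space, the system (1.16), (1.17) can be solved directly by linear algebra. The sketch below (not part of the original text) shows one possible way to do this; the 3×3 transition matrix is an arbitrary example.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1 for a finite stochastic matrix P
    by replacing one equilibrium equation with the normalization condition."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
print(stationary_distribution(P))   # components sum to 1
```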
There are a number of results (theorems of Feller, Foster, Moustafa, Tweedie, etc.)
that allow us to determine whether the stationary probabilities πi (limits in (1.15))
exist or not for a specific type of transition probabilities. If, as a result of analysis
of the transition probabilities pi,j , it is established that the limits (1.15) exist under
the specific conditions on the parameters of the MC, then it is assumed that these
conditions are fulfilled and the system of equilibrium equations (1.16), (1.17) is
solved. The investigation of the MC is considered finished.
In case the chain state space is infinite, the solution of the system (1.16), (1.17)
becomes a tough task, and its effective solution is possible only if the matrix of
one-step transition probabilities has certain specific properties.
1.4 Probability Generating Function, Laplace
and Laplace-Stieltjes Transforms
In queuing theory, the apparatus of Laplace and Laplace-Stieltjes transforms and
probability generating functions (PGFs) is intensively used. In particular, we have
already mentioned the possibility of using Laplace transforms to reduce the problem
of solving a system of linear differential equations to the solution of a system of
linear algebraic equations. We give the basic information about these functions and
transforms.
Definition 1.13 The Laplace-Stieltjes transform (LST) of the distribution B(t) is a
function β(s) determined as follows:
$$\beta(s) = \int_0^{\infty} e^{-st}\, dB(t),$$
and the Laplace transform (LT) φ(s) is the function
$$\varphi(s) = \int_0^{\infty} e^{-st} B(t)\, dt.$$
If s is a pure imaginary variable, the LST coincides with the characteristic function
that corresponds to the distribution function B(t). The right half-plane of the
complex plane is usually considered to be the analyticity domain for functions
β(s), φ(s). However, without substantial loss of generality, within the framework
of this chapter we can consider s to be a real positive number.
Let us note some of the properties of the Laplace-Stieltjes transform.
Property 1.1 If both the LST β(s) and the LT φ(s) exist (that is, the corresponding improper integrals converge), then they are related as β(s) = sφ(s).
Property 1.2 If two independent random variables have LSTs β1(s) and β2(s) of
their distribution functions, then the LST of the distribution function of the sum of
these random variables is the product β1(s)β2(s).
Property 1.3 The LST of the derivative B′(t) is sβ(s) − sB(+0).
Property 1.4
$$\lim_{s \to 0} \beta(s) = \lim_{t \to \infty} B(t).$$
Property 1.5 Let $b_k$ be the kth initial moment of the distribution, $b_k = \int_0^{\infty} t^k\, dB(t)$, k ≥ 1. It is expressed via the LST as
$$b_k = (-1)^k \left. \frac{d^k \beta(s)}{ds^k} \right|_{s=0}. \qquad (1.18)$$
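As a small illustration of formula (1.18), the following symbolic sketch (not part of the original text) recovers the first two moments of the exponential distribution from its LST β(s) = μ/(μ + s); sympy is used here only as a convenient tool.

```python
import sympy as sp

s, mu = sp.symbols('s mu', positive=True)
beta = mu / (mu + s)     # LST of the exponential distribution B(t) = 1 - exp(-mu*t)

# Formula (1.18): b_k = (-1)^k d^k beta / ds^k evaluated at s = 0
b1 = sp.simplify((-1)**1 * sp.diff(beta, s, 1).subs(s, 0))   # expected 1/mu
b2 = sp.simplify((-1)**2 * sp.diff(beta, s, 2).subs(s, 0))   # expected 2/mu**2
print(b1, b2)
```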
Property 1.6 The LST β(s) can be given a probabilistic meaning as follows. We
assume that B(t) is a distribution function of a length of some time interval, and
a simple input of catastrophes having parameter s > 0 arrives within this time
interval. Then it is easy to see that β(s) is the probability that no catastrophe arrives
during the time interval.
Property 1.7 The LST β(s) considered as a function of the real variable s > 0 is a completely monotone function, i.e., it has derivatives $\beta^{(n)}(s)$ of any order and $(-1)^n \beta^{(n)}(s) \ge 0$, s > 0.
Definition 1.14 The probability generating function (PGF) of the probability distribution $q_k$, k ≥ 0, of a discrete random variable ξ is the function
$$Q(z) = M z^{\xi} = \sum_{k=0}^{\infty} q_k z^k, \quad |z| \le 1.$$
Let us mention the main properties of PGFs.
Property 1.8
|Q(z)| ≤ 1, Q(0) = q0, Q(1) = 1.
Property 1.9 A random variable ξ has the mth initial moment $E\xi^m$ if and only if there exists a finite left-side derivative $Q^{(m)}(1)$ of the PGF Q(z) at the point z = 1. The initial moments are easily calculated through the factorial moments
$$E\xi(\xi - 1)\cdots(\xi - m + 1) = Q^{(m)}(1).$$
In particular, $E\xi = Q'(1)$.
Property 1.10 In principle, the PGF Q(z) allows calculation (generation) of the probabilities $q_i$ using the following formula:
$$q_i = \frac{1}{i!} \left. \frac{d^i Q(z)}{dz^i} \right|_{z=0}, \quad i \ge 0. \qquad (1.19)$$
Property 1.11 The PGF Q(z) can be given a probabilistic sense as follows. We
interpret the random variable ξ as the number of customers arriving during a certain
period of time. Each customer is colored red with probability z, 0 ≤ z ≤ 1, and is
colored blue with the complementary probability. Then it follows from the formula
of the total probability that Q(z) is the probability that only red customers arrive
during the considered time interval.
Thus, if the PGF Q(z) of the probabilities qk, k ≥ 0 is known, we can
easily calculate the moments of this distribution and, in principle, can calculate
the probabilities qk, k ≥ 0. If the direct calculation by (1.19) is difficult, we
can use the method of PGF inversion by expanding it into simple fractions or by
numerical methods (see, for example, [23, 97]). When solving practical problems, one can also try to approximate this distribution by a simpler one that matches a given number of its moments.
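For instance, formula (1.19) and Property 1.9 can be checked symbolically on the geometric distribution, whose PGF is known in closed form. The snippet below is an illustrative sketch (not part of the original text); sympy is used only for symbolic differentiation.

```python
import sympy as sp

z, rho = sp.symbols('z rho', positive=True)
Q = (1 - rho) / (1 - rho * z)       # PGF of the geometric distribution q_k = (1-rho)*rho**k

# Formula (1.19): q_i = Q^{(i)}(0) / i!
q = [sp.simplify(sp.diff(Q, z, i).subs(z, 0) / sp.factorial(i)) for i in range(4)]
print(q)                            # (1-rho), (1-rho)*rho, (1-rho)*rho**2, ...

# Property 1.9: E xi = Q'(1)
print(sp.simplify(sp.diff(Q, z, 1).subs(z, 1)))   # rho/(1-rho)
```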
1.5 Single-Server Markovian Queuing Systems
The most well-studied queuing systems are the single-server systems with a Poisson
input or (and) an exponential distribution of the service time. This is explained
by the fact that the processes that researchers are primarily interested in are
one-dimensional and, in addition, are either Markovian or easily treatable by a
Markovization procedure by considering them only at embedded time moments
or by expansion of the process phase space. These processes are the number it of
customers in the system at time t, the waiting time wt of a customer that arrives (or
may arrive) at time t, etc.
1.5.1 M/M/1 Queue
Let’s consider the system M/M/1, that is a single-server queuing system with
unlimited buffer (or unlimited waiting space), a simple flow of customers (a
stationary Poisson input) with parameter λ and exponentially distributed service
time with parameter μ.
Analyzing the behavior of the system, we easily see that the process $i_t$ of the queue length at time t is a birth-and-death process with parameters
$$\gamma_0 = \lambda, \qquad \gamma_i = \lambda + \mu, \quad i \ge 1,$$
$$p_i = \int_0^{\infty} e^{-\mu t} \lambda e^{-\lambda t}\, dt = \frac{\lambda}{\lambda + \mu}, \quad i \ge 1.$$
Therefore, Propositions 1.8 and 1.9 are valid with parameters $\lambda_i = \lambda$, i ≥ 0, $\mu_i = \mu$, i ≥ 1, so the value $\rho_i$ in the statement of Proposition 1.9 is given by $\rho_i = \rho^i$ where ρ = λ/μ.
Parameter ρ characterizing the ratio of the input flow intensity and the service
intensity is called the system load and plays an important role in queuing theory.
Checking the condition for the existence of the stationary state distribution of the process $i_t$, t ≥ 0 given in the statement of Proposition 1.9, we easily see that the stationary distribution of the number of customers in the system exists if the following condition holds:
$$\rho < 1, \qquad (1.20)$$
which is often called the stability condition. Below we assume that this condition
holds.
Note that for most single-server queuing systems, the stability condition also has
the form (1.20), which agrees well with the following intuitive considerations: in
order that the system does not accumulate an infinite queue, it is necessary that the
customers are served, on average, faster than they arrive.
Thus, we can formulate the following corollary of Proposition 1.9.
Proposition 1.10 The stationary distribution $\pi_i$, i ≥ 0 of the number of customers in the M/M/1 queue is given by
$$\pi_i = \rho^i (1 - \rho), \quad i \ge 0. \qquad (1.21)$$
It follows that the probability $\pi_0$ that the system is idle at any time is 1 − ρ, and the average number L of customers in the system is given by the formula
$$L = \sum_{i=0}^{\infty} i\pi_i = \frac{\rho}{1 - \rho}. \qquad (1.22)$$
The average queue length $L_o$ is defined by the formula
$$L_o = \sum_{i=1}^{\infty} (i - 1)\pi_i = L - \rho = \frac{\rho^2}{1 - \rho}. \qquad (1.23)$$
In situations where the distribution of inter-arrival times and the service time distribution are unknown and only their means are known, formulas (1.22) and (1.23) are sometimes used to estimate (roughly) the average number of customers in the system and the average queue length at an arbitrary time.
As stated above, another interesting characteristic of the queuing system is the distribution of the waiting time $w_t$ of a customer arriving at time t (i.e., the time from the moment the customer arrives at the queue to the time its service starts).
Denote by W(x) the stationary distribution of the process $w_t$:
$$W(x) = \lim_{t \to \infty} P\{w_t < x\}, \quad x \ge 0.$$
We assume that customers are served in the order they arrive at the system. Such a discipline for choosing customers for service is usually denoted FIFO (First In, First Out) or, equivalently, FCFS (First Come, First Served).
Proposition 1.11 The stationary distribution W(x) of the waiting time in an M/M/1 queue is given by
$$W(x) = 1 - \rho e^{(\lambda - \mu)x}. \qquad (1.24)$$
Proof The waiting time of an arbitrary customer depends on the number of
customers present in the system upon its arrival. For an M/M/1 queue, the queue
length distributions both at an arbitrary time and at an arbitrary time of a customer
arrival coincide and are given by formula (1.21). A customer arriving at an empty
system (with probability π0) has zero waiting time. An arriving customer that sees
i customers in the system (with probability πi) waits for a time having an Erlang
distribution with parameters (μ, i). The last conclusion follows from the facts that,
firstly, due to the lack of aftereffect in the exponential distribution, the service
time remaining after the arrival moment of an arbitrary customer has the same
exponential distribution with parameter μ as the total service time, and secondly,
the sum of i independent exponentially distributed random variables with parameter
μ is an Erlang random variable with parameters (μ, i).
From these arguments and (1.21), it follows that
$$W(x) = 1 - \rho + \sum_{i=1}^{\infty} \rho^i (1 - \rho) \int_0^x \mu \frac{(\mu t)^{i-1}}{(i-1)!} e^{-\mu t}\, dt = 1 - \rho + (1 - \rho)\lambda \int_0^x e^{(\lambda - \mu)t}\, dt.$$
This immediately implies (1.24).
The average waiting time W in the system is calculated as
$$W = \int_0^{\infty} (1 - W(x))\, dx = \lambda^{-1} \frac{\rho^2}{1 - \rho}. \qquad (1.25)$$
The average sojourn time V in the system (i.e., the time from the moment a customer arrives at the system until the end of its service) is given by the formula
$$V = W + \mu^{-1} = \lambda^{-1} \frac{\rho}{1 - \rho}. \qquad (1.26)$$
Comparing expression (1.25) for the mean waiting time W and formula (1.23) for
the mean queue length Lo, as well as formula (1.26) for the mean sojourn time V
with formula (1.22) for the mean number L of customers in the system, we see that
$$L_o = \lambda W, \qquad L = \lambda V. \qquad (1.27)$$
Note that these formulas hold for many queuing systems more general than the M/M/1 queue; they are called Little's formulas. Their practical significance is that they eliminate the need for a direct calculation of W and V when $L_o$ and L are known, and vice versa.
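The following short sketch (not part of the original text) evaluates formulas (1.22), (1.23), (1.25), (1.26) for arbitrary rates λ = 3, μ = 5 and verifies Little's formulas (1.27) numerically.

```python
lam, mu = 3.0, 5.0                 # arbitrary rates with lam < mu
rho = lam / mu

L  = rho / (1 - rho)               # (1.22) mean number of customers in the system
Lo = rho**2 / (1 - rho)            # (1.23) mean queue length
W  = rho**2 / (lam * (1 - rho))    # (1.25) mean waiting time
V  = rho / (lam * (1 - rho))       # (1.26) mean sojourn time

# Little's formulas (1.27): Lo = lam * W and L = lam * V
assert abs(Lo - lam * W) < 1e-12 and abs(L - lam * V) < 1e-12
print(L, Lo, W, V)
```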
1.5.2 M/M/1/n Queue
Now we consider an M/M/1/n queuing system, i.e., a single-server queue with a
buffer of capacity n − 1. An arriving customer that finds the server busy waits for service in the buffer if there is free space. If all n − 1 waiting places are occupied, the customer leaves the system unserved (is lost).
Denote by it, t ≥ 0 the number of customers in the system at time t. This process
can take values from {0, 1, . . ., n}. It is easy to see that the process it, t ≥ 0 is a
birth-and-death process, and non-zero parameters λi, μi are defined as λi = λ, 0 ≤
i ≤ n − 1, μi = μ, 1 ≤ i ≤ n. Then it follows from the formula for the stationary
state probabilities of a birth-and-death process that the stationary probabilities of
the number of customers in the system under consideration have the form
$$\pi_i = \rho^i\, \frac{1 - \rho}{1 - \rho^{n+1}}, \quad 0 \le i \le n. \qquad (1.28)$$
One of the most important characteristics of a system with possible loss of
customers is the probability Ploss that an arbitrary customer is lost. For the
considered queuing system, it can be shown that this probability coincides with
the probability that all waiting places are occupied at an arbitrary time, that is the
following formula holds:
$$P_{loss} = \rho^n\, \frac{1 - \rho}{1 - \rho^{n+1}}. \qquad (1.29)$$
Formula (1.29) can be used to choose the required buffer size depending on the
system load and the value of the permissible probability of customer loss.
Note that, unlike the M/M/1 queue, the stationary distribution of the number
of customers in M/M/1/n exists for any finite values of the load ρ. For ρ = 1
calculations by formulas (1.28), (1.29) can be performed using L’Hospital’s rule.
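A small sketch of such a buffer-sizing computation based on (1.29) is given below (not part of the original text); the load ρ = 0.8 and the admissible loss probability 10⁻³ are arbitrary illustrative values.

```python
def p_loss(rho, n):
    """Loss probability (1.29) of the M/M/1/n queue (rho != 1)."""
    return rho**n * (1 - rho) / (1 - rho**(n + 1))

def min_buffer(rho, eps):
    """Smallest n with P_loss <= eps (simple linear search)."""
    n = 1
    while p_loss(rho, n) > eps:
        n += 1
    return n

print(p_loss(0.8, 10))            # loss probability for n = 10, rho = 0.8
print(min_buffer(0.8, 1e-3))      # buffer size needed for a 0.1% loss probability
```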
1.5.3 System with a Finite Number of Sources
In Sect. 1.2, we introduced the definition of an input flow from a finite number of
sources. Let us briefly consider the queuing model describing service of this flow.
This model was first investigated by T. Engset.
Consider a single-server queuing system with a buffer of size m and an input flow
of customers from m identical sources. Any source is in a busy state (and therefore
cannot generate customers) until its previous customer is served. The service time
of any customer from any source has an exponential distribution with parameter μ.
In a free state, the source can generate the next customer after a time exponentially
distributed with parameter λ and then it goes into a busy state.
Denote by it, t ≥ 0 the number of customers in the system (in the buffer and on
the server) at time point t. This process has state space {0, 1, . . ., m}. It is easy to
see that the process it, t ≥ 0 is a birth-and-death process, and its parameters (the
birth rate λi and the death rate μi), are defined as
λi = λ(m − i), 0 ≤ i ≤ m − 1, μi = μ, 1 ≤ i ≤ m.
From formula (1.11) for the stationary probabilities of a birth-and-death process,
we obtain the following expressions for stationary state probabilities πi, i =
0, . . . , m of the number of customers in the system in an obvious manner:
$$\pi_i = \pi_0 \rho^i \frac{m!}{(m-i)!}, \quad 1 \le i \le m, \qquad (1.30)$$
where the probability $\pi_0$ is defined by the normalization condition
$$\pi_0 = \left( \sum_{j=0}^{m} \rho^j \frac{m!}{(m-j)!} \right)^{-1}. \qquad (1.31)$$
Using formulas (1.30), (1.31), it is easy to calculate the average number of
customers in the system and in the queue. It is also possible to calculate the so-
called stationary coefficient kR of a source’s readiness (the probability that a source
is ready to generate a customer at an arbitrary time):
$$k_R = \sum_{i=0}^{m-1} \frac{m - i}{m}\, \pi_i = \frac{\mu(1 - \pi_0)}{\lambda m}.$$
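The following sketch (not part of the original text) evaluates (1.30), (1.31) and the readiness coefficient $k_R$ for arbitrary parameters λ, μ, m, with ρ = λ/μ.

```python
from math import factorial

def engset(lam, mu, m):
    """Stationary probabilities (1.30)-(1.31) and readiness coefficient k_R
    for the finite-source model of Sect. 1.5.3."""
    rho = lam / mu
    weights = [rho**i * factorial(m) / factorial(m - i) for i in range(m + 1)]
    pi0 = 1.0 / sum(weights)
    pi = [pi0 * w for w in weights]
    k_R = mu * (1 - pi0) / (lam * m)
    return pi, k_R

pi, k_R = engset(lam=0.2, mu=1.0, m=5)    # arbitrary illustrative parameters
print(pi, k_R)
```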
1.6 Semi-Markovian Queuing Systems and Their
Investigation Methods
As noted in the previous section, the process it, t ≥ 0, which is the number of
customers in an M/M/1 queue at time t, is a birth-and-death process that is a special
case of a continuous-time MC. A similar process for M/G/1-type queuing systems
with a non-exponential service time distribution and GI/M/1-type queues with
non-Poisson input is no longer a Markov process.
The obvious reason for this fact is that the process behavior after some fixed time
point t is not completely determined in general by the state of the process at that
point but also depends on how long ago the current customer service began or the
previous arrival happened.
Nevertheless, the study of the process it, t ≥ 0 can be reduced to the investigation
of a Markov process. The first method of Markovization (the so-called embedded
MC method) is illustrated by the example of an M/G/1 queuing system in
Sect. 1.6.1 and a GI/M/1 queue in Sect. 1.6.2. One of the variety of other methods
of Markovization is the method of introducing an additional variable and it is
illustrated in Sect. 1.6.3. In Sect. 1.6.4, one more powerful method to investigate
queues is briefly described and illustrated with examples, namely the method of
introducing an additional event.
1.6.1 Embedded Markov Chain Method to Study an M/G/1
We consider a single-server queue with an unlimited buffer and a Poisson input
with parameter λ. The customer service time has an arbitrary distribution with dis-
tribution function B(t), Laplace-Stieltjes transform β(s), and finite initial moments
bk, k = 1, 2.
As noted above, the process it, t ≥ 0, which is the number of customers in the
system at time t, is not Markovian because we cannot describe the behavior of the
process after an arbitrary time without looking back. At the same time, it is obvious
that if we know the state i, i > 0 of the process $i_t$ at the time $t_k$ of the service
completion of the kth customer then we can predict the state of the process it at
the time of the (k + 1)th customer service completion, which occurs after a random
time u with distribution B(t). During this time, a random number of customers
(distributed according to a Poisson law with parameter λu) can arrive at the system
and one customer (which is served) leaves the system.
We can now introduce the idea of the method of an embedded MC. In general,
the method is as follows. For the non-Markovian process it, t ≥ 0, we find a
sequence of time moments tk, k ≥ 1 such that the process itk , k ≥ 1 forms an MC.
Using the methods of MC theory, we investigate the stationary distribution of the
embedded MC and then, by means of this distribution, we reconstruct the stationary
state distribution of the initial process. To this end, usually the theory of renewal
processes or Markovian renewal processes is applied.
Let tk be the kth customer service completion moment in an M/G/1 queue. The
process itk , k ≥ 1, is a discrete-time homogeneous MC.
It was noted above that an effective investigation of a discrete-time MC with a
countable state space is possible only if the matrix of its one-step transition proba-
bilities has a special structure. The matrix P of one-step transition probabilities pi,j
of the embedded MC itk , k ≥ 1 has such a structure. Below we find the entries of
the matrix P.
Let the number of customers $i_{t_k}$ in the system be i, i > 0, at the moment $t_k$ of the kth customer service completion. Since the number of customers in the system makes a jump at time $t_k$, we will assume that $i_{t_k} = i_{t_k+0}$, i.e., the served customer exits the system and is no longer counted. Since i > 0, the next customer
is immediately accepted for service, which leaves the system at the next moment
tk+1 of service completion. Therefore, in order that j customers are in the system at
time tk+1 + 0 it is necessary that j − i + 1 customers arrive at the queue during the
time interval (tk, tk+1). The probability of this event is fj−i+1 where the quantities
$f_l$ are given by the formula
$$f_l = \int_0^{\infty} \frac{(\lambda t)^l}{l!} e^{-\lambda t}\, dB(t), \quad l \ge 0. \qquad (1.32)$$
Thus, the transition probability $p_{i,j}$, i > 0, j ≥ i − 1 is determined by
$$p_{i,j} = f_{j-i+1}, \quad i > 0, \ j \ge i - 1. \qquad (1.33)$$
Suppose now that the system becomes empty after the moment tk, i.e., itk = 0.
Obviously, the system remains empty until the next customer arrival moment. From
now on, the system behaves exactly the same as after a service completion moment
with one customer left in the queue. Therefore, p0,j = p1,j , which implies that
p0,j = fj , j ≥ 0.
Thus, we have completely described the non-zero elements of the matrix of
one-step transition probabilities of the embedded MC. This matrix P has a special
structure
$$P = \begin{pmatrix}
f_0 & f_1 & f_2 & f_3 & \cdots \\
f_0 & f_1 & f_2 & f_3 & \cdots \\
0 & f_0 & f_1 & f_2 & \cdots \\
0 & 0 & f_0 & f_1 & \cdots \\
0 & 0 & 0 & f_0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}. \qquad (1.34)$$
Such a structure significantly facilitates analysis of this chain. Using the well-
known ergodicity criteria, it is easy to verify that the embedded MC under
consideration has a stationary distribution if and only if
$$\rho < 1, \qquad (1.35)$$
where the system load ρ is equal to λb1.
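As an illustration, the probabilities $f_l$ of (1.32) and a truncated version of the matrix (1.34) can be computed numerically. The sketch below (not part of the original text) assumes an Erlang-2 service time density with mean 0.5 and λ = 1 (arbitrary choices, so ρ = 0.5), uses scipy only for numerical integration, and truncates the chain at an arbitrary level K.

```python
import numpy as np
from scipy import integrate
from math import factorial, exp

lam = 1.0                          # arrival rate (arbitrary)

def b_density(t):                  # Erlang-2 service-time density with mean b1 = 0.5
    mu = 4.0
    return mu**2 * t * np.exp(-mu * t)

def f(l):
    """Probability (1.32) that l Poisson arrivals occur during one service time."""
    integrand = lambda t: (lam * t)**l / factorial(l) * exp(-lam * t) * b_density(t)
    val, _ = integrate.quad(integrand, 0, np.inf)
    return val

K = 20                              # truncation level for the embedded chain
fl = np.array([f(l) for l in range(K)])
P = np.zeros((K, K))
P[0, :] = fl                        # row for state 0 coincides with the row for state 1
for i in range(1, K):
    P[i, i - 1:] = fl[:K - i + 1]   # p_{i,j} = f_{j-i+1} for j >= i-1, see (1.33)
print(P[:4, :4])
```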
The equilibrium equations (1.16) can be rewritten by taking into account the
form (1.34) of the one-step transition probability matrix as follows:
$$\pi_j = \pi_0 f_j + \sum_{i=1}^{j+1} \pi_i f_{j-i+1}, \quad j \ge 0. \qquad (1.36)$$
To solve the infinite system of linear algebraic equations (1.36) we use the PGF.
Introduce the generating functions
$$\Pi(z) = \sum_{j=0}^{\infty} \pi_j z^j, \qquad F(z) = \sum_{j=0}^{\infty} f_j z^j, \quad |z| \le 1.$$
Taking into account the explicit form (1.32) of the probabilities $f_l$, l ≥ 0, we can obtain an explicit expression for the PGF F(z):
$$F(z) = \int_0^{\infty} e^{-\lambda(1-z)t}\, dB(t) = \beta(\lambda(1 - z)). \qquad (1.37)$$
Multiplying the equations of system (1.36) by the corresponding powers of z and summing them, we obtain
$$\Pi(z) = \pi_0 F(z) + \sum_{j=0}^{\infty} z^j \sum_{i=1}^{j+1} \pi_i f_{j-i+1}.$$
Changing the order of summation, we rewrite this relation in the form
$$\Pi(z) = \pi_0 F(z) + \sum_{i=1}^{\infty} \pi_i z^{i-1} \sum_{j=i-1}^{\infty} f_{j-i+1} z^{j-i+1} = \pi_0 F(z) + (\Pi(z) - \pi_0) F(z) z^{-1}. \qquad (1.38)$$
Note that we succeeded in reducing the double sum in (1.38) due to the specific
properties of the transition probability matrix (1.34), namely, due to the fact that the
transition probabilities $p_{i,j}$ of the embedded MC for i > 0 depend only on j − i
and do not depend on i and j separately. This property of the matrix is called quasi-
Toeplitz. It is also essential that all elements of the matrix below its subdiagonal are
zero.
Taking (1.37) into account, we can rewrite formula (1.38) in the following form:
$$\Pi(z) = \pi_0\, \frac{(1 - z)\beta(\lambda(1 - z))}{\beta(\lambda(1 - z)) - z}. \qquad (1.39)$$
Formula (1.39) defines the required PGF of the embedded MC up to the value of
the yet unknown probability π0 that the system is empty at an arbitrary moment of
service completion. To find this probability, we recall that the system of equilibrium
equations also contains Eq. (1.17) (the normalization condition). It follows from the
normalization condition that (1) = 1. Therefore, to find the probability π0, we
must substitute z = 1 in (1.39). However, a simple substitution does not give a
result since both the numerator and the denominator in (1.39) vanish.
To evaluate an indeterminate form, we can use L’Hospital’s rule. However,
when calculating the average number of customers $\Pi'(1)$ in the system at service
completion moments, we need to apply L’Hospital’s rule twice or even three times
to get the variance of the number of customers in the system and so on. In order
to avoid repeated application of L’Hospital’s rule, it is recommended to expand the
numerator and denominator of the fraction on the right-hand side of (1.39) in the
Taylor series as powers of (z − 1) (if we are interested in calculating the kth initial
moment of the queue length distribution then the values should be expanded in a
series up to (z − 1)k+1). Then we divide the numerator and denominator by (z − 1)
and then perform the operations of taking the derivatives and substituting z = 1.
Note that for calculation of the higher-order moments, one can use the Mathematica
package.
Using the considerations above, we obtain the following expressions for the
probability π0, the average number of customers L in the system, and the average
queue length Lo at service completion moments:
$$\pi_0 = 1 - \rho, \qquad (1.40)$$
$$L = \Pi'(1) = \rho + \frac{\lambda^2 b_2}{2(1 - \rho)}, \qquad L_o = L - \rho. \qquad (1.41)$$
Substituting expression (1.40) into formula (1.39), we obtain
$$\Pi(z) = (1 - \rho)\, \frac{(1 - z)\beta(\lambda(1 - z))}{\beta(\lambda(1 - z)) - z}. \qquad (1.42)$$
Formula (1.42) is called the Pollaczek-Khinchine formula for the PGF of the number
of customers in an M/G/1 queue.
Note that the expressions for L, Lo include the second initial moment b2 of the
service time distribution. Therefore, with the same average service times, the mean
queue lengths can differ substantially. So, the average queue length $L_o$ is $\frac{\rho^2}{1-\rho}$ for an exponential service time distribution (in this case $b_2 = \frac{2}{\mu^2}$), and half that for a deterministic service time ($b_2 = \frac{1}{\mu^2}$). With the Erlang service time distribution, the
average queue length Lo takes an intermediate value. And with hyperexponential
service, it can take significantly larger values. Therefore, estimating the type of the
service time distribution for a real model is very important. Taking into account only
the averages can result in a significant error when evaluating the system performance
characteristics.
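The following sketch (not part of the original text) illustrates this effect by evaluating $L_o = \lambda^2 b_2 / (2(1-\rho))$ from (1.41) for several service-time distributions with the same mean $b_1$; the hyperexponential value of $b_2$ is an arbitrary illustrative choice.

```python
# Mean queue length Lo for the M/G/1 queue with fixed mean service time b1
# but different second moments b2 (arbitrary parameters).
lam, b1 = 0.8, 1.0
rho = lam * b1

def Lo(b2):
    """Lo = L - rho with L from (1.41)."""
    return lam**2 * b2 / (2 * (1 - rho))

print("deterministic :", Lo(b1**2))          # b2 = b1^2
print("Erlang-2      :", Lo(1.5 * b1**2))    # b2 = 1.5 * b1^2
print("exponential   :", Lo(2 * b1**2))      # b2 = 2 * b1^2
print("hyperexponent.:", Lo(5 * b1**2))      # larger variance => larger b2 (illustrative)
```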
Thus, the problem of finding the stationary state distribution of the embedded
MC is solved. It should be mentioned, however, that we are not interested in this
MC but in the non-Markov process it, t ≥ 0, which is the number of customers in
the system at an arbitrary time. Let us introduce the stationary state distribution of
this process:
$$p_i = \lim_{t \to \infty} P\{i_t = i\}, \quad i \ge 0.$$
From the theory of renewal processes (see, for example, [22]), this distribution
exists under the same conditions as the embedded distribution (i.e., when con-
dition (1.35) is fulfilled) and is calculated through the embedded distribution as
follows:
$$p_0 = \tau^{-1} \pi_0 \int_0^{\infty} e^{-\lambda t}\, dt, \qquad (1.43)$$
$$p_i = \tau^{-1}\left[\pi_0 \int_0^{\infty}\!\!\int_0^t e^{-\lambda v}\lambda\, dv\, \frac{(\lambda(t - v))^{i-1}}{(i-1)!}\, e^{-\lambda(t - v)} (1 - B(t - v))\, dt + \sum_{l=1}^{i} \pi_l \int_0^{\infty} \frac{(\lambda t)^{i-l}}{(i-l)!} e^{-\lambda t} (1 - B(t))\, dt\right], \quad i \ge 1. \qquad (1.44)$$
Here τ is the average interval between the customer departure time points. For our
system (without loss of requests) τ = λ−1.
Let us introduce the PGF $P(z) = \sum_{i=0}^{\infty} p_i z^i$.
Multiplication of Eqs. (1.43) and (1.44) by the corresponding powers of z and further summation yield
$$P(z) = \tau^{-1}\left[\pi_0\left(\lambda^{-1} + \int_0^{\infty}\!\!\int_0^t e^{-\lambda v}\lambda\, dv\, \sum_{i=1}^{\infty} \frac{(\lambda(t - v))^{i-1} z^i}{(i-1)!}\, e^{-\lambda(t - v)} (1 - B(t - v))\, dt\right) + \sum_{i=1}^{\infty} z^i \sum_{l=1}^{i} \pi_l \int_0^{\infty} \frac{(\lambda t)^{i-l}}{(i-l)!} e^{-\lambda t} (1 - B(t))\, dt\right].$$
Changing the order of integration in the double integral and the order of summation in the double sum and evaluating the known sums, we obtain
$$P(z) = \tau^{-1}\left[\pi_0\left(\lambda^{-1} + z\lambda \int_0^{\infty} e^{-\lambda v}\, dv \int_v^{\infty} e^{-\lambda(1-z)(t - v)} (1 - B(t - v))\, dt\right) + \sum_{l=1}^{\infty} \pi_l z^l \int_0^{\infty} e^{-\lambda(1-z)t} (1 - B(t))\, dt\right].$$
Making the change of the integration variable u = t − v and taking into account the equality $\tau^{-1} = \lambda$ and the relation between the Laplace and Laplace-Stieltjes transforms, we have
$$P(z) = \pi_0\left[1 + \frac{z}{1 - z}\bigl(1 - \beta(\lambda(1 - z))\bigr)\right] + (\Pi(z) - \pi_0)\, \frac{1}{1 - z}\bigl(1 - \beta(\lambda(1 - z))\bigr).$$
By applying expression (1.39) for the PGF Π(z), elementary transformations yield
$$P(z) = \pi_0\, \frac{(1 - z)\beta(\lambda(1 - z))}{\beta(\lambda(1 - z)) - z}.$$
This proves the validity of the equation
$$P(z) = \Pi(z). \qquad (1.45)$$
Thus, for the considered M/G/1 system, the PGFs of the number of customers in the system at service completion moments and at arbitrary times coincide. Khinchine [57] called
this statement the basic law of the stationary queue.
We consider now the problem of finding the stationary distribution of the waiting
time and the sojourn time in the system. We assume that customers are served in
order of their arrival at the queue (the FIFO discipline). Let wt be the waiting time
and vt be the sojourn time for a customer arriving at the queue at the time t. Denote
by
$$W(x) = \lim_{t \to \infty} P\{w_t < x\}, \qquad V(x) = \lim_{t \to \infty} P\{v_t < x\} \qquad (1.46)$$
and
$$w(s) = \int_0^{\infty} e^{-sx}\, dW(x), \qquad v(s) = \int_0^{\infty} e^{-sx}\, dV(x).$$
The limits (1.46) exist if the inequality (1.35) holds true. Since the sojourn time
in the system is the sum of the waiting time and the service time, and customer
service time in the classical queuing models is assumed to be independent of the
state of the system (and of the waiting time), then the LST Property 1.2 yields
v(s) = w(s)β(s). (1.47)
A popular method to obtain an expression for LSTs w(s) and v(s) is to derive
the integro-differential Takács equation for the distribution of the virtual waiting
time (i.e., the time a virtual customer would wait if it entered the system at a given
moment), see, for example, [45].
We get these expressions in a different, simpler way. It is easy to see that for the
FIFO discipline, the number of customers left in the system at a service completion
time coincides with the number of customers that arrived at the system during the
time that the departing customer spent in the system. Hence the following equations
hold:
$$\pi_i = \int_0^{\infty} \frac{(\lambda x)^i}{i!} e^{-\lambda x}\, dV(x), \quad i \ge 0. \qquad (1.48)$$
Multiplying the equations (1.48) by the corresponding powers of z and summing
them up, we get
$$\Pi(z) = \int_0^{\infty} e^{-\lambda(1-z)x}\, dV(x) = v(\lambda(1 - z)).$$
Substituting the explicit form (1.42) of the PGF Π(z), taking into account (1.47), and making the change of variable s = λ(1 − z) in the relation above, we obtain the following formula:
$$w(s) = \frac{1 - \rho}{1 - \lambda\, \frac{1 - \beta(s)}{s}}. \qquad (1.49)$$
Formula (1.49) is called the Pollaczek-Khinchine formula for the LST of the waiting
time distribution in an M/G/1 queue.
Using (1.18) and formula (1.49) it is easy to derive the following expression for
the mean waiting time W:
$$W = \frac{\lambda b_2}{2(1 - \rho)}.$$
The average sojourn time V in the system is as follows:
$$V = b_1 + \frac{\lambda b_2}{2(1 - \rho)}. \qquad (1.50)$$
The comparison of Eqs. (1.41) and (1.50) yields Little’s formula again:
L = λV.
If it is necessary to find the form of the waiting time distribution function
W(x) and not just its LST, the inversion of the transform defined by (1.49) can
be performed by expanding the right-hand side of (1.49) into simple fractions (if
possible) or by numerical methods (see, e.g., [23] and [97]). The so-called Beneš formula can also be useful:
$$W(x) = (1 - \rho) \sum_{i=0}^{\infty} \rho^i \tilde{B}_i(x) \qquad (1.51)$$
where $\tilde{B}_i(x)$ is the order-i convolution of the distribution function
$$\tilde{B}(x) = b_1^{-1} \int_0^x (1 - B(u))\, du$$
and the convolution operation is defined recurrently:
$$\tilde{B}_0(x) = 1, \qquad \tilde{B}_1(x) = \tilde{B}(x), \qquad \tilde{B}_i(x) = \int_0^x \tilde{B}_{i-1}(x - u)\, d\tilde{B}(u), \quad i \ge 2.$$
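For the M/M/1 queue, $\tilde{B}(x)$ is itself exponential and the convolutions $\tilde{B}_i(x)$ are Erlang distribution functions, so the series (1.51) can be checked against formula (1.24). The sketch below (not part of the original text) does this for arbitrary λ = 1, μ = 2 and a truncation of the series at 60 terms.

```python
import numpy as np
from math import factorial

lam, mu, x = 1.0, 2.0, 1.5            # arbitrary parameters with rho = lam/mu < 1
rho = lam / mu

def erlang_cdf(i, rate, x):
    """CDF of the i-fold convolution of B~ for exponential service,
    i.e., an Erlang(i, rate) distribution; the 0-fold convolution equals 1."""
    if i == 0:
        return 1.0
    return 1.0 - sum((rate * x)**k / factorial(k) for k in range(i)) * np.exp(-rate * x)

# Benes series (1.51), truncated when the geometric weights rho**i become negligible
W_series = (1 - rho) * sum(rho**i * erlang_cdf(i, mu, x) for i in range(60))
W_closed = 1 - rho * np.exp((lam - mu) * x)    # formula (1.24) for the M/M/1 queue
print(W_series, W_closed)                      # the two values should agree
```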
1.6.2 The Method of Embedded Markov Chains for a GI/M/1
Queue
We briefly describe the application of the embedded-MC method to the analysis
of GI/M/1 systems with recurrent input flow and exponentially distributed service
time. Let A(t) be the distribution function of inter-arrival times, α(s) be its LST, and
$\lambda = a_1^{-1}$ be the intensity of arrivals. The intensity of the exponentially distributed
service time is denoted by μ.
The random process it, t ≥ 0, which is the number of customers in the system at
an arbitrary time, is non-Markovian here since the process behavior after an arbitrary
time moment t, t ≥ 0 is not completely determined by its state at that moment but
also depends on the time that has passed since the last customer arrived. Let the
time points tk, k ≥ 1 of the customer arrivals at the system be the embedded points.
Since the process $i_t$, t ≥ 0 makes a jump at these moments, for definiteness we assume that $i_{t_k} = i_{t_k-0}$, k ≥ 1, i.e., a customer entering the queue at this moment is
excluded from the queue length.
It is easy to see that the process itk , k ≥ 1 is a discrete MC with a state space that
coincides with the set of non-negative integers. The one-step transition probabilities
are calculated as follows:
$$p_{i,j} = \int_0^{\infty} \frac{(\mu t)^{i-j+1}}{(i-j+1)!} e^{-\mu t}\, dA(t), \quad 1 \le j \le i + 1, \ i \ge 0, \qquad (1.52)$$
$$p_{i,0} = 1 - \sum_{j=1}^{i+1} p_{i,j}, \quad i \ge 0, \qquad (1.53)$$
$$p_{i,j} = 0, \quad j > i + 1, \ i \ge 0. \qquad (1.54)$$
Formula (1.52) is obtained from the following considerations. Since j ≥ 1, the
customers were served constantly between embedded moments, and the number
of customers served during this time interval is i − j + 1. Since the service
time has an exponential distribution with parameter μ, the time points of service
completions can be considered a Poisson input (see Proposition 1.2), therefore (see
Proposition 1.1) the probability that i − j + 1 customers will be served during time t is $\frac{(\mu t)^{i-j+1}}{(i-j+1)!} e^{-\mu t}$. By averaging over all possible values of the inter-arrival times, we obtain formula (1.52). Formula (1.53) follows from the normalization condition.
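As an illustration, the transition probabilities (1.52)-(1.54) can be evaluated numerically for a concrete inter-arrival distribution. The sketch below (not part of the original text) assumes Erlang-2 inter-arrival times with mean 1 and μ = 2 (arbitrary choices) and uses scipy only for the numerical integration.

```python
import numpy as np
from scipy import integrate
from math import factorial, exp

mu = 2.0                           # service rate (arbitrary)

def a_density(t):                  # Erlang-2 inter-arrival density with mean a1 = 1
    nu = 2.0
    return nu**2 * t * np.exp(-nu * t)

def p(i, j):
    """One-step transition probability (1.52)-(1.54) of the embedded GI/M/1 chain."""
    if j > i + 1:
        return 0.0                                              # (1.54)
    if j >= 1:
        integrand = lambda t: (mu * t)**(i - j + 1) / factorial(i - j + 1) \
                              * exp(-mu * t) * a_density(t)     # (1.52)
        val, _ = integrate.quad(integrand, 0, np.inf)
        return val
    return 1.0 - sum(p(i, k) for k in range(1, i + 2))          # (1.53)

row = [p(3, j) for j in range(6)]
print(row, sum(row))               # the probabilities of a row sum to 1
```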
Using the well-known ergodicity criteria, we can verify that a necessary and
sufficient condition for the existence of the stationary state distributions
$$p_i = \lim_{t \to \infty} P\{i_t = i\}, \qquad r_i = \lim_{k \to \infty} P\{i_{t_k} = i\}, \quad i \ge 0,$$