
Probability for Data Science – Master's Degree in Data Science – A.Y. 2023/2024
Academic Staff: Francesca Collet, Paolo Dai Pra

REVIEW PROBLEM SET


Weeks 13–24

• Discrete-time Markov chains. You should be comfortable with notions such as the Markov
  property and (one-step and n-step) transition probabilities, representing a Markov chain through
  the transition probability matrix P or the transition graph, computing n-step transition
  probabilities and unconditional probabilities, identifying communicating classes and irreducible
  Markov chains, classifying the states of a Markov chain, computing stationary distributions
  and understanding as well as applying the convergence theorem, and computing absorption
  probabilities.

RP.1. Suppose that (X_n)_{n∈N} is a Markov chain with state space S = {1, 2}, transition
probability matrix
    P = \begin{pmatrix} 1/5 & 4/5 \\ 2/5 & 3/5 \end{pmatrix}
and initial distribution P(X_0 = 1) = 3/4 and P(X_0 = 2) = 1/4. Compute P(X_3 = 1 | X_1 = 2),
P(X_3 = 1 | X_2 = 1, X_1 = 1, X_0 = 2), P(X_2 = 2), P(X_0 = 1, X_2 = 1) and E(X_2).

Solution.
– The conditional probability P(X_3 = 1 | X_1 = 2) is the probability of going from state 2
  to state 1 in two steps. Recall that two-step transition probabilities are the entries of
  the matrix P^2. Since
      P^2 = \begin{pmatrix} 9/25 & 16/25 \\ 8/25 & 17/25 \end{pmatrix},
  we obtain P(X_3 = 1 | X_1 = 2) = (P^2)_{21} = 8/25.
– Due to the Markov property, we have
      P(X_3 = 1 | X_2 = 1, X_1 = 1, X_0 = 2) = P(X_3 = 1 | X_2 = 1) = P_{11} = 1/5,
  as P(X_3 = 1 | X_2 = 1) is the probability of going from state 1 to state 1 in one step.
– Observe that P(X_2 = 2) is the second component of the distribution of the chain at time
  n = 2. In other words, P(X_2 = 2) = (αP^2)_2, where α = (3/4, 1/4) is the initial distribution
  of the chain. Since
      αP^2 = (3/4, 1/4) \begin{pmatrix} 9/25 & 16/25 \\ 8/25 & 17/25 \end{pmatrix} = (7/20, 13/20),
  we obtain P(X_2 = 2) = 13/20.
– By definition of conditional probability, we have
      P(X_0 = 1, X_2 = 1) = P(X_2 = 1 | X_0 = 1) P(X_0 = 1)
  and, since P(X_2 = 1 | X_0 = 1) is the probability of going from state 1 to state 1 in two
  steps and P(X_0 = 1) is the first component of the initial distribution, we get
      P(X_0 = 1, X_2 = 1) = (P^2)_{11} · α_1 = (9/25) · (3/4) = 27/100.
– We know that P(X_2 = 1) = 7/20 and P(X_2 = 2) = 13/20. As a consequence, we compute
      E(X_2) = 1 · P(X_2 = 1) + 2 · P(X_2 = 2) = 1 · 7/20 + 2 · 13/20 = 33/20.
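A quick numerical check of these computations, as a sketch assuming NumPy is available (the matrix and initial distribution are the ones given in the problem):

import numpy as np

P = np.array([[1/5, 4/5],
              [2/5, 3/5]])            # transition matrix of RP.1
alpha = np.array([3/4, 1/4])          # initial distribution

P2 = P @ P                            # two-step transition probabilities
print(P2[1, 0])                       # P(X3 = 1 | X1 = 2) = 8/25
print(P[0, 0])                        # P(X3 = 1 | X2 = 1) = 1/5
dist2 = alpha @ P2                    # distribution of X2
print(dist2[1])                       # P(X2 = 2) = 13/20
print(P2[0, 0] * alpha[0])            # P(X0 = 1, X2 = 1) = 27/100
print(dist2 @ np.array([1, 2]))       # E(X2) = 33/20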
RP.2. Draw the transition graph of the following Markov chains. Specify the communication
classes and determine whether they are transient or recurrent. Are the Markov chains
irreducible?

    P_1 = \begin{pmatrix} 4/5 & 1/5 & 0 & 0 \\ 3/5 & 1/5 & 0 & 1/5 \\ 0 & 0 & 1/5 & 4/5 \\ 0 & 2/5 & 3/5 & 0 \end{pmatrix}

    P_2 = \begin{pmatrix} 0 & 0 & 1/2 & 1/5 & 3/10 \\ 1/5 & 1/5 & 0 & 3/5 & 0 \\ 0 & 0 & 3/5 & 0 & 2/5 \\ 2/5 & 3/5 & 0 & 0 & 0 \\ 0 & 0 & 2/5 & 0 & 3/5 \end{pmatrix}
Solution.

Chain 1. We identify the state space of the chain with the set S = {1, 2, 3, 4}. From the
transition matrix P_1 we deduce the transition graph (figure omitted). All states communicate
with each other (1 → 2 → 4 → 3 → 4 → 2 → 1), therefore the chain has only one communication
class and it is irreducible. As the Markov chain has a finite state space, the class must be
recurrent.

Chain 2. We identify the state space of the chain with the set S = {1, 2, 3, 4, 5}. From the
transition matrix P_2 we deduce the transition graph (figure omitted). Observe that states 1, 2
and 4 communicate and they form a first communication class, as they are not accessible from
any other state. Finally, states 3 and 5 also communicate. Accordingly, in this case we
encounter two communication classes: {1, 2, 4} and {3, 5}. The first class is transient and the
second is recurrent.
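The class structure can be double-checked numerically, since the communication classes are exactly the strongly connected components of the graph of positive-probability transitions. A sketch, assuming SciPy is available (P_1 can be checked the same way):

import numpy as np
from scipy.sparse.csgraph import connected_components

P2 = np.array([[0,   0,   1/2, 1/5, 3/10],
               [1/5, 1/5, 0,   3/5, 0   ],
               [0,   0,   3/5, 0,   2/5 ],
               [2/5, 3/5, 0,   0,   0   ],
               [0,   0,   2/5, 0,   3/5 ]])

# Edges of the transition graph are the entries with positive probability.
n_classes, labels = connected_components(P2 > 0, directed=True, connection='strong')
print(n_classes, labels)   # expect 2 classes, grouping {1, 2, 4} and {3, 5} (0-based: {0, 1, 3} and {2, 4})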

RP.3. Suppose that coin 1 has probability 0.7 of coming up heads, and coin 2 has probability
0.6 of coming up heads. If the coin flipped today comes up heads, then we select coin 1 to
flip tomorrow, and if it comes up tails, then we select coin 2 to flip tomorrow. If the coin
initially flipped is equally likely to be coin 1 or coin 2, then what is the probability that the
coin flipped on the third day after the initial flip is coin 1? Suppose that the coin flipped on
Monday comes up heads. What is the probability that the coin flipped on Friday of the same
week also comes up heads?

Solution. Let Xn be the coin flipped at day n. Given the previous definition of the problem,
the stochastic process (Xn )n∈N is a discrete-time Markov chain with state space S = {1, 2},
transition matrix
    P = \begin{pmatrix} 0.7 & 0.3 \\ 0.6 & 0.4 \end{pmatrix}
and initial distribution α = (1/2, 1/2). We are interested in computing the probability that the
coin flipped on the third day after the initial flip is coin 1; formally, P(X_3 = 1). Observe that
the latter probability is the first component of the distribution of the chain at time n = 3. In
other words, P(X_3 = 1) = (αP^3)_1. Since
    P^3 = \begin{pmatrix} 0.667 & 0.333 \\ 0.666 & 0.334 \end{pmatrix},
we compute
    αP^3 = (1/2, 1/2) \begin{pmatrix} 0.667 & 0.333 \\ 0.666 & 0.334 \end{pmatrix} = (0.6665, 0.3335)
and, therefore, we obtain P(X_3 = 1) = 0.6665. Moving on to the next question, suppose that
the coin flipped on Monday comes up heads. We want to compute the probability that the
coin flipped on Friday of the same week also comes up heads. Let n_0 denote the Monday
when the coin was flipped. We know that X_{n_0+1} = 1, because the flip that took place at
time n_0 came up heads. Observe that the event "the coin flipped on Friday comes up heads"
is equivalent to selecting coin 1 to flip on Saturday. The Saturday of interest corresponds to
time n_0 + 5 and so we are asked to compute P(X_{n_0+5} = 1 | X_{n_0+1} = 1) = (P^4)_{11} = 0.667, as it is
the probability of going from state 1 to state 1 in four steps.
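A numerical check of both answers, as a sketch assuming NumPy:

import numpy as np

P = np.array([[0.7, 0.3],
              [0.6, 0.4]])
alpha = np.array([0.5, 0.5])

P3 = np.linalg.matrix_power(P, 3)
print(alpha @ P3)                           # (0.6665, 0.3335), so P(X3 = 1) = 0.6665
print(np.linalg.matrix_power(P, 4)[0, 0])   # (P^4)_{11} ≈ 0.667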

RP.4. Consider a Markov chain (Xn )n∈N with state space S = {1, 2, 3, 4, 5, 6} and transition
matrix
    P = \begin{pmatrix}
        0   & 1   & 0   & 0   & 0   & 0 \\
        1/3 & 0   & 1/3 & 1/3 & 0   & 0 \\
        0   & 1/2 & 0   & 0   & 1/2 & 0 \\
        0   & 1/3 & 0   & 0   & 1/3 & 1/3 \\
        0   & 0   & 1/2 & 1/2 & 0   & 0 \\
        0   & 1   & 0   & 0   & 0   & 0
    \end{pmatrix}.
If it exists (explain why or why not!), determine lim_{n→+∞} P(X_n ∈ {1, 6} | X_0 = 5). What is
the long-run proportion of time that the chain spends in state 3?
Now, modify the transition matrix P so that the states 3, 5 and 6 become absorbing states.
Use first-step analysis to determine the probability that the chain, starting from state 1,
will eventually reach state 6.

Solution.
– We have to determine lim_{n→+∞} P(X_n ∈ {1, 6} | X_0 = 5). Notice that
      P(X_n ∈ {1, 6} | X_0 = 5) = P(X_n = 1 | X_0 = 5) + P(X_n = 6 | X_0 = 5) = (P^n)_{51} + (P^n)_{56},
  as P(X_n = 1 | X_0 = 5) (resp. P(X_n = 6 | X_0 = 5)) is the probability of going from
  state 5 to state 1 (resp. 6) in n steps. We need to understand if the limiting probabilities
  lim_{n→+∞} (P^n)_{51} and lim_{n→+∞} (P^n)_{56} exist and which are their values. We check the
  assumptions of the convergence theorem.
  Irreducibility. From the transition matrix P, we deduce the transition graph (figure
  omitted). All states communicate with each other (1 → 2 → 3 → 5 → 4 → 6 → 2 → 1),
  therefore the chain has only one communication class.
  Aperiodicity. Periodicity being a class property, it suffices to analyze one single
  state. Let us focus on state 2. Observe that (P^2)_{22} > 0 (2 → 4 → 2) and (P^3)_{22} > 0
  (2 → 4 → 6 → 2). Therefore, since gcd(2, 3) = 1, we obtain d_2 = 1 and the chain is
  aperiodic.
  Positive recurrence. As the state space is a finite set, the chain is positive recurrent.
  Since the chain is irreducible, aperiodic and positive recurrent, the convergence theorem
  holds. Therefore, the limiting probabilities are the entries of the stationary distribution
  π = (π_1, π_2, π_3, π_4, π_5, π_6), that is
      lim_{n→+∞} (P^n)_{ij} = π_j   for all i, j ∈ S.
  In particular, we have lim_{n→+∞} (P^n)_{51} = π_1 and lim_{n→+∞} (P^n)_{56} = π_6. To determine the
  stationary distribution, we solve the system of linear equations
      π = πP,   Σ_{j=1}^{6} π_j = 1,

  which is equivalent to
      π_1 = (1/3) π_2
      π_2 = π_1 + (1/2) π_3 + (1/3) π_4 + π_6
      π_3 = (1/3) π_2 + (1/2) π_5
      π_4 = (1/3) π_2 + (1/2) π_5
      π_5 = (1/2) π_3 + (1/3) π_4
      π_6 = (1/3) π_4
      π_1 + π_2 + π_3 + π_4 + π_5 + π_6 = 1
  ⇔
      π_1 = (7/12) π_4,   π_2 = (7/4) π_4,   π_3 = π_4,   π_5 = (5/6) π_4,   π_6 = (1/3) π_4,
      π_1 + π_2 + π_3 + π_4 + π_5 + π_6 = 1
  ⇔
      π_1 = 7/66,   π_2 = 7/22,   π_3 = 2/11,   π_4 = 2/11,   π_5 = 5/33,   π_6 = 2/33.
  In conclusion, it yields lim_{n→+∞} (P^n)_{51} = π_1 and lim_{n→+∞} (P^n)_{56} = π_6 and thus we obtain
  lim_{n→+∞} P(X_n ∈ {1, 6} | X_0 = 5) = 7/66 + 2/33 = 1/6.
– In the long-run the proportion of time the chain spends in state 3 is π_3 = 2/11.
– We modify the evolution of the chain so that the states 3, 5 and 6 become absorbing
states. From the transition matrix P, we derive the new transition matrix P̄, given by
 
      P̄ = \begin{pmatrix}
          0   & 1   & 0   & 0   & 0   & 0 \\
          1/3 & 0   & 1/3 & 1/3 & 0   & 0 \\
          0   & 0   & 1   & 0   & 0   & 0 \\
          0   & 1/3 & 0   & 0   & 1/3 & 1/3 \\
          0   & 0   & 0   & 0   & 1   & 0 \\
          0   & 0   & 0   & 0   & 0   & 1
      \end{pmatrix}.
  The corresponding transition graph (figure omitted) can be read off directly from P̄.
  Let P_i denote the probability that, starting from state i, the chain will eventually reach
  state 6. We want to determine P_1. By conditioning on the outcome of the first transition
  of the chain, we obtain
      P_1 = P_2
      P_2 = (1/3) P_1 + (1/3) P_3 + (1/3) P_4
      P_3 = 0
      P_4 = (1/3) P_2 + (1/3) P_5 + (1/3) P_6
      P_5 = 0
      P_6 = 1
  which simplifies to
      P_1 = P_2,   P_2 = (1/3) P_1 + (1/3) P_4,   P_4 = (1/3) P_2 + 1/3,
  from which it follows P_1 = 1/5.
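Both answers can be verified numerically; the sketch below assumes NumPy and reuses the linear systems written above.

import numpy as np

P = np.array([[0,   1,   0,   0,   0,   0  ],
              [1/3, 0,   1/3, 1/3, 0,   0  ],
              [0,   1/2, 0,   0,   1/2, 0  ],
              [0,   1/3, 0,   0,   1/3, 1/3],
              [0,   0,   1/2, 1/2, 0,   0  ],
              [0,   1,   0,   0,   0,   0  ]])

# Stationary distribution: solve pi (P - I) = 0 together with sum(pi) = 1.
A = np.vstack([(P - np.eye(6)).T, np.ones(6)])
b = np.append(np.zeros(6), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)                     # [7/66, 7/22, 2/11, 2/11, 5/33, 2/33]
print(pi[0] + pi[5])          # limiting value of P(Xn in {1,6} | X0 = 5) = 1/6

# Absorption into state 6 once states 3, 5 and 6 are made absorbing: restrict to the
# transient states {1, 2, 4} and solve (I - Q) p = r, where r holds the one-step
# probabilities of jumping straight to state 6.
T = [0, 1, 3]                 # 0-based indices of states 1, 2, 4 (their rows are unchanged in Pbar)
Q = P[np.ix_(T, T)]
r = P[T, 5]
p = np.linalg.solve(np.eye(3) - Q, r)
print(p[0])                   # P_1 = 1/5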

• Exponential distribution and homogeneous Poisson processes. You should be comfortable
  with the properties of the exponential distribution (loss of memory, convolution and minimum
  of an independent family of exponential random variables), notions such as counting process,
  independent and stationary increments, the definition of Poisson process, the characterization
  of its arrival and inter-arrival times, and merging/splitting Poisson processes.

RP.5 (difficult). Consider a post office that is run by two clerks. Suppose that when
Mr. Smith enters the system he discovers that Mr. Jones is being served by one of the clerks,
and Mr. Brown by the other. Mr. Smith is told that his service will begin as soon as either
Jones or Brown leaves. Suppose that the time that clerk i = 1, 2 spends with a customer is
exponentially distributed with rate λ_i. Show that
    P(Mr. Smith is not the last one to leave the office) = (λ_1/(λ_1 + λ_2))^2 + (λ_2/(λ_1 + λ_2))^2.

Solution. For the sake’s of brevity, let us refer to Mr. Smith (resp. Mr. Jones and Mr. Brown)
as S (resp. J and B). We want to compute the probability that S is not the last customer to
leave the office. For this to happen, we need that either J or B finishes being served after S.
Thus, in words, we have

P (S is not last) = P (S is not last|J finishes after B)P (J finishes after B)


+ P (S is not last|B finishes after J)P (B finishes after J)

= P (J finishes after S|J finishes after B)P (J finishes after B)


+ P (B finishes after S|B finishes after J)P (B finishes after J),

where the first equality follows from the total probability law. We have to determine the above
probabilities. We suppose J is being served by clerk 1 and B by clerk 2, and we introduce the
following random variables

TS = service time of S

TJ = service time of J

TB = service time of B

RJ = remaining service time of J given that B is gone

RB = remaining service time of B given that J is gone.

From the problem description and our assumption it follows that TJ ∼ Exp(λ1 ) and
TB ∼ Exp(λ2 ). By the memoryless property of the exponential distribution (key fact!), we
can also conclude RJ ∼ Exp(λ1 ) and RB ∼ Exp(λ2 ) (see lecture 28). Moreover, observe that
TS ∼ Exp(λ1 ) if J is the first customer to leave the post office, while TS ∼ Exp(λ2 ) if B is the
first customer to leave the post office. Therefore, we obtain

P(S is not last) = P(T_S < R_J | T_B < T_J) P(T_B < T_J) + P(T_S < R_B | T_J < T_B) P(T_J < T_B)
                 = [P(Exp(λ_2) < Exp(λ_1))]^2 + [P(Exp(λ_1) < Exp(λ_2))]^2
                 = (λ_2/(λ_1 + λ_2))^2 + (λ_1/(λ_1 + λ_2))^2,

as desired.
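A Monte Carlo sanity check of this identity, as a sketch assuming NumPy; the rates λ_1 = 1 and λ_2 = 2 are arbitrary choices, not given in the problem. By memorylessness, the residual service time of the customer still in service when the first one leaves is again exponential with that clerk's rate, which is exactly the distribution of |T_J − T_B|.

import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 1.0, 2.0                    # arbitrary service rates for clerks 1 and 2
n = 10**6

TJ = rng.exponential(1/lam1, n)          # Jones, served by clerk 1
TB = rng.exponential(1/lam2, n)          # Brown, served by clerk 2
# Smith takes whichever clerk frees up first; his service time has that clerk's rate.
TS = np.where(TB < TJ, rng.exponential(1/lam2, n), rng.exponential(1/lam1, n))
remaining = np.abs(TJ - TB)              # residual service time of the customer still inside
smith_not_last = TS < remaining

print(smith_not_last.mean())
print((lam1/(lam1 + lam2))**2 + (lam2/(lam1 + lam2))**2)   # ≈ 5/9 ≈ 0.556 for these rates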

RP.6. Suppose that calls to an answering service arrive according to a Poisson process with
rate 4 per hour. What is the probability that fewer than 2 calls came in the first hour?
Suppose that 6 calls arrive in the first hour; what is the probability that there will be fewer
than 2 in the second hour? Suppose that the operator gets to take a break after she has
answered 10 calls. How long are her average work periods?

Solution. Let N(t) be the number of calls arriving by time t (measured in hours). The counting
process (N(t))_{t≥0} is a Poisson process with rate 4.
– The number of calls arriving in the first hour is N(1). Since N(1) ∼ Po(4) (Poissonianity),
  we compute P(N(1) < 2) = P(N(1) = 0) + P(N(1) = 1) = e^{-4} + 4e^{-4} = 5e^{-4}.
– The number of calls arriving in the second hour is N(2) − N(1). Since N(1) ∼ Po(4) and
  N(2) − N(1) ∼ N(1) ∼ Po(4) (stationary increments and Poissonianity) and they are
  independent, we obtain P(N(2) − N(1) < 2 | N(1) = 6) = P(N(1) < 2) = 5e^{-4}.
– Let T_10 be the arrival time of the tenth call. We know that T_10 ∼ Γ(10, 4). Therefore,
  we get E(T_10) = 10/4. As a consequence, on average, the operator's work periods are 2.5
  hours long.
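These values can be checked with SciPy; a sketch:

from scipy.stats import poisson, gamma

rate = 4
print(poisson.cdf(1, rate))              # P(N(1) < 2) = 5 e^{-4} ≈ 0.0916
# By independent increments, conditioning on the first hour changes nothing:
print(poisson.cdf(1, rate))              # P(N(2) - N(1) < 2 | N(1) = 6), same value
print(gamma.mean(a=10, scale=1/rate))    # E(T_10) = 2.5 hours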

RP.7. A policewoman on the evening shift writes a Poisson mean 6 number of tickets per
hour; 2/3 of these are for speeding and 1/3 of these are for DWI (driving while intoxicated).
What is the probability that between 02:00 and 03:00 she writes 5 tickets for speeding and
1 ticket for DWI?
Let A be the event that she writes no tickets between 01:00 and 01:30 and let K be the number
of tickets she writes between 01:00 and 02:00. Which is larger, P(A) or P(A | K = 5)?
Do not just answer yes or no, compute both probabilities.

Solution. Let N(t) (resp. N_1(t) and N_2(t)) be the number of tickets (resp. tickets for speeding
and tickets for DWI) written by the policewoman by time t (measured in hours). The counting
processes (N(t))_{t≥0}, (N_1(t))_{t≥0} and (N_2(t))_{t≥0} are Poisson processes with respective rates 6,
4 (= 6 · 2/3, thinning/splitting) and 2 (= 6 · 1/3, thinning/splitting). Moreover, the processes
(N_1(t))_{t≥0} and (N_2(t))_{t≥0} are independent.
– The number of tickets for speeding written by the policewoman between 02:00 and
  03:00 is N_1(1); the number of tickets for DWI written in the same time interval is N_2(1).
  Since N_1(1) ∼ Po(4) (Poissonianity), N_2(1) ∼ Po(2) (Poissonianity) and they are independent
  random variables, we obtain
      P(N_1(1) = 5, N_2(1) = 1) = P(N_1(1) = 5) P(N_2(1) = 1) = (4^5/5!) e^{-4} · 2 e^{-2} = (2^{11}/5!) e^{-6}.

– Define the event A = {the policewoman writes no tickets between 01:00 and 01:30}.
  Observe that the following equivalences between events hold true: A = {N(1/2) = 0} and
  {K = 5} = {N(1) = 5}. We obtain
      P(A) = P(N(1/2) = 0) = e^{-3} ≈ 0.05                            (as N(1/2) ∼ Po(3))
  and
      P(A | K = 5) = P(N(1/2) = 0 | N(1) = 5)
                   = P(N(1/2) = 0, N(1) = 5) / P(N(1) = 5)            (by definition of conditional probability)
                   = P(N(1/2) = 0, N(1) − N(1/2) = 5) / P(N(1) = 5)
                   = P(N(1/2) = 0) P(N(1/2) = 5) / P(N(1) = 5)        (by independent and stationary increments)
                   = 1/32 ≈ 0.03                                      (since N(1/2) ∼ Po(3) and N(1) ∼ Po(6)),
  so P(A) is the larger of the two.
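A quick check of both probabilities, as a sketch assuming SciPy:

from scipy.stats import poisson

print(poisson.pmf(5, 4) * poisson.pmf(1, 2))               # P(N1(1)=5, N2(1)=1) = (2^11/5!) e^{-6} ≈ 0.042
print(poisson.pmf(0, 3))                                   # P(A) = e^{-3} ≈ 0.050
print(poisson.pmf(0, 3) * poisson.pmf(5, 3) / poisson.pmf(5, 6))   # P(A | K = 5) = 1/32 ≈ 0.031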

RP.8. A light bulb has a lifetime that is exponential with a mean of 200 days. When it burns
out a janitor replaces it immediately. In addition there is a handyman who comes at times of
a Poisson process at rate 0.01 (per day) and replaces the bulb as “preventive maintenance”.
How often is the bulb replaced?

Solution. Both the janitor and the handyman come to replace the light bulb according to a
Poisson process. More precisely, the janitor's arrivals follow a Poisson process with rate 0.005
(per day) and the handyman's arrivals follow a Poisson process with rate 0.01 (per day). As
a consequence, the number of times the bulb is replaced evolves as a Poisson process with rate
0.015 (= 0.005 + 0.01, superposition/merging) (per day). The inter-arrival times of the latter
have exponential distribution with parameter 0.015 and so, on average, the bulb is replaced
every 1/0.015 ≈ 67 days.
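A small simulation of one replacement cycle, as a sketch assuming NumPy; by memorylessness, the time until the handyman's next visit is Exp(0.01) from any instant.

import numpy as np

rng = np.random.default_rng(1)
n = 10**6
failure = rng.exponential(200, n)        # bulb lifetime, mean 200 days (rate 0.005)
preventive = rng.exponential(100, n)     # time to the handyman's next visit (rate 0.01)
cycle = np.minimum(failure, preventive)  # time until the bulb is replaced, by either cause
print(cycle.mean())                      # ≈ 1/0.015 ≈ 66.7 days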

• Continuous-time Markov chains. You should be comfortable with notions such as the
  Markov property and transition probability functions, holding times and the discrete-time
  embedded jump chain, and the generator matrix, constructing a continuous-time Markov chain
  through holding times and jump chain, determining the generator matrix, and computing
  stationary distributions.

RP.9. At a hairdresser’s shop only one customer can be served at a time and there is only
one chair for the next customer to wait. A potential customer leaves if she/he arrives at
the shop when already two customers are inside. The interarrival times and service times
of the customers are independent identically distributed random variables T_a and T_s having
exponential distribution with parameters λ and µ, respectively. We are interested in the
number of customers in the shop.

(a) Define an appropriate continuous-time Markov chain for this model; give the holding
time parameters and the transition matrix of the embedded jump chain.
(b) Determine the generator matrix of the continuous-time Markov chain.
(c) Can we analyze this stochastic process as a birth and death process? If so, what are the
parameters?
(d) Determine the stationary distribution of the continuous-time Markov chain using all the
methods you know.

Solution.

(a) Let
X(t) = number of customers in the shop at time t
denote the state of the system at time t. The stochastic process (X(t))t≥0 is a continuous-
time Markov chain on S = {0, 1, 2}. We have to determine the holding time parameters
and the transition matrix of the embedded jump chain.

Holding time parameters:

v0 = λ (the waiting time for an arrival is Ta ∼ Exp(λ))

v1 = λ + µ (the waiting time for either a departure or an arrival is min{Ts , Ta } ∼ Exp(λ + µ))

v2 = µ (the waiting time for a departure is the service time Ts ∼ Exp(µ))

    Jump chain transition matrix (non-zero entries):
        P_{01} = 1,   P_{10} = P(T_s < T_a) = µ/(λ + µ),   P_{12} = P(T_a < T_s) = λ/(λ + µ),   P_{21} = 1,
    that is
        P = \begin{pmatrix} 0 & 1 & 0 \\ µ/(λ+µ) & 0 & λ/(λ+µ) \\ 0 & 1 & 0 \end{pmatrix}.
(b) Since the generator matrix G = (g_{ij})_{i,j∈S} has diagonal entries g_{ii} = −v_i and off-diagonal
    entries g_{ij} = v_i P_{ij} (with i ≠ j), we obtain
        G = \begin{pmatrix} −λ & λ & 0 \\ µ & −(λ + µ) & λ \\ 0 & µ & −µ \end{pmatrix}.

(c) This stochastic process is a birth and death process as (i) its state space is a subset of
N0 ; (ii) the transitions with positive probability are of only two types: arrivals, which
increase the state variable by one, and departures, which decrease the state variable
by one; (iii) the waiting times for an arrival or a departure to occur are independent
and exponentially distributed with parameters that may depend only on the number of
persons in the system.
In particular, (X(t))t≥0 is a birth and death process on S with birth rates λ0 = λ1 = λ,
as clients arrive according to a Poisson process with intensity λ, and with death rates
µ1 = µ2 = µ, as the service times are Exp(µ).
(d) Method 1 – via the stationary distribution of the embedded jump chain. To determine the
stationary distribution for the continuous-time Markov chain (X(t))t≥0 , we find first the
stationary distribution of the embedded chain and then we apply the theorem seen in
lecture 39 to get the stationary distribution for the continuous-time process.
Observe that the jump chain is irreducible (0 → 1 → 2 → 1 → 0) and, moreover, since
the state space is a finite set, it is positive recurrent. Therefore, there exists a unique
stationary distribution π̃ for the embedded chain. To obtain it, we solve the system of
linear equations
    π̃ = π̃P,   π̃_0 + π̃_1 + π̃_2 = 1,
which is equivalent to
    π̃_0 = (µ/(λ+µ)) π̃_1,   π̃_1 = π̃_0 + π̃_2,   π̃_2 = (λ/(λ+µ)) π̃_1,   π̃_0 + π̃_1 + π̃_2 = 1,
and gives π̃ = (µ/(2(λ+µ)), 1/2, λ/(2(λ+µ))). As a consequence, since we calculate
    Σ_{i=0}^{2} π̃_i/v_i = (µ^2 + λµ + λ^2)/(2λµ(λ + µ)),
we have
    π = (µ^2/(µ^2 + λµ + λ^2),  λµ/(µ^2 + λµ + λ^2),  λ^2/(µ^2 + λµ + λ^2)).

Method 2 – via the generator matrix. To determine the stationary distribution π, we
solve πG = 0 together with the normalization condition. We obtain
    −λ π_0 + µ π_1 = 0
    λ π_0 − (λ + µ) π_1 + µ π_2 = 0
    λ π_1 − µ π_2 = 0
    π_0 + π_1 + π_2 = 1
⇔  λ π_0 = µ π_1,   λ π_1 = µ π_2,   π_0 + π_1 + π_2 = 1
⇔  π_0 = µ^2/(µ^2 + λµ + λ^2),   π_1 = λµ/(µ^2 + λµ + λ^2),   π_2 = λ^2/(µ^2 + λµ + λ^2).

Method 3 – via the characterization as birth and death process. We use directly the
expression for the stationary distribution of a birth and death process we derived in class
(see lecture 40). We obtain
    π_0 = 1/(1 + λ_0/µ_1 + λ_0λ_1/(µ_1µ_2)) = 1/(1 + λ/µ + λ^2/µ^2) = µ^2/(µ^2 + λµ + λ^2)
    π_1 = (λ_0/µ_1)/(1 + λ_0/µ_1 + λ_0λ_1/(µ_1µ_2)) = (λ/µ)/(1 + λ/µ + λ^2/µ^2) = λµ/(µ^2 + λµ + λ^2)
    π_2 = (λ_0λ_1/(µ_1µ_2))/(1 + λ_0/µ_1 + λ_0λ_1/(µ_1µ_2)) = (λ^2/µ^2)/(1 + λ/µ + λ^2/µ^2) = λ^2/(µ^2 + λµ + λ^2).
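The stationary distribution can also be cross-checked symbolically from the generator, as a sketch assuming SymPy is available:

import sympy as sp

lam, mu = sp.symbols('lambda mu', positive=True)
p0, p1, p2 = sp.symbols('pi0 pi1 pi2')

G = sp.Matrix([[-lam,  lam,         0  ],
               [ mu,  -(lam + mu),  lam],
               [ 0,    mu,         -mu ]])
pi = sp.Matrix([[p0, p1, p2]])

# Stationary distribution of the CTMC: pi G = 0 together with pi summing to 1.
eqs = list(pi * G) + [p0 + p1 + p2 - 1]
sol = sp.solve(eqs, [p0, p1, p2], dict=True)[0]
print([sp.simplify(sol[s]) for s in (p0, p1, p2)])
# entries proportional to mu**2, lambda*mu, lambda**2, divided by mu**2 + lambda*mu + lambda**2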

RP.10. At a hairdresser’s shop only one customer can be served at a time and there is only
one chair for the next customer to wait. A potential customer leaves if she arrives at the
shop when already two customers are inside. Customers arrive according to a Poisson process
with rate λ and they are served at an exponential rate µ. However, the waiting customer is
typically impatient and will only wait an exponentially distributed time with parameter γ; if
her service has not yet begun by this time then she will abandon the shop without getting a
haircut.
(a) Define an appropriate continuous-time Markov chain to describe the number of customers
in the shop; give the holding time parameters and the transition probability matrix of
the embedded jump chain.
(b) Determine the average number of customers in the shop in the long-run.

Solution.
(a) Let
X(t) = number of customers in the shop at time t
denote the state of the system at time t. The stochastic process (X(t))t≥0 is a continuous-
time Markov chain on S = {0, 1, 2}. We have to determine the holding time parameters
and the transition matrix of the embedded jump chain.
    Holding time parameters:
        v_0 = λ       (the waiting time for a customer to enter the shop is an exponential random
                       variable with parameter λ)
        v_1 = µ + λ   (the waiting time for either the served customer to have her hair cut or for
                       a new customer to enter the shop is the minimum between two independent
                       exponential random variables with respective parameters µ and λ)
        v_2 = µ + γ   (the waiting time for either the served customer to have her hair cut or for
                       the waiting customer to lose her patience is the minimum between two
                       independent exponential random variables with respective parameters µ and γ)

    Jump chain transition matrix (non-zero entries):
        P_{01} = 1,   P_{10} = P(Exp(µ) < Exp(λ)) = µ/(µ + λ),   P_{12} = P(Exp(λ) < Exp(µ)) = λ/(µ + λ),   P_{21} = 1,
    that is
        P = \begin{pmatrix} 0 & 1 & 0 \\ µ/(µ+λ) & 0 & λ/(µ+λ) \\ 0 & 1 & 0 \end{pmatrix}.
(b) In the long-run, the average number of customers in the shop is π_1 + 2π_2, where π_1 and
    π_2 are the second and third entries of π = (π_0, π_1, π_2), the stationary distribution of
    the continuous-time Markov chain (X(t))_{t≥0}. To compute the limiting probabilities, we
    determine the generator matrix G = (g_{ij})_{i,j∈S} and we use the condition πG = 0, together
    with the normalization condition. Since the generator has diagonal entries g_{ii} = −v_i and
    off-diagonal entries g_{ij} = v_i P_{ij} (with i ≠ j), it yields
        G = \begin{pmatrix} −λ & λ & 0 \\ µ & −(λ + µ) & λ \\ 0 & µ + γ & −(µ + γ) \end{pmatrix}
    and we obtain
        −λ π_0 + µ π_1 = 0
        λ π_0 − (λ + µ) π_1 + (µ + γ) π_2 = 0
        λ π_1 − (µ + γ) π_2 = 0
        π_0 + π_1 + π_2 = 1
    ⇔  π_0 = (µ/λ) π_1,   π_2 = (λ/(µ + γ)) π_1,   π_0 + π_1 + π_2 = 1
    ⇔  π_0 = µ(µ + γ)/(λ^2 + (µ + λ)(µ + γ)),   π_1 = λ(µ + γ)/(λ^2 + (µ + λ)(µ + γ)),   π_2 = λ^2/(λ^2 + (µ + λ)(µ + γ)).
    In conclusion, in the long-run, the average number of customers in the shop is
        π_1 + 2π_2 = (λ(µ + γ) + 2λ^2)/(λ^2 + (µ + λ)(µ + γ)).

    Remark. Alternatively, to determine the stationary distribution π of the continuous-time
    Markov chain (X(t))_{t≥0}, we could have either characterized the stationary distribution
    of the embedded jump chain first and then obtained π by applying the theorem seen
    in lecture 39 or, since (X(t))_{t≥0} is a birth and death process, used directly the general
    expression for the stationary distribution of this type of process derived in lecture 40.
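The long-run average can be cross-checked symbolically, as a sketch assuming SymPy and following the same approach as in RP.9:

import sympy as sp

lam, mu, gam = sp.symbols('lambda mu gamma', positive=True)
p0, p1, p2 = sp.symbols('pi0 pi1 pi2')

G = sp.Matrix([[-lam,  lam,         0          ],
               [ mu,  -(lam + mu),  lam        ],
               [ 0,    mu + gam,   -(mu + gam) ]])
pi = sp.Matrix([[p0, p1, p2]])

sol = sp.solve(list(pi * G) + [p0 + p1 + p2 - 1], [p0, p1, p2], dict=True)[0]
avg = sp.simplify(sol[p1] + 2*sol[p2])
print(avg)   # equal to (lambda*(mu+gamma) + 2*lambda**2)/(lambda**2 + (mu+lambda)*(mu+gamma)), possibly expanded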
