
IN CLASS PROBLEMS: SET 2

1. A bus in a mass transit system is operating on a continuous route with intermediate stops. The arrival of the bus at a stop is classified into one of three states, namely, (1) early arrival, (2) on-time arrival and (3) late arrival. Suppose that the successive states form a Markov chain with transition matrix

        | 0.5  0.4  0.1 |
    P = | 0.2  0.5  0.3 |
        | 0.1  0.2  0.7 |

Over a long period of time, what fraction of stops can be expected to be late?
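The long-run fraction is the "late" component of the stationary distribution. As a quick numerical sketch (using NumPy, not part of the original problem set), the system pi P = pi with sum(pi) = 1 can be solved directly:

```python
import numpy as np

# Transition matrix from Problem 1 (states: early, on-time, late).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])

# Solve pi P = pi together with sum(pi) = 1 by replacing the last
# balance equation with the normalization constraint.
A = np.vstack([(P.T - np.eye(3))[:2], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi)  # pi[2] is the long-run fraction of late stops (0.425)
```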

2. A Markov chain has state space S = {0, 1, 2, 3, 4, 5} with transition matrix

        | a1  a2  a3  a4  a5  a6 |
        |  1   0   0   0   0   0 |
    P = |  0   1   0   0   0   0 |
        |  0   0   1   0   0   0 |
        |  0   0   0   1   0   0 |
        |  0   0   0   0   1   0 |

where a1, . . . , a6 > 0 and a1 + · · · + a6 = 1. Find the limiting probability of state 0.
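From any state i >= 1 the chain steps deterministically down to i - 1, and from 0 it jumps to state j with probability a_{j+1}, so the expected return time to 0 is 1 + sum_j j * a_{j+1} and the limiting probability is its reciprocal. A numerical sketch (the uniform choice of the a_i below is hypothetical; any positive values summing to 1 work):

```python
import numpy as np

# Hypothetical choice of the a_i (any positive values summing to 1 work).
a = np.full(6, 1 / 6)

P = np.zeros((6, 6))
P[0] = a                   # from state 0, move to state j w.p. a_{j+1}
for i in range(1, 6):
    P[i, i - 1] = 1.0      # from state i >= 1, step down to i - 1

# Rows of P^n converge to the limiting distribution (a_1 > 0 gives a
# self-loop at state 0, so the chain is aperiodic).
pi0 = np.linalg.matrix_power(P, 2000)[0, 0]
expected = 1 / (1 + sum(j * a[j] for j in range(6)))  # renewal argument
print(pi0, expected)
```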

3. Consider a Markov chain with state space S = {0, 1, . . . , N} and transition matrix

        |  a0    a1    a2   · · ·   aN   |
        |  aN    a0    a1   · · ·  aN-1  |
    P = | aN-1   aN    a0   · · ·  aN-2  |
        |  ...                     ...   |
        |  a1    a2    a3   · · ·   a0   |

where 0 < a0 < 1 and a0 + a1 + · · · + aN = 1. Determine the limiting distribution.
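Each row and each column of this circulant matrix is a permutation of the a_k, so P is doubly stochastic and the limit should be uniform. A numerical sanity check (a sketch with hypothetical random a_k values):

```python
import numpy as np

N = 3
rng = np.random.default_rng(0)
a = rng.random(N + 1)
a /= a.sum()               # hypothetical a_k > 0 summing to 1 (a_0 > 0)

# Circulant transition matrix: P[i, j] = a_{(j - i) mod (N + 1)}.
P = np.array([[a[(j - i) % (N + 1)] for j in range(N + 1)]
              for i in range(N + 1)])

row = np.linalg.matrix_power(P, 500)[0]
print(row)                 # every entry tends to 1 / (N + 1)
```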

4. Let P be a transition matrix and Π be the matrix whose every row is identical with π, the corresponding stationary distribution. Define Q = P − Π.
Show that P^n = Π + Q^n for any n.
When

        | 0.5   0.5   0    |
    P = | 0.25  0.5   0.25 |
        | 0     0.5   0.5  |

compute Q^n and hence P^n.
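The identity follows by induction from ΠP = PΠ = Π = Π², which gives Q^n = P^n − Π. It can also be checked numerically for the given P; a sketch using NumPy:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# Stationary distribution: solve pi P = pi with sum(pi) = 1.
A = np.vstack([(P.T - np.eye(3))[:2], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

Pi = np.tile(pi, (3, 1))   # every row of Pi equals pi
Q = P - Pi

# Check P^n = Pi + Q^n for the first few n.
for n in range(1, 8):
    assert np.allclose(np.linalg.matrix_power(P, n),
                       Pi + np.linalg.matrix_power(Q, n))
print(pi)  # [0.25 0.5 0.25]
```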

5. A Markov chain has state space S = {0, 1, 2} and transition probability matrix given by

        | 0.4  0.4  0.2 |
    P = | 0.6  0.2  0.2 |
        | 0.4  0.2  0.4 |

After a long period of time, you observe the chain and see that it is in state 1. What is the conditional probability that the previous state was state 2? That is, find

    lim_{n→∞} P(X_{n−1} = 2 | X_n = 1).
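In stationarity, Bayes' rule gives the transition probability of the time-reversed chain: the limit equals π₂ P(2, 1) / π₁. A numerical sketch with NumPy:

```python
import numpy as np

P = np.array([[0.4, 0.4, 0.2],
              [0.6, 0.2, 0.2],
              [0.4, 0.2, 0.4]])

# Stationary distribution via pi P = pi, sum(pi) = 1.
A = np.vstack([(P.T - np.eye(3))[:2], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

# Reversed-chain probability: P(X_{n-1}=2 | X_n=1) -> pi[2] P[2,1] / pi[1].
ans = pi[2] * P[2, 1] / pi[1]
print(ans)  # 6/35, about 0.1714
```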

6. A system consists of two components connected in parallel, i.e., the system works when at least one component is functional. Suppose that when both components are functioning, each component fails independently with probability α. When one component has failed, the other component fails with probability β > α. The repair time of a failed component is geometrically distributed with parameter p. If one component is being repaired and there is another failed component, the latter is repaired only after the repair of the first is completed.
Describe the process by a Markov chain and determine the transition matrix.
Find the fraction of time the system is operating.
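One possible formalization, offered only as a sketch: track the number of failed components, S = {0, 1, 2}. The within-step timing (failures and repair completion occurring independently in the same step) and the numerical values of α, β, p below are assumptions for illustration, not fixed by the problem:

```python
import numpy as np

# Hypothetical parameter values for illustration.
alpha, beta, p = 0.1, 0.3, 0.5

# States: number of failed components (0, 1, 2). Assumed within-step
# model: failures and repair completion occur independently each step.
P = np.array([
    [(1 - alpha) ** 2, 2 * alpha * (1 - alpha), alpha ** 2],
    [(1 - beta) * p, (1 - beta) * (1 - p) + beta * p, beta * (1 - p)],
    [0.0, p, 1 - p],
])

# Stationary distribution; the system operates in states 0 and 1.
A = np.vstack([(P.T - np.eye(3))[:2], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi[0] + pi[1])   # long-run fraction of time the system operates
```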
