CS433 Modeling and Simulation
Lecture 06 – Part 03: Discrete Markov Chains
Dr. Anis Koubâa, 12 Apr 2009
Al-Imam Mohammad Ibn Saud University


TRANSCRIPT

Page 1: CS433 Modeling and Simulation Lecture 06 –  Part 03  Discrete Markov Chains

CS433 Modeling and Simulation

Lecture 06 – Part 03: Discrete Markov Chains

Dr. Anis Koubâa, 12 Apr 2009

Al-Imam Mohammad Ibn Saud University

Page 2

Classification of States: 1

A path is a sequence of states in which each transition has a positive probability of occurring.

State j is reachable (or accessible) from state i (i → j) if there is a path from i to j; equivalently, $P_{ij}^{(n)} > 0$ for some $n \geq 0$, i.e., the probability of going from i to j in n steps is greater than zero.

States i and j communicate (i ↔ j) if i is reachable from j and j is reachable from i. (Note: a state i always communicates with itself.)

A set of states C is a communicating class if every pair of states in C communicates, and no state in C communicates with any state outside C.
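Reachability and communication can be checked mechanically from the transition matrix: j is reachable from i exactly when some power of P has a positive (i, j) entry. A minimal sketch in Python/NumPy (the three-state chain below is an illustrative assumption, not one of the slides' examples):

```python
import numpy as np

def reachable(P, i, j):
    """True if P_ij^(n) > 0 for some n >= 0, i.e. j is reachable from i."""
    n = P.shape[0]
    R = ((np.eye(n) + P) > 0).astype(int)   # paths of length 0 or 1
    # Squaring the boolean reachability matrix doubles the covered path
    # length, so a few squarings give the full transitive closure.
    for _ in range(n.bit_length()):
        R = ((R @ R) > 0).astype(int)
    return bool(R[i, j])

def communicate(P, i, j):
    """i <-> j: each state is reachable from the other."""
    return reachable(P, i, j) and reachable(P, j, i)

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])   # state 2 is absorbing
print(communicate(P, 0, 1))       # True
print(communicate(P, 0, 2))       # False: 2 is reachable from 0, not vice versa
```

Here {0, 1} is a communicating class, while the absorbing state 2 forms its own class.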

Page 3

Classification of States: 1

A state i is said to be an absorbing state if $p_{ii} = 1$.

A subset S of the state space X is a closed set if no state outside of S is reachable from any state in S (like an absorbing state, but with multiple states); this means $p_{ij} = 0$ for every $i \in S$ and $j \notin S$.

A closed set S of states is irreducible if every state $j \in S$ is reachable from every state $i \in S$.

A Markov chain is said to be irreducible if its state space X is irreducible.

Page 4

Example

[Figure: "Irreducible Markov Chain": states 0, 1, 2 with transitions p00, p01, p10, p12, p21, p22.]

[Figure: "Reducible Markov Chain": states 0, 1, 2, 3, with an absorbing state and a closed irreducible set (states 2 and 3, transitions p22, p23, p32, p33) marked.]

Page 5

Classification of States: 2

State i is a transient state if there exists a state j such that j is reachable from i but i is not reachable from j.

A state that is not transient is recurrent. There are two types of recurrent states:
1. Positive recurrent: the expected time to return to the state is finite.
2. Null recurrent (less common): the expected time to return to the state is infinite (this requires an infinite number of states).

A state i is periodic with period k > 1 if k is the smallest number such that every path leading from state i back to state i has a number of transitions that is a multiple of k.

A state is aperiodic if it has period k = 1.

A state is ergodic if it is positive recurrent and aperiodic.

Page 6

Classification of States: 2

Example from the book Introduction to Probability: Lecture Notes, D. Bertsekas and J. Tsitsiklis – Fall 200

Page 7

Transient and Recurrent States

We define the hitting time $T_{ij}$ as the random variable representing the time to go from state i to state j:

$$T_{ij} = \min\{k > 0 : X_k = j \mid X_0 = i\}$$

Here k is the number of transitions in a path from i to j; $T_{ij}$ is the minimum number of transitions in a path from i to j.

We define the recurrence time $T_{ii}$ as the first time that the Markov chain returns to state i:

$$T_{ii} = \min\{k > 0 : X_k = i \mid X_0 = i\}$$

The probability that the first recurrence to state i occurs at the n-th step is

$$f_{ii}^{(n)} = \Pr(T_{ii} = n) = \Pr(X_n = i,\; X_{n-1} \neq i,\; \ldots,\; X_1 \neq i \mid X_0 = i)$$

The probability of recurrence to state i is

$$f_i = \Pr(T_{ii} < \infty) = \sum_{n=1}^{\infty} f_{ii}^{(n)}$$

Page 8

Transient and Recurrent States

The mean recurrence time is

$$M_i = E[T_{ii} \mid X_0 = i] = \sum_{n=1}^{\infty} n\, f_{ii}^{(n)}$$

A state is recurrent if $f_i = 1$:
- If $M_i < \infty$, it is said to be positive recurrent.
- If $M_i = \infty$, it is said to be null recurrent.

A state is transient if $f_i < 1$. In that case

$$1 - f_i = \Pr(T_{ii} = \infty)$$

is the probability of never returning to state i.
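The recurrence probability $f_i$ and the mean recurrence time $M_i$ can be estimated by simulation: run the chain from state i, record the step at which it first returns, and average. A minimal Monte Carlo sketch (the two-state chain and its probabilities are illustrative assumptions):

```python
import random

# Illustrative two-state chain (an assumption, not from the slides).
P = [[0.3, 0.7],
     [0.4, 0.6]]

def recurrence_time(i, max_steps=10_000):
    """Sample T_ii: number of steps until first return to i, or None."""
    state, t = i, 0
    while t < max_steps:
        state = 0 if random.random() < P[state][0] else 1
        t += 1
        if state == i:
            return t
    return None   # treated as "no return" within the horizon

random.seed(1)
samples = [recurrence_time(0) for _ in range(5_000)]
returned = [t for t in samples if t is not None]
f0 = len(returned) / len(samples)     # estimate of f_0 (recurrence prob.)
M0 = sum(returned) / len(returned)    # estimate of M_0 = E[T_00]
print(f0, M0)   # f0 ~ 1.0 (recurrent); exact M_0 = 11/4 = 2.75 here
```

Both states of this chain are positive recurrent, so the estimate of $f_0$ comes out as 1 and $M_0$ settles near its exact value 2.75.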

Page 9

Transient and Recurrent States

We define $N_i$ as the number of visits to state i given $X_0 = i$:

$$N_i = \sum_{n=0}^{\infty} I\{X_n = i\}, \qquad I\{X_n = i\} = \begin{cases} 1 & \text{if } X_n = i \\ 0 & \text{if } X_n \neq i \end{cases}$$

Theorem: If $N_i$ is the number of visits to state i given $X_0 = i$, then

$$E[N_i \mid X_0 = i] = \sum_{n=0}^{\infty} P_{ii}^{(n)} = \frac{1}{1 - f_i}$$

where $P_{ii}^{(n)}$ is the transition probability from state i to state i after n steps.

Proof sketch: writing $N_i$ as the sum of indicators above and taking expectations gives $E[N_i \mid X_0 = i] = \sum_n \Pr(X_n = i \mid X_0 = i) = \sum_n P_{ii}^{(n)}$.
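The theorem can be checked numerically: for a transient state, summing the n-step return probabilities $P_{ii}^{(n)}$ should give $1/(1 - f_i)$. A sketch with an assumed two-state chain in which state 0 is transient with $f_0 = 0.5$:

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.0, 1.0]])   # state 1 absorbing, so f_0 = p_00 = 0.5

# E[N_0 | X_0 = 0] = sum_{n>=0} P_00^(n), truncated at a large horizon.
Pn = np.eye(2)
expected_visits = 0.0
for _ in range(200):
    expected_visits += Pn[0, 0]
    Pn = Pn @ P

print(expected_visits)   # -> 2.0, matching 1 / (1 - f_0) = 1 / 0.5
```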

Page 10

Transient and Recurrent States

The probability of reaching state j for the first time in n steps, starting from $X_0 = i$, is

$$f_{ij}^{(n)} = \Pr(T_{ij} = n) = \Pr(X_n = j,\; X_{n-1} \neq j,\; \ldots,\; X_1 \neq j \mid X_0 = i)$$

The probability of ever reaching j starting from state i is

$$f_{ij} = \Pr(T_{ij} < \infty) = \sum_{n=1}^{\infty} f_{ij}^{(n)}$$

Page 11

Three Theorems

If a Markov chain has a finite state space, then at least one of its states is recurrent.

If state i is recurrent and state j is reachable from state i, then state j is also recurrent.

If S is a finite closed irreducible set of states, then every state in S is recurrent.

Page 12

Positive and Null Recurrent States

Let $M_i$ be the mean recurrence time of state i:

$$M_i = E[T_{ii}] = \sum_{k=1}^{\infty} k \Pr(T_{ii} = k)$$

A state is said to be positive recurrent if $M_i < \infty$. If $M_i = \infty$, the state is said to be null recurrent.

Three Theorems:

- If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.
- If S is a closed irreducible set of states, then either every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.
- If S is a finite closed irreducible set of states, then every state in S is positive recurrent.

Page 13

Example

[Figure: the reducible four-state chain of page 4 again. States 0 and 1 are marked as transient states; states 2 and 3 form a closed irreducible set of positive recurrent states.]

Page 14

Periodic and Aperiodic States

Suppose that the structure of the Markov chain is such that state i is visited only after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d.

If no such integer exists (i.e., d = 1), the state is called aperiodic.

Example:

[Figure: chain 0 → 1 with probability 1; 1 → 0 and 1 → 2 with probability 0.5 each; 2 → 1 with probability 1. Each state is periodic with period d = 2.]

$$P = \begin{pmatrix} 0 & 1 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \end{pmatrix}$$
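The period of a state can be computed as the gcd of the step counts n at which $P_{ii}^{(n)} > 0$. A minimal sketch using the slide's three-state chain:

```python
from math import gcd
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

def period(P, i, max_n=50):
    """gcd of all n <= max_n with P_ii^(n) > 0 (0 if i never returns)."""
    g, Pn = 0, np.eye(P.shape[0])
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            g = gcd(g, n)
    return g

print(period(P, 0))   # -> 2: state 0 is periodic with period d = 2
```

The chain only returns to any state after an even number of steps, so every state has period 2.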

Page 15

Steady State Analysis

Recall that the state probability, i.e., the probability of finding the MC at state i after the k-th step, is given by

$$\pi_i(k) = \Pr(X_k = i), \qquad \boldsymbol{\pi}(k) = [\pi_0(k), \pi_1(k), \ldots]$$

An interesting question is what happens in the "long run":

$$\pi_i = \lim_{k \to \infty} \pi_i(k)$$

This is referred to as the steady state, equilibrium, or stationary state probability.

Questions:
- Do these limits exist?
- If they exist, do they converge to a legitimate probability distribution, i.e., $\sum_i \pi_i = 1$?
- How do we evaluate $\pi_j$ for all j?

Page 16

Steady State Analysis

Recall the recursive probability

$$\boldsymbol{\pi}(k+1) = \boldsymbol{\pi}(k)\, P$$

If a steady state exists, then $\boldsymbol{\pi}(k+1) \to \boldsymbol{\pi}(k)$, and therefore the steady state probabilities are given by the solution to the equations

$$\boldsymbol{\pi} = \boldsymbol{\pi} P \quad \text{and} \quad \sum_i \pi_i = 1$$

Even in an irreducible Markov chain, the presence of periodic states prevents the existence of a steady state probability.

Example (periodic.m):

$$P = \begin{pmatrix} 0 & 1 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \end{pmatrix}, \qquad \boldsymbol{\pi}(0) = [1, 0, 0]$$

Page 17

Steady State Analysis

THEOREM: In an irreducible aperiodic Markov chain consisting of positive recurrent states (an ergodic Markov chain), a unique stationary state probability vector $\boldsymbol{\pi}$ exists such that $\pi_j > 0$ and

$$\pi_j = \lim_{k \to \infty} \pi_j(k) = \frac{1}{M_j}$$

where $M_j$ is the mean recurrence time of state j.

The steady state vector $\boldsymbol{\pi}$ is determined by solving

$$\boldsymbol{\pi} = \boldsymbol{\pi} P \quad \text{and} \quad \sum_i \pi_i = 1$$
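Solving $\boldsymbol{\pi} = \boldsymbol{\pi} P$ together with $\sum_i \pi_i = 1$ is a small linear-algebra problem. A sketch in Python/NumPy (the 3 × 3 chain is an illustrative assumption; any irreducible aperiodic transition matrix works the same way):

```python
import numpy as np

# Illustrative irreducible, aperiodic chain (an assumption, not from slides).
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
n = P.shape[0]

# Stack pi (P - I) = 0 with the normalization constraint sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
b = np.zeros(n + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)          # stationary probabilities, all > 0
print(1.0 / pi)    # mean recurrence times M_j = 1/pi_j (by the theorem)
```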

Page 18

Discrete Birth-Death Example

[Figure: birth-death chain on states 0, 1, ..., i, ...; from each state the chain moves up with probability 1 − p and down (or stays at 0) with probability p.]

$$P = \begin{pmatrix} p & 1-p & 0 & 0 & \cdots \\ p & 0 & 1-p & 0 & \cdots \\ 0 & p & 0 & 1-p & \cdots \\ \vdots & & \ddots & \ddots & \ddots \end{pmatrix}$$

Thus, to find the steady state vector $\boldsymbol{\pi}$ we need to solve

$$\boldsymbol{\pi} = \boldsymbol{\pi} P \quad \text{and} \quad \sum_i \pi_i = 1$$

Page 19

Discrete Birth-Death Example

The balance equations give

$$\pi_0 = p\,\pi_0 + p\,\pi_1$$

In other words,

$$\pi_j = (1-p)\,\pi_{j-1} + p\,\pi_{j+1}, \qquad j = 1, 2, \ldots$$

Solving these equations we get

$$\pi_1 = \frac{1-p}{p}\,\pi_0, \qquad \pi_2 = \left(\frac{1-p}{p}\right)^2 \pi_0$$

In general,

$$\pi_j = \left(\frac{1-p}{p}\right)^j \pi_0$$

Summing all terms we get

$$\sum_{i=0}^{\infty} \pi_i = \pi_0 \sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i = 1 \quad \Rightarrow \quad \pi_0 = \frac{1}{\displaystyle\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i}$$

Page 20

Discrete Birth-Death Example

Therefore, for all states j we get

$$\pi_j = \left(\frac{1-p}{p}\right)^j \pi_0, \qquad \pi_0 = \frac{1}{\displaystyle\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i}$$

If p < 1/2, then $(1-p)/p > 1$ and the sum diverges, so

$$\pi_j = 0 \text{ for all } j$$

All states are transient.

If p > 1/2, the geometric sum converges:

$$\pi_0 = \frac{1}{\displaystyle\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i} = \frac{2p-1}{p}$$

$$\pi_j = \frac{2p-1}{p} \left(\frac{1-p}{p}\right)^j > 0 \text{ for all } j$$

All states are positive recurrent.

Page 21

Discrete Birth-Death Example

If p = 1/2, then

$$\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i = \sum_{i=0}^{\infty} 1 = \infty \qquad \Rightarrow \qquad \pi_j = 0 \text{ for all } j$$

All states are null recurrent.
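For p > 1/2 the closed-form solution can be checked numerically by truncating the infinite chain at N states (the truncation and the choice p = 0.7 are assumptions; with p well above 1/2 the truncation error is negligible):

```python
import numpy as np

p, N = 0.7, 60                      # p > 1/2, truncate at N states
P = np.zeros((N, N))
P[0, 0], P[0, 1] = p, 1 - p         # stay at 0 with prob p, move up with 1-p
for i in range(1, N - 1):
    P[i, i - 1], P[i, i + 1] = p, 1 - p
P[N - 1, N - 2], P[N - 1, N - 1] = p, 1 - p   # reflect at the truncation edge

# Solve pi = pi P with sum(pi) = 1 for the truncated chain.
A = np.vstack([P.T - np.eye(N), np.ones((1, N))])
b = np.zeros(N + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

r = (1 - p) / p
formula = (2 * p - 1) / p * r ** np.arange(N)   # pi_j = (2p-1)/p ((1-p)/p)^j
print(np.abs(pi - formula).max())               # tiny: formula matches
```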

Page 22

Reducible Markov Chains

In steady state, we know that the Markov chain will eventually end up either in an irreducible set, where the previous analysis still holds, or in an absorbing state.

The only question that arises, in case there are two or more irreducible sets, is the probability of ending in each set.

[Figure: a transient set T with transitions into two irreducible sets S1 and S2.]

Page 23

Reducible Markov Chains

Suppose we start from state $i \in T$. Then there are two ways to go to S: in one step, or by going to some $r \in T$ after k steps and then to S.

[Figure: a transient set T containing states i and r, with transitions into an irreducible set S containing states s1, ..., sn.]

Define

$$\sigma_i(S) \equiv \Pr(X_k \in S \mid X_0 = i), \qquad k = 1, 2, \ldots$$

Page 24

Reducible Markov Chains

First consider the one-step transition:

$$\Pr(X_1 \in S \mid X_0 = i) = \sum_{j \in S} p_{ij}$$

Next consider the general case for k = 2, 3, ...: condition on the first transition, so the chain moves to some transient state $r \in T$ and must then reach S from r. By the Markov property,

$$\Pr(X_k \in S,\; X_{k-1} \in T,\; \ldots,\; X_1 = r \in T \mid X_0 = i) = \Pr(X_k \in S,\; \ldots,\; X_2 \in T \mid X_1 = r)\; p_{ir}$$

Summing over the two cases (enter S immediately, or first move to some $r \in T$) gives

$$\sigma_i(S) = \sum_{j \in S} p_{ij} + \sum_{r \in T} p_{ir}\, \sigma_r(S)$$
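The last equation is a linear system in the unknowns $\sigma_i(S)$, $i \in T$, and can be solved directly. A sketch with an assumed four-state chain (states 0 and 1 transient, states 2 and 3 absorbing, target set S = {2}):

```python
import numpy as np

P = np.array([[0.2, 0.3, 0.5, 0.0],
              [0.4, 0.1, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
T = [0, 1]                         # transient states
S = [2]                            # target set (here a single absorbing state)

Q = P[np.ix_(T, T)]                # transitions staying inside T
b = P[np.ix_(T, S)].sum(axis=1)    # one-step probabilities of entering S
# sigma = b + Q sigma  =>  (I - Q) sigma = b
sigma = np.linalg.solve(np.eye(len(T)) - Q, b)
print(sigma)   # -> [0.75, 0.3333...]: absorption probabilities into S
```

Repeating with S = {3} gives the complementary probabilities, since every trajectory eventually leaves T.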