
Page 1: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains

CS433 Modeling and Simulation

Lecture 06 – Part 01

Discrete Markov Chains

Dr. Anis Koubâa
12 Apr 2009

Al-Imam Mohammad Ibn Saud University

Page 2: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Goals for Today

Understand what a stochastic process is
Understand the Markov property
Learn how to use Markov chains for modelling stochastic processes

Page 3: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


The overall picture …

Markov Process
  Discrete Time Markov Chains
    Homogeneous and non-homogeneous Markov chains
    Transient and steady-state Markov chains
  Continuous Time Markov Chains
    Homogeneous and non-homogeneous Markov chains
    Transient and steady-state Markov chains

Page 4: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


• Stochastic Process
• Markov Property

Markov Process

Page 5: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


What is “Discrete Time”?

[Figure: a time axis with ticks at discrete instants 0, 1, 2, 3, 4.]

Events occur at specific points in time.

Page 6: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


What is “Stochastic Process”?

Day 1   Day 2   Day 3   Day 4   Day 5   Day 6   Day 7
THU     FRI     SAT     SUN     MON     TUE     WED

X(day_i): status of the weather observed each DAY
State space = {SUNNY, RAINY}

Observed values: X(day_1) = "S", X(day_2) = "S", X(day_3) = "R", X(day_4) = "S", X(day_5) = "R", X(day_6) = "S", X(day_7) = "S"

X(day_i) = "S" or "R" is a RANDOM VARIABLE that varies with the DAY.

{X(day_i)} IS A STOCHASTIC PROCESS.

Page 7: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Markov Processes

A stochastic process X(t) is a random variable that varies with time. A state of the process is a possible value of X(t).

Markov Process: the future of a process does not depend on its past, only on its present. A Markov process is a stochastic (random) process in which the probability distribution of the current value is conditionally independent of the series of past values, a characteristic called the Markov property.

Markov property: the conditional probability distribution of future states of the process, given the present state and all past states, depends only upon the present state and not on any past states.

Markov Chain: a discrete-time stochastic process with the Markov property.

Page 8: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


What is “Markov Property”?

Day 1   Day 2   Day 3   Day 4   Day 5   Day 6   Day 7
THU     FRI     SAT     SUN     MON     TUE     WED

Observed so far: X(day_1) = "S", X(day_2) = "S", X(day_3) = "R", X(day_4) = "S", X(day_5) = "R".
Days 1-4 are PAST EVENTS, day 5 is NOW, days 6-7 are FUTURE EVENTS.

Markov property: the probability that it will be SUNNY on day 6 (FUTURE), given that it is RAINY on day 5 (NOW), is independent of the PAST EVENTS.

What is the probability of "S" (or "R") on day 6 given all previous states?

Pr[X(day_6) = "S" | X(day_5) = "R", X(day_4) = "S", ..., X(day_1) = "S"]
    = Pr[X(day_6) = "S" | X(day_5) = "R"]

Page 9: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Notation

X(t_k) or X_k = x_k

t_k or k: discrete time instant
X(t_k) or X_k: the stochastic process at time t_k (step k)
x_k: value of the stochastic process at instant t_k (step k)

Page 10: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Discrete Time Markov Chains (DTMC)

Markov Chain

Page 11: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Markov Processes

Markov Process: the future of a process does not depend on its past, only on its present:

Pr[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, ..., X(t_0) = x_0] = Pr[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k]

Since we are dealing with "chains", X(t_i) = X_i can take discrete values from a finite or a countably infinite set. The possible values of X_i form a countable set S called the state space of the chain.

For a Discrete-Time Markov Chain (DTMC), the notation is also simplified to

Pr[X_{k+1} = x_{k+1} | X_k = x_k, ..., X_0 = x_0] = Pr[X_{k+1} = x_{k+1} | X_k = x_k]

where X_k is the value of the state at the k-th step.

Page 12: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


General Model of a Markov Chain

[State transition diagram: states S_0, S_1, S_2 with transition probabilities p_00, p_01, p_10, p_11, p_12, p_20, p_21, p_22 on the arcs.]

S_i: state i
p_ij: transition probability from state S_i to state S_j
State space: S = {S_0, S_1, S_2} or S = {0, 1, 2}
Discrete time (slotted time): time ∈ {t_0, t_1, t_2, ..., t_k}, k ∈ {0, 1, 2, ...}

Page 13: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Example of a Markov Process: a very simple weather model

[State transition diagram: states SUNNY and RAINY with p_SS = 0.7, p_SR = 0.3, p_RS = 0.6, p_RR = 0.4.]

State space S = {SUNNY, RAINY}

If today is sunny, what is the probability of having sunny weather after one week?
If today is rainy, what is the probability of staying rainy for 3 days?

Problem: determine the transition probabilities from one state to another after n events.
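These two questions can also be answered numerically. Below is a minimal Python/NumPy sketch (not part of the original slides), assuming "after one week" means seven one-day transitions and "staying rainy for 3 days" means two consecutive RAINY-to-RAINY transitions:

```python
import numpy as np

# One-step transition matrix of the weather model; states ordered [SUNNY, RAINY].
P = np.array([[0.7, 0.3],
              [0.6, 0.4]])

# Q1: Pr(SUNNY after 7 days | SUNNY today) is the (SUNNY, SUNNY) entry of P^7.
P7 = np.linalg.matrix_power(P, 7)
print("Pr(sunny in one week | sunny today) =", P7[0, 0])

# Q2: staying rainy for 3 days = two consecutive RAINY -> RAINY transitions.
print("Pr(rainy for 3 days | rainy today) =", P[1, 1] ** 2)  # 0.4^2 = 0.16
```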

Page 14: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Five Minutes Break

You are free to discuss the previous slides with your classmates, to refresh a bit, or to ask questions.

Page 15: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Determine the transition probabilities from one state to another after n events.

Chapman-Kolmogorov Equation

Page 16: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Chapman-Kolmogorov Equations

We define the one-step transition probabilities at instant k as

p_ij(k) = Pr[X_{k+1} = j | X_k = i]

Necessary condition: for all states i, all instants k, and all feasible transitions from state i, we have

Σ_j p_ij(k) = 1, where j ranges over all neighbor states of i.

We define the n-step transition probabilities from instant k to k+n as

p_ij(k, k+n) = Pr[X_{k+n} = j | X_k = i]

[Figure: discrete time line k, k+1, ..., u, ..., k+n; the chain moves from x_i at time k through one of the intermediate states x_1, ..., x_R at time u to x_j at time k+n.]
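The normalization condition is easy to check numerically. A small sketch (the check itself is not in the slides) using the weather matrix from the earlier example:

```python
import numpy as np

P = np.array([[0.7, 0.3],    # SUNNY row: p_SS, p_SR
              [0.6, 0.4]])   # RAINY row: p_RS, p_RR

# Each row of a one-step transition matrix is a probability distribution
# over the next state, so every row must sum to exactly 1.
assert np.allclose(P.sum(axis=1), 1.0)
```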

Page 17: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Chapman-Kolmogorov Equations

Using the law of total probability, summing over the state X_u = r at an intermediate instant u (k ≤ u ≤ k+n):

p_ij(k, k+n) = Pr[X_{k+n} = j | X_k = i]
            = Σ_{r=1}^{R} Pr[X_{k+n} = j | X_u = r, X_k = i] Pr[X_u = r | X_k = i]

[Figure: discrete time line k, k+1, ..., u, ..., k+n with intermediate states x_1, ..., x_R at instant u.]

Page 18: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Chapman-Kolmogorov Equations

Using the memoryless property of Markov chains,

Pr[X_{k+n} = j | X_u = r, X_k = i] = Pr[X_{k+n} = j | X_u = r]

Therefore, we obtain the Chapman-Kolmogorov equation, valid for any intermediate instant u with k ≤ u ≤ k+n:

p_ij(k, k+n) = Σ_{r=1}^{R} Pr[X_{k+n} = j | X_u = r] Pr[X_u = r | X_k = i]
            = Σ_{r=1}^{R} p_ir(k, u) p_rj(u, k+n)

Page 19: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Chapman-Kolmogorov Equations: example on the simple weather model

[State transition diagram: states SUNNY and RAINY with p_SS = 0.7, p_SR = 0.3, p_RS = 0.6, p_RR = 0.4.]

What is the probability that the weather is rainy on day 3, knowing that it is sunny on day 1?

p_{sunny,rainy}(day_1, day_3) = p_{sunny,sunny}(day_1, day_2) p_{sunny,rainy}(day_2, day_3)
                              + p_{sunny,rainy}(day_1, day_2) p_{rainy,rainy}(day_2, day_3)
                              = p_SS p_SR + p_SR p_RR
                              = 0.7 × 0.3 + 0.3 × 0.4 = 0.21 + 0.12 = 0.33
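The same computation can be checked mechanically: the Chapman-Kolmogorov sum over the day-2 state equals the corresponding entry of the matrix product P·P. A short sketch (not from the slides):

```python
import numpy as np

# States ordered [SUNNY, RAINY].
P = np.array([[0.7, 0.3],
              [0.6, 0.4]])

# Chapman-Kolmogorov by hand: condition on the weather of day 2.
p_sr = P[0, 0] * P[0, 1] + P[0, 1] * P[1, 1]
print(p_sr)            # 0.7*0.3 + 0.3*0.4 = 0.33

# The same number is the (SUNNY, RAINY) entry of the two-step matrix P @ P.
print((P @ P)[0, 1])   # 0.33
```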

Page 20: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Generalization of the Chapman-Kolmogorov Equations

Transition Matrix

Page 21: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Transition Matrix: simplify the transition probability representation

Define the n-step transition matrix as

H(k, k+n) = [p_ij(k, k+n)]

We can rewrite the Chapman-Kolmogorov equation as follows:

H(k, k+n) = H(k, u) H(u, k+n)

Choose u = k+n-1; then

H(k, k+n) = H(k, k+n-1) H(k+n-1, k+n) = H(k, k+n-1) P(k+n-1)

where P(k+n-1) = H(k+n-1, k+n) is the one-step transition probability matrix. This is the forward Chapman-Kolmogorov equation.

Page 22: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Transition Matrix: simplify the transition probability representation

Choose u = k+1; then

H(k, k+n) = H(k, k+1) H(k+1, k+n) = P(k) H(k+1, k+n)

where P(k) = H(k, k+1) is the one-step transition probability matrix. This is the backward Chapman-Kolmogorov equation.
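When every one-step matrix equals the same P (the time-homogeneous case introduced two slides below), unrolling either recursion gives H(k, k+n) = P^n. A brief sketch of the forward recursion, again with the weather matrix:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.6, 0.4]])

# Forward Chapman-Kolmogorov: H(k, k+n) = H(k, k+n-1) @ P, starting from
# H(k, k) = I.  With a homogeneous chain this collapses to P^n.
n = 5
H = np.eye(2)
for _ in range(n):
    H = H @ P          # append one more one-step transition
assert np.allclose(H, np.linalg.matrix_power(P, n))
print(H)
```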

Page 23: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Transition Matrix: example on the simple weather model

What is the probability that the weather is rainy on day 3, knowing that it is sunny on day 1?

[State transition diagram: states SUNNY and RAINY with p_SS = 0.7, p_SR = 0.3, p_RS = 0.6, p_RR = 0.4.]

H(day_1, day_3) = [ p_{sunny,sunny}(day_1, day_3)   p_{sunny,rainy}(day_1, day_3) ]
                  [ p_{rainy,sunny}(day_1, day_3)   p_{rainy,rainy}(day_1, day_3) ]

Page 24: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Homogeneous Markov Chains: Markov chains with time-homogeneous transition probabilities

The one-step transition probabilities are independent of time k:

p_ij = Pr[X_{k+1} = j | X_k = i],  i.e.  H(k, k+1) = P for all k.

Even though the one-step transition probabilities are independent of k, this does not mean that the joint probability of X_{k+1} and X_k is also independent of k. Observe that

Pr[X_{k+1} = j and X_k = i] = Pr[X_{k+1} = j | X_k = i] Pr[X_k = i] = p_ij Pr[X_k = i]

Time-homogeneous Markov chains (or, Markov chains with time-homogeneous transition probabilities) are processes where

p_ij = Pr[X_{k+1} = j | X_k = i] = Pr[X_k = j | X_{k-1} = i] for all k.

p_ij = Pr[X_{k+1} = j | X_k = i] is then said to be a stationary transition probability.

Page 25: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Two Minutes Break

You are free to discuss the previous slides with your classmates, to refresh a bit, or to ask questions.

Page 26: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Example: Two-Processor System

Consider a two-processor computer system where time is divided into slots and that operates as follows:

At most one job can arrive during any time slot, and this happens with probability α.
Jobs are served by whichever processor is available; if both are available, the job is given to processor 1.
If both processors are busy, the job is lost.
When a processor is busy, it completes the job with probability β during any one time slot.
If a job is submitted during a slot when both processors are busy but at least one processor completes a job, then the job is accepted (departures occur before arrivals).

Q1. Describe the automaton that models this system (not included).
Q2. Describe the Markov chain that describes this model.

Page 27: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Example: Automaton (not included)

Let the number of jobs currently processed by the system be the state; then the state space is X = {0, 1, 2}.

Event set: a: job arrival, d: job departure

Feasible event sets: if X = 0, then Γ(X) = {a}; if X = 1 or 2, then Γ(X) = {a, d}.

[State transition diagram: states 0, 1, 2, with arcs labeled by the feasible event combinations (arrivals a, departures d, and simultaneous combinations such as a,d and d,d).]

Page 28: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Example: Alternative Automaton (not included)

Let (X1, X2) indicate whether processor 1 and processor 2 are busy, Xi ∈ {0, 1}.

Event set: a: job arrival, di: job departure from processor i

Feasible event sets:
If X = (0,0), then Γ(X) = {a}.
If X = (0,1), then Γ(X) = {a, d2}.
If X = (1,0), then Γ(X) = {a, d1}.
If X = (1,1), then Γ(X) = {a, d1, d2}.

[State transition diagram: states 00, 10, 01, 11, with arcs labeled by the feasible arrival/departure event combinations.]

Page 29: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Example: Markov Chain

For the state transition diagram of the Markov chain, each transition is simply marked with its transition probability:

[State transition diagram: states 0, 1, 2 with transition probabilities p_00, p_01, p_10, p_11, p_12, p_20, p_21, p_22.]

p_00 = 1 − α
p_01 = α
p_02 = 0
p_10 = β(1 − α)
p_11 = (1 − β)(1 − α) + αβ
p_12 = α(1 − β)
p_20 = β²(1 − α)
p_21 = 2β(1 − β)(1 − α) + αβ²
p_22 = (1 − β)² + 2αβ(1 − β)

Page 30: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


Example: Markov Chain

Suppose that α = 0.5 and β = 0.7. Then

P = [p_ij] = [ 0.5    0.5    0    ]
             [ 0.35   0.5    0.15 ]
             [ 0.245  0.455  0.3  ]

[State transition diagram: states 0, 1, 2 with these transition probabilities on the arcs.]
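A quick way to verify these entries is to code the transition probabilities as a function of α and β and evaluate it at α = 0.5, β = 0.7. A sketch (the function name is mine, not from the slides):

```python
import numpy as np

def two_processor_P(a, b):
    """One-step transition matrix of the two-processor system.

    a: probability that a job arrives in a slot (alpha)
    b: probability that a busy processor finishes in a slot (beta)
    States = number of busy processors: 0, 1, 2.
    """
    return np.array([
        [1 - a,          a,                            0.0],
        [b * (1 - a),    (1 - b) * (1 - a) + a * b,    a * (1 - b)],
        [b**2 * (1 - a), 2*b*(1 - b)*(1 - a) + a*b**2, (1 - b)**2 + 2*a*b*(1 - b)],
    ])

P = two_processor_P(0.5, 0.7)
print(P.round(3))                       # matches the matrix on the slide
assert np.allclose(P.sum(axis=1), 1.0)  # every row is a distribution
```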

Page 31: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


How much time does it take to go from one state to another?

State Holding Time

Page 32: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


State Holding Times

Suppose that at step k the Markov chain has transitioned into state X_k = i. An interesting question is how long it will stay in state i. Let V(i) be the random variable that represents the number of time slots for which X = i.

We are interested in the quantity Pr[V(i) = n]:

Pr[V(i) = n] = Pr[X_{k+n} ≠ i, X_{k+n-1} = i, ..., X_{k+1} = i | X_k = i]
            = Pr[X_{k+n} ≠ i | X_{k+n-1} = i, ..., X_k = i] Pr[X_{k+n-1} = i, ..., X_{k+1} = i | X_k = i]
            = Pr[X_{k+n} ≠ i | X_{k+n-1} = i] Pr[X_{k+n-1} = i, ..., X_{k+1} = i | X_k = i]

using the identity Pr[A, B | C] = Pr[A | B, C] Pr[B | C] and the Markov property.

Page 33: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


State Holding Times

Repeating the same factorization slot by slot gives

Pr[V(i) = n] = Pr[X_{k+n} ≠ i | X_{k+n-1} = i]
             × Pr[X_{k+n-1} = i | X_{k+n-2} = i] × ... × Pr[X_{k+1} = i | X_k = i]

With 1 − p_ii = Pr[X_{k+n} ≠ i | X_{k+n-1} = i] and a factor p_ii for each of the n−1 slots spent in state i, we obtain

Pr[V(i) = n] = p_ii^{n-1} (1 − p_ii)

This is the geometric distribution with parameter p_ii. Clearly, V(i) has the memoryless property.
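The geometric form is easy to confirm by simulation. A minimal sketch, assuming an illustrative self-loop probability p_ii = 0.5 (the value is mine, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
p_ii = 0.5   # assumed self-loop probability of state i

def holding_time():
    """Number of slots the chain spends in state i before leaving."""
    n = 1
    while rng.random() < p_ii:   # stay another slot with probability p_ii
        n += 1
    return n

samples = np.array([holding_time() for _ in range(100_000)])
for n in range(1, 6):
    empirical = np.mean(samples == n)
    exact = p_ii**(n - 1) * (1 - p_ii)   # Pr[V(i) = n]
    print(n, round(float(empirical), 4), exact)
```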

Page 34: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


State Probabilities

A quantity we are usually interested in is the probability of finding the chain at the various states; we define

π_i(k) = Pr[X_k = i]

For all possible states, we define the vector

π(k) = [π_0(k), π_1(k), ...]

Using total probability we can write

π_i(k+1) = Σ_j Pr[X_{k+1} = i | X_k = j] Pr[X_k = j] = Σ_j p_ji(k) π_j(k)

In vector form, one can write

π(k+1) = π(k) P(k),   or, for a homogeneous Markov chain,   π(k+1) = π(k) P

Page 35: CS433 Modeling and Simulation Lecture 06 –  Part 01  Discrete Markov Chains


State Probabilities: Example

Suppose that

π(0) = [1 0 0]

with

P = [ 0.5    0.5    0    ]
    [ 0.35   0.5    0.15 ]
    [ 0.245  0.455  0.3  ]

Find π(k) for k = 1, 2, ...

π(1) = π(0) P = [1 0 0] P = [0.5 0.5 0]

This is the transient behavior of the system. In general, the transient behavior is obtained by solving the difference equation π(k+1) = π(k) P.
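Iterating the difference equation directly gives the transient state probabilities. A small sketch (not from the slides):

```python
import numpy as np

P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])

pi = np.array([1.0, 0.0, 0.0])   # pi(0): start in state 0 with certainty

# Transient behavior: pi(k+1) = pi(k) P, applied repeatedly.
for k in range(1, 6):
    pi = pi @ P
    print(f"pi({k}) =", pi.round(4))
# pi(1) = [0.5, 0.5, 0], as on the slide; pi(k) approaches the chain's
# steady-state distribution as k grows.
```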