Stochastic Processes and Markov Chains



    Exercise 2: Stochastic Processes and Markov Chains

    Roman Dunaytsev

    Department of Communications Engineering, Tampere University of Technology

    [email protected]

    November 05, 2009


    Outline

    1 Random variables

    2 Stochastic processes

    3 Markov chains
        Transition matrices and probability vectors
        Regular matrices
        Probability to get from one state to another in a given number of steps
        Probability distribution after several steps
        Long-term behavior


    Random Variables

    Suppose that to each point of a sample space we assign a real number

    We then have a real-valued function defined on the sample space. This function is called a random variable

    It is usually denoted by a capital letter such as X

    Example: Suppose that a coin is tossed twice, so that the sample space is {TT, TH, HT, HH}

    Let X represent the number of heads that can come up

    Thus, we have:

    X(TT) = 0, X(TH) = 1, X(HT) = 1, X(HH) = 2
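
    As a quick illustration in Python (a minimal sketch), the random variable X can be written as a plain mapping from outcomes to numbers:

        # Sample space for two coin tosses; X counts the heads in each outcome
        sample_space = ["TT", "TH", "HT", "HH"]
        X = {outcome: outcome.count("H") for outcome in sample_space}
        print(X)  # {'TT': 0, 'TH': 1, 'HT': 1, 'HH': 2}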


    Random Variables (contd)

    Example: In an experiment involving 2 rolls of a die, the following are examples of random variables:

    The sum of the 2 rolls
    The number of 6s in the 2 rolls
    The second roll raised to the 5th power

    Example: In an experiment involving the transmission of a message, the following are examples of random variables:
    The number of symbols received in error
    The number of retransmissions required to get an error-free copy
    The time needed to transmit the message

    Example: You ask people whether they approve of the present government. The sample space could be:

    {disapprove, indifferent, approve}

    To analyze your results, you could use X = {−1, 0, 1} or X = {1, 2, 3}


    Random Variables (contd)

    A discrete random variable maps events to values of a countable set (e.g., the integers)

    A continuous random variable maps events to values of an uncountable set (e.g., the real numbers)

    A function of a random variable defines another random variable


    Stochastic Processes

    A stochastic process is a collection of random variables

    {S(t), t ∈ T}, where t is a parameter that runs over an index set T

    In general, we call t the time-parameter

    Each S(t) takes values in some set E called the state space

    Then S(t) is the state of the process at time (or step) t

    E.g., S(t) may be:
    The number of incoming emails at time t
    The balance of a bank account on day t
    The number of heads shown by t flips of a coin

    Since any stochastic process is simply a collection of random variables, the name random process is also used for them
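
    As an illustration of the last example, here is a minimal Python sketch that generates one sample path of the process S(t) = number of heads shown by t flips of a fair coin (a discrete-time, discrete-space process):

        import random

        random.seed(1)  # fix the seed so the path is reproducible

        T = 10
        S = [0]  # S(0) = 0 heads before any flips
        for t in range(1, T + 1):
            S.append(S[-1] + (1 if random.random() < 0.5 else 0))
        print(S)  # cumulative number of heads after 0, 1, ..., T flips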


    Stochastic Processes (contd)

    Stochastic processes are classified in a number of ways, such as by the index set and by the state space

    When the index set is countable, the stochastic process is said to be a discrete-time stochastic process

    I.e., T = {0, 1, 2, . . .} or T = {0, ±1, ±2, . . .}

    When the index set is an interval of the real line, the stochastic process is said to be a continuous-time stochastic process

    I.e., T = {t : t ≥ 0} or T = {t : −∞ < t < ∞}

    When the state space is countable, the stochastic process is said to be a discrete-space stochastic process

    When the state space is an interval of the real line, the stochastic process is said to be a continuous-space stochastic process


    Stochastic Processes (contd)

    [Figure: four sample paths illustrating the four combinations of time and state space]
    Income of a self-employed person at day t: discrete time, discrete space (DT & DS)
    Air temperature at noon over t days: discrete time, continuous space (DT & CS)
    Income of an employee at time t in the course of a year: continuous time, discrete space (CT & DS)
    Air temperature at time t: continuous time, continuous space (CT & CS)


    Markov Chains and Markov Processes

    A Markov chain, named after the Russian mathematician Andrey Markov, is a stochastic process for which the future behavior only depends on the present and not on the past

    A stochastic process has the Markov property (aka memorylessness or one-step memory) if the likelihood of a given future state, at any given moment, depends only on its present state, and not on any past states

    A process with this property is called Markovian or a Markov process

    The classic example of a Markov chain is a frog sitting in a pond full of lily pads

    Each pad represents a state
    The frog starts on one of the pads and then jumps from lily pad to lily pad with the appropriate transition probabilities


    Markov Chains

    A Markov chain as a frog jumping on a set of lily pads


    Markov Chains (contd)

    A finite Markov chain is a stochastic process with a finite number of states in which the probability of being in a particular state at step n + 1 depends only on the state occupied at step n

    Let S = {S1, S2, . . . , Sm} be the possible states

    Let p(n) = (p1, p2, . . . , pm) be the vector of probabilities of each state at step n
    Here pi, i = 1, 2, . . . , m, is the probability that the process is in state Si at step n

    For such a probability vector, p1 + p2 + · · · + pm = 1

    A Markov process at time n is fully defined by

    pij = Pr{S(n + 1) = j | S(n) = i}

    where pij is the conditional probability of being in state j at step n + 1 given that the process was in state i at step n


    Markov Chains (contd)

    Let P be the transition matrix (aka the stochastic matrix)

    Then P contains all the conditional probabilities of the Markov chain

    P =
        p11  ...  p1m
        ...  ...  ...
        pm1  ...  pmm

    A vector p = (p1, p2, . . . , pm) is called a probability vector if the components are nonnegative and their sum is 1

    The entries in a probability vector can represent the probabilities of finding a system in each of the states

    A square matrix P is called a transition matrix if each of its rows is a probability vector


    Markov Chains (contd)

    Example: Which of the following vectors are probability vectors?

    (0.75, 0, −0.25, 0.5), (0.75, 0.5, 0, 0.25), (0.25, 0.25, 0.5)

    Example: Which of the following matrices are matrices of transition probabilities? (A quick programmatic check is sketched after the matrices)

        1/3  0    2/3          1/4  3/4          0    1    0
        3/4  1/2  1/4          1/3  1/3          1/2  1/6  1/3
        1/3  1/3  1/3                            1/3  2/3  0
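
    A quick check of these examples in Python (a minimal sketch, assuming the first vector reads (0.75, 0, −0.25, 0.5) as above):

        # A probability vector has nonnegative entries summing to 1;
        # a transition (stochastic) matrix has a probability vector in every row.
        def is_probability_vector(v, tol=1e-9):
            return all(x >= 0 for x in v) and abs(sum(v) - 1.0) < tol

        def is_transition_matrix(m):
            return all(is_probability_vector(row) for row in m)

        vectors = [(0.75, 0, -0.25, 0.5), (0.75, 0.5, 0, 0.25), (0.25, 0.25, 0.5)]
        print([is_probability_vector(v) for v in vectors])  # [False, False, True]

        m1 = [[1/3, 0, 2/3], [3/4, 1/2, 1/4], [1/3, 1/3, 1/3]]
        m2 = [[1/4, 3/4], [1/3, 1/3]]
        m3 = [[0, 1, 0], [1/2, 1/6, 1/3], [1/3, 2/3, 0]]
        print([is_transition_matrix(m) for m in (m1, m2, m3)])  # [False, False, True]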


    Markov Chains (contd)

    Example: A meteorologist studying the weather in a region decides to classify each day as {sunny} or {cloudy}

    After analyzing several years of weather records, he finds:

    The day after a sunny day is sunny 80% of the time, and cloudy 20% of the time
    The day after a cloudy day is sunny 60% of the time, and cloudy 40% of the time

    Thus, the transition matrix and the transition diagram are as follows:

    P =
        pss  psc       0.8  0.2
        pcs  pcc   =   0.6  0.4

    [Transition diagram: self-loops sunny -> sunny 0.8 and cloudy -> cloudy 0.4; arrows sunny -> cloudy 0.2 and cloudy -> sunny 0.6]
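
    As a small illustrative sketch, this chain can be simulated directly from the matrix above; the empirical long-run fractions approach 75% sunny and 25% cloudy:

        import random

        random.seed(0)

        # Sunny/cloudy weather chain from the example above
        P = {"sunny":  {"sunny": 0.8, "cloudy": 0.2},
             "cloudy": {"sunny": 0.6, "cloudy": 0.4}}

        def next_state(state):
            # Draw tomorrow's weather given today's weather
            return "sunny" if random.random() < P[state]["sunny"] else "cloudy"

        state, counts = "sunny", {"sunny": 0, "cloudy": 0}
        for _ in range(100_000):
            state = next_state(state)
            counts[state] += 1
        print(counts)  # roughly 75000 sunny days and 25000 cloudy days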


    Markov Chains (contd)

    Example from Ryan O'Donnell's lecture: Every day I wake up and try to get a lot of work done

    When I am working, I am easily distracted though

    After each minute of work, I only keep working with probability 0.4, and with probability 0.6, I begin surfing the Web

    After each minute of surfing the Web, I only return to working with probability 0.1, and with probability 0.6 I continue surfing the Web

    With probability 0.3 I feel kind of guilty, so I check my email, which is sort of like working

    After each minute of email-checking, I have probability 0.5 of coming back to work

    With probability 0.5 I continue checking my email


    Markov Chains (contd)

    [Transition diagram for the work/surf/email chain; missing arrows indicate zero probability]

    P =
        pww  pws  pwe       0.4  0.6  0
        psw  pss  pse   =   0.1  0.6  0.3
        pew  pes  pee       0.5  0    0.5


    Markov Chains (contd)

    Example: 3 boys, A, B and C, are throwing a ball to each other

    A always throws the ball to B, and B always throws the ball to C
    C is just as likely to throw the ball to B as to A

    Find the transition matrix of the Markov chain

    This is a Markov chain since the person throwing the ball is not influenced by those who previously had the ball

    P =
        paa  pab  pac       0    1    0
        pba  pbb  pbc   =   0    0    1
        pca  pcb  pcc       0.5  0.5  0

    The first row of the matrix corresponds to the fact that A always throws the ball to B

    The second row corresponds to the fact that B always throws it to C

    The last row corresponds to the fact that C throws the ball to A or Bwith equal probability, and does not throw it to himself


    Markov Chains (contd)

    Let us consider a particle which moves in a straight line in unit steps

    Each step is one unit to the right with probability p or one unit to the left with probability q

    It moves until it reaches one of two extreme points, which are called boundary points

    We consider the case of 5 states: S1, S2, S3, S4, S5
    States S1 and S5 are the boundary states
    Then states S2, S3, and S4 are interior states

              S1   S2   S3   S4   S5
        S1   p11  ...  ...  ...  p15
        S2   ...  ...  ...  ...  ...
        S3   ...  ...  ...  ...  ...
        S4   ...  ...  ...  ...  ...
        S5   p51  ...  ...  ...  p55


    Markov Chains (contd)

    Example: Let us assume that once state S1 or S5 is reached, the particle remains there

    P =
        1  0  0  0  0
        q  0  p  0  0
        0  q  0  p  0
        0  0  q  0  p
        0  0  0  0  1

    A state Si of a Markov chain is called absorbing if it is impossible toleave it (i.e., pii = 1)

    A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to go to an absorbing state (not necessarily in one step)

    In an absorbing Markov chain, a state which is not absorbing is called transient
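
    A small simulation sketch of this absorbing walk (the state numbering, p = 0.5, and the starting state are illustrative choices):

        import random

        random.seed(0)

        p = 0.5  # probability of a step to the right; q = 1 - p to the left

        def walk_until_absorbed(state=3):
            # States 1..5; states 1 and 5 are absorbing, 2..4 are transient
            steps = 0
            while state not in (1, 5):
                state += 1 if random.random() < p else -1
                steps += 1
            return state, steps

        results = [walk_until_absorbed() for _ in range(10_000)]
        frac_right = sum(1 for final, _ in results if final == 5) / len(results)
        print(frac_right)  # close to 0.5: starting in the middle with p = 0.5, either boundary is equally likely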


    Markov Chains (contd)

    Example: Assume that the particle is reflected when it reaches a boundary point and returns to the point from which it came

    Thus, if it reaches S1, it goes on the next step back to S2

    If it hits S5, it goes on the next step back to S4

    Find the matrix of transition probabilities

    P =
        0  1  0  0  0
        q  0  p  0  0
        0  q  0  p  0
        0  0  q  0  p
        0  0  0  1  0

    Since q + p = 1, then q = 1 − p


    Markov Chains (contd)

    Example: Assume that whenever the particle hits one of the boundary states, it goes directly to the center state S3

    We may think of this as the process started at state S3 and repeated each time the boundary is reached

    Find the matrix of transition probabilities

    P =
        0  0  1  0  0
        q  0  p  0  0
        0  q  0  p  0
        0  0  q  0  p
        0  0  1  0  0

    q = 1 − p


    Markov Chains (contd)

    Example: Assume that once a boundary state is reached, the particle stays at this state with probability 0.5 and moves to the other boundary state with probability 0.5

    Find the matrix of transition probabilities

    P =
        0.5  0  0  0  0.5
        q    0  p  0  0
        0    q  0  p  0
        0    0  q  0  p
        0.5  0  0  0  0.5

    q = 1 − p


    Markov Chains (contd)

    A matrix P is called a stochastic matrix if it does not contain any negative entries and the sum of each row of the matrix is equal to 1

    The product of 2 stochastic matrices is again a stochastic matrix

    Therefore, all powers P^n of a stochastic matrix are stochastic matrices

    A stochastic matrix P is said to be regular if all elements of at least one particular power of P are positive and different from 0


    Matrix Multiplication

    In order for matrix multiplication to be defined, the dimensions of the matrices must satisfy (n × m)(m × p) = (n × p)

    The product C of 2 matrices A and B is defined as

    C = A x B:

        c1,1  c1,2  c1,3       a1,1  a1,2       b1,1  b1,2  b1,3
        c2,1  c2,2  c2,3   =   a2,1  a2,2   x   b2,1  b2,2  b2,3
        c3,1  c3,2  c3,3       a3,1  a3,2

    That is

    c1,2 = a1,1b1,2 + a1,2b2,2

    c3,3 = a3,1b1,3 + a3,2b2,3
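
    A minimal Python sketch of this entry-by-entry rule, with A of size 3 x 2 and B of size 2 x 3 as in the figure (the example numbers are arbitrary):

        # C = A * B: c[i][j] is the sum over k of a[i][k] * b[k][j]
        def matmul(A, B):
            n, m, p = len(A), len(B), len(B[0])
            assert all(len(row) == m for row in A), "inner dimensions must match"
            return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
                    for i in range(n)]

        A = [[1, 2], [3, 4], [5, 6]]              # 3 x 2
        B = [[7, 8, 9], [10, 11, 12]]             # 2 x 3
        C = matmul(A, B)                          # 3 x 3
        print(C[0][1] == A[0][0] * B[0][1] + A[0][1] * B[1][1])  # True, i.e. c1,2
        print(C[2][2] == A[2][0] * B[0][2] + A[2][1] * B[1][2])  # True, i.e. c3,3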


    Markov Chains (contd)

    Example: Let A be a stochastic matrix:

    A =
        a1,1  a1,2       0    1
        a2,1  a2,2   =   0.5  0.5

    Let us calculate A^2:

    A^2 =
        a1,1  a1,2       a1,1  a1,2
        a2,1  a2,2   x   a2,1  a2,2

    Thus, we get:

    A^2 =
        a1,1 a1,1 + a1,2 a2,1    a1,1 a1,2 + a1,2 a2,2       0.5   0.5
        a2,1 a1,1 + a2,2 a2,1    a2,1 a1,2 + a2,2 a2,2   =   0.25  0.75

    Hence, A is regular
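
    The same check with NumPy (a short sketch; NumPy is assumed to be available):

        import numpy as np

        A = np.array([[0.0, 1.0],
                      [0.5, 0.5]])
        A2 = A @ A
        print(A2)              # [[0.5  0.5 ]
                               #  [0.25 0.75]]
        print(np.all(A2 > 0))  # True, so A is regular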


    Markov Chains (contd)

    The entry pij in the transition matrix P of a Markov chain is the probability that the system changes from state Si to state Sj in 1 step

    P =
        p11  ...  p1m
        ...  ...  ...
        pm1  ...  pmm

    What is the probability that the system changes from state Si to state Sj in exactly n steps?

    As a rule, this probability is denoted as pij^(n)

    The probability of going from any state Si to another state Sj in a finite Markov chain with the transition matrix P in n steps is given by the element (i, j) of the matrix P^n


    Markov Chains (contd)

    Example: The Land of Oz is blessed by many things, but not by good weather:

    They never have 2 nice days in a row
    If they have a nice day, they are just as likely to have snow as rain the next day
    If they have snow or rain, they have an even chance of having the same the next day

    If there is a change from snow or rain, only half of the time is this a change to a nice day

    With this information let us form a Markov chain

    Let us denote as states the kinds of weather: {rain}, {nice}, and {snow}

    From the above information we determine the transition matrix:

    P =
        prr  prn  prs       1/2  1/4  1/4
        pnr  pnn  pns   =   1/2  0    1/2
        psr  psn  pss       1/4  1/4  1/2


    Markov Chains (contd)

    We consider the question of determining the probability that, given the chain is in state Si today, it will be in state Sj n days from now (pij^(n))

    Example: Let us determine the probability that, if it is rainy today, it will be snowy 2 days from now (prs^(2))

    We have the transition matrix:

        prr  prn  prs       p1,1  p1,2  p1,3       1/2  1/4  1/4
        pnr  pnn  pns   =   p2,1  p2,2  p2,3   =   1/2  0    1/2
        psr  psn  pss       p3,1  p3,2  p3,3       1/4  1/4  1/2

    Then

    P^2 =
        p1,1^(2)  p1,2^(2)  p1,3^(2)
        p2,1^(2)  p2,2^(2)  p2,3^(2)
        p3,1^(2)  p3,2^(2)  p3,3^(2)

    p1,3^(2) = p1,1 p1,3 + p1,2 p2,3 + p1,3 p3,3 = 3/8
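
    The same two-step probability computed numerically (a small NumPy sketch; states ordered rain, nice, snow):

        import numpy as np

        P = np.array([[0.50, 0.25, 0.25],
                      [0.50, 0.00, 0.50],
                      [0.25, 0.25, 0.50]])
        P2 = np.linalg.matrix_power(P, 2)
        # rain today -> snow two days from now: entry (1, 3), i.e. index [0, 2]
        print(P2[0, 2])  # 0.375 = 3/8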


    Markov Chains (contd)


    Consider a Markov chain process with the transition matrix P

    Let p(0) = (p1^(0), p2^(0), . . . , pm^(0)) be the initial probability distribution

    And let p(n) = (p1^(n), p2^(n), . . . , pm^(n)) be the probability distribution at step n

    Then the probability distribution at step n can be found as

    p(n) = p(0) P^n

    That is

    p(1) = p(0) P
    p(2) = p(1) P = p(0) P^2
    . . .
    p(n) = p(n−1) P = p(0) P^n


    Markov Chains (contd)


    Example: In the Land of Oz example, we have:

    P =
        prr  prn  prs       0.5   0.25  0.25
        pnr  pnn  pns   =   0.5   0     0.5
        psr  psn  pss       0.25  0.25  0.5

    Let the initial probability vector be p(0) = (1/3, 1/3, 1/3)

    Hence, the probability distribution of the states after 3 days is p(3) = p(0) P^3:

                             0.406  0.203  0.391
        (1/3, 1/3, 1/3)  x   0.406  0.188  0.406   =   (0.401, 0.198, 0.401)
                             0.391  0.203  0.406
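
    A short NumPy sketch of the same computation (states ordered rain, nice, snow):

        import numpy as np

        P = np.array([[0.50, 0.25, 0.25],
                      [0.50, 0.00, 0.50],
                      [0.25, 0.25, 0.50]])
        p0 = np.array([1/3, 1/3, 1/3])            # initial distribution
        p3 = p0 @ np.linalg.matrix_power(P, 3)    # p(3) = p(0) P^3
        print(p3.round(3))                        # [0.401 0.198 0.401]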


    Markov Chains (contd)


    Example: Every day, a man either catches a bus or drives his car to work

    Suppose he never goes by bus 2 days in a row

    But if he drives to work, then the next day he is just as likely to drive again as he is to travel by bus

    The state space of the system is {bus} and {car}

    This stochastic process is a Markov chain since the outcome on any day depends only on what happened the preceding day

    P =
        pbb  pbc       0    1
        pcb  pcc   =   1/2  1/2


    Markov Chains (contd)


    Example: Suppose now that on the first day the man tosses a fair die and drives to work if and only if a 6 appeared

    Then the initial probability distribution is given by

    p(0) = (5/6, 1/6),   P =
        pbb  pbc       0    1
        pcb  pcc   =   1/2  1/2

    Find the probability distribution after 4 days

    We have that p(4) = p(0) P^4

    Then

    (5/6, 1/6)   x   3/8   5/8     =   (35/96, 61/96)
                     5/16  11/16

    Thus, the probability of traveling to work by bus is 35/96 and driving to work is 61/96
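
    A short NumPy sketch that reproduces these fractions (states ordered bus, car):

        import numpy as np

        P = np.array([[0.0, 1.0],
                      [0.5, 0.5]])
        p0 = np.array([5/6, 1/6])                 # from the fair-die toss on day 1
        P4 = np.linalg.matrix_power(P, 4)
        print(P4)        # [[0.375 0.625], [0.3125 0.6875]] = [[3/8, 5/8], [5/16, 11/16]]
        print(p0 @ P4)   # [0.36458333 0.63541667] = (35/96, 61/96)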


    Markov Chains (contd)


    Let P be the transition matrix for a regular chain

    Then, as n → ∞, the powers P^n approach a limiting matrix W, where all rows approach the same vector w

    The vector w is a steady-state vector

    A regular Markov chain has only one steady-state vector

    The steady-state vector w gives the long-term probability distribution of the states of the Markov chain

    w = w P,    Σi wi = 1

    Thus, if a Markov system is regular, then its long-term transition matrix is given by the square matrix whose rows are all the same and equal to the steady-state vector


    Markov Chains (contd)


    Example from Ryan O'Donnell's lecture: After 1 minute:

    P =
        pww  pws  pwe       0.4  0.6  0
        psw  pss  pse   =   0.1  0.6  0.3
        pew  pes  pee       0.5  0    0.5

    After 10 minutes we get:

    P^10 ≈
        0.2940  0.4413  0.2648
        0.2942  0.4411  0.2648
        0.2942  0.4413  0.2648

    And after 1 hour (i.e., P^60) the result is almost the same:

        0.294117647058823  0.441176470588235  0.264705882352941
        0.294117647058823  0.441176470588235  0.264705882352941
        0.294117647058823  0.441176470588235  0.264705882352941
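
    A short NumPy sketch that reproduces these powers (states ordered work, surf, email):

        import numpy as np

        P = np.array([[0.4, 0.6, 0.0],
                      [0.1, 0.6, 0.3],
                      [0.5, 0.0, 0.5]])
        for n in (1, 10, 60):
            print(n)
            print(np.linalg.matrix_power(P, n).round(6))
        # By n = 60 every row has converged to roughly (0.294118, 0.441176, 0.264706),
        # i.e. the exact steady-state vector (10/34, 15/34, 9/34)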


    Markov Chains (contd)


    Example: After 1 day, the weather in the Land of Oz is given by

    P =
        prr  prn  prs       0.5   0.25  0.25
        pnr  pnn  pns   =   0.5   0     0.5
        psr  psn  pss       0.25  0.25  0.5

    After 1 week:

    P^7 =
        0.4  0.2  0.4
        0.4  0.2  0.4
        0.4  0.2  0.4

    Let the initial probability vector p(0) be either (1/3, 1/3, 1/3) or (1/10, 8/10, 1/10)

    Even then, after 1 week the result is the same

    p(7) = p(0) P^7 = (0.4, 0.2, 0.4)

    Hence, in the long run, the starting state does not really matter


    Markov Chains (contd)


    If the higher and higher powers of P approach a fixed matrix P^∞, we refer to P^∞ as the steady-state or long-term transition matrix

    It is often possible to approximate P^∞ with great accuracy by simply computing a very large power of P

    Q: How large?
    A: You know it is large enough when the rows are all the same with the accuracy you desire

    Matrix Algebra Tool: http://people.hofstra.edu/stefan_waner/RealWorld/matrixalgebra/fancymatrixalg2.html


    Markov Chains (contd)


    Example: For the Land of Oz, the transition matrix is

    P =
        prr  prn  prs       0.5   0.25  0.25
        pnr  pnn  pns   =   0.5   0     0.5
        psr  psn  pss       0.25  0.25  0.5

    The steady-state vector is given by w = (0.4, 0.2, 0.4), so

                                 0.5   0.25  0.25
    w P = (0.4, 0.2, 0.4)   x    0.5   0     0.5    =   (0.4, 0.2, 0.4) = w
                                 0.25  0.25  0.5

    Therefore, the long-term transition matrix is

    W = P^∞ =
        0.4  0.2  0.4
        0.4  0.2  0.4
        0.4  0.2  0.4
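
    The steady-state vector can also be computed directly, without taking powers, by solving w = w P together with Σi wi = 1 (a small NumPy sketch; the linear-system setup here is one possible formulation):

        import numpy as np

        P = np.array([[0.50, 0.25, 0.25],
                      [0.50, 0.00, 0.50],
                      [0.25, 0.25, 0.50]])

        # w = w P is equivalent to (P^T - I) w^T = 0; add the normalization sum(w) = 1
        m = P.shape[0]
        A = np.vstack([P.T - np.eye(m), np.ones((1, m))])
        b = np.append(np.zeros(m), 1.0)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(w.round(3))  # [0.4 0.2 0.4]
        print(w @ P)       # (0.4, 0.2, 0.4) again, so w P = w

    Solving the linear system avoids having to decide how large a power of P is "large enough".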
