ENCS 6161 - ch11

Upload: dania-alashari

Post on 30-May-2018


  • 8/9/2019 ENCS 6161 - ch11

Chapter 11: Markov Chains
ENCS6161 - Probability and Stochastic Processes
Concordia University


Markov Processes

A random process is a Markov process if the future of the process, given the present, is independent of the past, i.e., if t_1 < t_2 < ... < t_k < t_{k+1}, then

P[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, ..., X(t_1) = x_1]
= P[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k]

if X(t) is discrete-valued, or

f_{X(t_{k+1})}(x_{k+1} | X(t_k) = x_k, ..., X(t_1) = x_1)
= f_{X(t_{k+1})}(x_{k+1} | X(t_k) = x_k)

if X(t) is continuous-valued.


Markov Processes

Example: S_n = X_1 + X_2 + ... + X_n, where the X_i are independent. Since S_{n+1} = S_n + X_{n+1},

P[S_{n+1} = s_{n+1} | S_n = s_n, ..., S_1 = s_1]
= P[S_{n+1} = s_{n+1} | S_n = s_n]

so S_n is a Markov process.

Example: The Poisson process is a continuous-time Markov process:

P[N(t_{k+1}) = j | N(t_k) = i, ..., N(t_1) = i_1]
= P[j - i events in an interval of length t_{k+1} - t_k]
= P[N(t_{k+1}) = j | N(t_k) = i]

An integer-valued Markov process is called a Markov chain.
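The sum-process example can be simulated directly. A minimal sketch in pure Python, using i.i.d. +/-1 steps as an illustrative choice for the X_i:

```python
import random

random.seed(1)

def sum_process(n_steps):
    """Generate S_1, ..., S_n where S_n = X_1 + ... + X_n, X_i i.i.d. +/-1 steps."""
    s, path = 0, []
    for _ in range(n_steps):
        s += random.choice([-1, 1])  # X_{n+1} is drawn independently of the past
        path.append(s)
    return path

path = sum_process(10)
# Each increment S_{n+1} - S_n is a single +/-1 step: given S_n, the next value
# does not depend on how the walk got there, which is the Markov property here.
assert all(abs(b - a) == 1 for a, b in zip(path, path[1:]))
```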


Discrete-Time Markov Chain

X_n is a discrete-time Markov chain starting at n = 0 with initial pmf

P_i(0) = P[X_0 = i], i = 0, 1, 2, ...

From the Markov property,

P[X_n = i_n, ..., X_0 = i_0]
= P[X_n = i_n | X_{n-1} = i_{n-1}] ... P[X_1 = i_1 | X_0 = i_0] P[X_0 = i_0]

where P[X_{k+1} = i_{k+1} | X_k = i_k] is called the one-step state transition probability.

If P[X_{k+1} = j | X_k = i] = p_ij for all k, X_n is said to have homogeneous transition probabilities, and

P[X_n = i_n, ..., X_0 = i_0] = P_{i_0}(0) p_{i_0,i_1} ... p_{i_{n-1},i_n}
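The factorization above turns a joint probability into a product of one-step terms. A minimal sketch; the two-state initial pmf and matrix below are illustrative, not from the slides:

```python
def path_probability(p0, P, path):
    """P[X_n = i_n, ..., X_0 = i_0] = P_{i0}(0) * p_{i0,i1} * ... * p_{i_{n-1},i_n}."""
    prob = p0[path[0]]              # initial pmf term P_{i0}(0)
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]             # one one-step transition probability per hop
    return prob

# illustrative two-state chain: initial pmf and transition matrix
p0 = [0.5, 0.5]
P = [[0.9, 0.1],
     [0.2, 0.8]]
print(path_probability(p0, P, [0, 0, 1]))  # 0.5 * 0.9 * 0.1
```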


Discrete-Time Markov Chain

The process is completely specified by the initial pmf P_{i_0}(0) and the transition matrix

P = | p_00  p_01  p_02  ... |
    | p_10  p_11  p_12  ... |
    | ...   ...   ...       |

where each row sums to one: sum_j p_ij = 1.


Discrete-Time Markov Chain

Example: Two-state Markov chain for speech activity (on-off source). Two states:

0 - silence (off)
1 - speech activity (on)

State transition diagram: from state 0, move to state 1 with probability α (stay with probability 1 - α); from state 1, move to state 0 with probability β (stay with probability 1 - β).

P = | 1 - α    α     |
    | β        1 - β |
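The on-off source can be simulated step by step. A minimal sketch, assuming the silence-to-speech probability is α and the speech-to-silence probability is β; the numeric values are illustrative:

```python
import random

random.seed(0)

def simulate_onoff(alpha, beta, n_steps, x0=0):
    """Simulate the two-state on-off chain: P[0->1] = alpha, P[1->0] = beta."""
    x, states = x0, []
    for _ in range(n_steps):
        u = random.random()
        if x == 0:
            x = 1 if u < alpha else 0   # leave silence with probability alpha
        else:
            x = 0 if u < beta else 1    # leave talk-spurt with probability beta
        states.append(x)
    return states

states = simulate_onoff(alpha=0.1, beta=0.2, n_steps=100_000)
print(sum(states) / len(states))  # fraction of "on" slots, about alpha/(alpha+beta) = 1/3
```

The long-run fraction of time spent in each state previews the steady-state result derived later in the chapter.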


Discrete-Time Markov Chain

The n-Step Transition Probabilities:

p_ij(n) = P[X_{k+n} = j | X_k = i], n >= 0

Let P(n) be the n-step transition probability matrix, i.e.,

P(n) = | p_00(n)  p_01(n)  p_02(n)  ... |
       | p_10(n)  p_11(n)  p_12(n)  ... |
       | ...      ...      ...          |

Then P(n) = P^n, where P is the one-step transition probability matrix.
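The identity P(n) = P^n can be checked numerically. A minimal pure-Python sketch; the two-state matrix is illustrative:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-step transition matrix P(n) = P^n, for n >= 1."""
    result = P
    for _ in range(n - 1):
        result = mat_mul(result, P)
    return result

P = [[0.9, 0.1],
     [0.2, 0.8]]
P2 = mat_pow(P, 2)
# two-step probability 0 -> 0: stay twice, or leave and come back
assert abs(P2[0][0] - (0.9 * 0.9 + 0.1 * 0.2)) < 1e-12
```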


The State Probabilities

Let p(n) = {P_j(n)} be the state probabilities at time n. Then

P_j(n) = sum_i P[X_n = j | X_{n-1} = i] P[X_{n-1} = i]
       = sum_i p_ij P_i(n-1)

i.e., p(n) = p(n-1) P.

By recursion:

p(n) = p(n-1) P = p(n-2) P^2 = ... = p(0) P^n
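The recursion p(n) = p(n-1) P can be iterated directly. A minimal sketch; the initial pmf and matrix are illustrative:

```python
def step(p, P):
    """One step of p(n) = p(n-1) P, i.e. P_j(n) = sum_i P_i(n-1) p_ij."""
    n = len(p)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.2, 0.8]]
p = [1.0, 0.0]        # start in state 0 with probability 1
for _ in range(3):
    p = step(p, P)    # computes p(1), p(2), p(3) in turn
print(p)
assert abs(sum(p) - 1.0) < 1e-12  # p(n) remains a pmf at every step
```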


Steady State Probabilities

In many cases, as n -> infinity, the Markov chain reaches steady state, in which the state probabilities no longer change with n, i.e.,

p(n) -> π, as n -> infinity

π is called the stationary state pmf.

If the steady state exists, then when n is large we have p(n) = p(n-1) = π, so

π = π P    (note: sum_i π_i = 1)
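When the steady state exists, iterating p(n) = p(n-1) P converges to the stationary pmf. A minimal power-iteration sketch; the matrix is illustrative:

```python
def stationary_pmf(P, tol=1e-12, max_iter=100_000):
    """Iterate p <- pP until it stops changing; the fixed point satisfies pi = pi P."""
    n = len(P)
    p = [1.0 / n] * n                     # any initial pmf works if steady state exists
    for _ in range(max_iter):
        q = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(p, q)) < tol:
            return q
        p = q
    return p

P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = stationary_pmf(P)
assert abs(sum(pi) - 1.0) < 1e-9          # pi is a pmf
print(pi)  # close to [2/3, 1/3]
```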


Steady State Probabilities

Example: Find the steady state pmf of the on-off source. From π = π P:

[π_0, π_1] | 1 - α    α     | = [π_0, π_1]
           | β        1 - β |

Together with π_0 + π_1 = 1, this gives

π_0 = β / (α + β),    π_1 = α / (α + β)
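The closed form for the on-off source can be cross-checked against π = π P. A minimal sketch, assuming α = P[0->1] and β = P[1->0]; the numeric values are illustrative:

```python
alpha, beta = 0.1, 0.2   # illustrative transition probabilities

# closed-form steady state of the two-state on-off chain
pi0 = beta / (alpha + beta)
pi1 = alpha / (alpha + beta)

# verify pi = pi P with P = [[1-alpha, alpha], [beta, 1-beta]]
assert abs(pi0 - (pi0 * (1 - alpha) + pi1 * beta)) < 1e-12
assert abs(pi1 - (pi0 * alpha + pi1 * (1 - beta))) < 1e-12
assert abs(pi0 + pi1 - 1.0) < 1e-12
```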


Continuous-Time Markov Chains

If P[X(s + t) = j | X(s) = i] = p_ij(t), t >= 0 for all s, then the continuous-time Markov chain X(t) has homogeneous transition probabilities.

The transition rate of X(t) entering state j from state i is defined as

r_ij = p'_ij(t)|_{t=0} = lim_{δ->0} p_ij(δ) / δ,            i != j
r_ii = p'_ii(t)|_{t=0} = lim_{δ->0} (p_ii(δ) - 1) / δ,      i = j

Note:

p_ij(0) = 0 for i != j,  p_ii(0) = 1
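The limits defining r_ij can be checked numerically when the transition functions are known. A sketch for a hypothetical two-state chain with rates λ (0 -> 1) and μ (1 -> 0), using the standard closed form of p_01(t) for that chain; the rate values are illustrative:

```python
import math

lam, mu = 2.0, 3.0   # illustrative rates: 0 -> 1 at rate lam, 1 -> 0 at rate mu

def p01(t):
    """Transition function 0 -> 1 of the two-state chain (standard closed form)."""
    return (lam / (lam + mu)) * (1.0 - math.exp(-(lam + mu) * t))

# r_01 = lim_{delta -> 0} p01(delta)/delta should recover the rate lam
delta = 1e-6
r01 = p01(delta) / delta
assert abs(r01 - lam) < 1e-3
```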


Continuous-Time Markov Chains

From

P_j(t + δ) = sum_i P_i(t) p_ij(δ)
P_j(t)     = sum_i P_i(t) p_ij(0)

we can show that

P_j(t + δ) - P_j(t) = sum_i P_i(t) [p_ij(δ) - p_ij(0)]

Dividing by δ and letting δ -> 0, we have:

P'_j(t) = sum_i P_i(t) r_ij

These are called the Chapman-Kolmogorov equations.
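These differential equations can be integrated numerically. A minimal forward-Euler sketch for a hypothetical two-state chain with exit rates λ (from state 0) and μ (from state 1); the diagonal entries r_ii = -(exit rate) follow from the rate definition, and the numeric values are illustrative:

```python
lam, mu = 2.0, 3.0                 # illustrative rates
R = [[-lam, lam],                  # r_ii is minus the total exit rate from i
     [mu, -mu]]

def euler_step(p, R, dt):
    """One forward-Euler step of P'_j(t) = sum_i P_i(t) r_ij."""
    n = len(p)
    return [p[j] + dt * sum(p[i] * R[i][j] for i in range(n)) for j in range(n)]

p = [1.0, 0.0]                     # start in state 0 with probability 1
dt = 1e-3
for _ in range(10_000):            # integrate out to t = 10, long enough to settle
    p = euler_step(p, R, dt)

print(p)  # approaches the steady state [mu, lam] / (lam + mu) = [0.6, 0.4]
```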


Steady State Probabilities

In the steady state, P_j(t) does not change with t, so

P'_j(t) = 0

and hence, from the Chapman-Kolmogorov equations,

sum_i P_i r_ij = 0 for all j

These are called the Global Balance Equations.
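For a hypothetical two-state chain with rates λ (0 -> 1) and μ (1 -> 0), the global balance equations together with the normalization sum_i P_i = 1 pin down the steady state. A minimal sketch with illustrative rate values:

```python
lam, mu = 2.0, 3.0    # illustrative rates: 0 -> 1 at rate lam, 1 -> 0 at rate mu

# global balance for j = 0:  P0 * (-lam) + P1 * mu = 0  =>  P1 = P0 * lam / mu
# normalization:             P0 + P1 = 1
P0 = 1.0 / (1.0 + lam / mu)
P1 = 1.0 - P0

# check sum_i P_i r_ij = 0 for both states
R = [[-lam, lam], [mu, -mu]]
for j in range(2):
    assert abs(P0 * R[0][j] + P1 * R[1][j]) < 1e-12
print(P0, P1)  # approximately 0.6 and 0.4
```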
