CA659 Discrete Models: Part III (School of Computing)


  • Part III: Discrete Models

    Outline:

    3 Discrete Models
        Linear Difference Equations
        Non-Linear Models with Applications

  • Glossary of Terms

    Here are some of the types of symbols you will see in this section:

    Name                       Symbol Face               Examples
    Vector                     Lower-case bold           u, v, p
    Matrix                     Upper-case bold           M, X, A
    Vector at time step        Subscript                 u_0
    Age category & time step   Subscript & superscript   u_0^t

  • First Order Linear Difference Equations

    We start with the most basic equations.
    State at time t is purely related to that at t − 1.
    An example in nature is cell division:

    M_{n+1} = a M_n    (3.1)

    a constant, n the generation number.
    So the number in the nth generation is related to that in the first by:

    M_n = a M_{n−1} = ... = a^n M_0    (3.2)

    So if
    1. |a| > 1 the population will increase,
    2. |a| = 1 the population will be stable,
    3. |a| < 1 the population will decrease.
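A minimal Python sketch of this model (function name illustrative): it applies Eqn.(3.1) repeatedly, which is equivalent to the closed form of Eqn.(3.2).

```python
def geometric(m0, a, n):
    """Iterate M_{k+1} = a * M_k for n generations (Eqn. 3.1)."""
    m = m0
    for _ in range(n):
        m = a * m
    return m

# Equivalent closed form, Eqn.(3.2): M_n = a**n * M_0
print(geometric(100, 2.0, 3))   # |a| > 1: growth -> 800.0
print(geometric(100, 0.5, 3))   # |a| < 1: decay  -> 12.5
```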

  • Examples of Linear Difference Equations

    Example 3.1: Rabbit Reproduction
    Difference eqn Order: no. of terms determining the present state.

    Examples of higher order difference eqns are common in nature.

    Leonardo of Pisa (a.k.a. Fibonacci) modelled rabbit reproduction.

    Assumptions of the Fibonacci model:
    1. Each pair of rabbits can reproduce from two months old,
    2. Each reproduction produces only one pair of rabbits,
    3. All rabbits survive.

    The number of rabbit pairs at t_{n+1}, M_{n+1} (n = months), is:

    M_{n+1} = M_n + M_{n−1}.    (3.3)

    With M_0 = 1, M_1 = 1 (1 pair at t = 0), the number grows as
    1, 1, 2, 3, 5, 8, 13, ...
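The recurrence of Eqn.(3.3) is easy to check directly; a minimal sketch (function name illustrative):

```python
def rabbit_pairs(n):
    """M_{k+1} = M_k + M_{k-1} (Eqn. 3.3), with M_0 = M_1 = 1."""
    prev, cur = 1, 1
    for _ in range(n):
        prev, cur = cur, prev + cur
    return prev

print([rabbit_pairs(k) for k in range(7)])  # [1, 1, 2, 3, 5, 8, 13]
```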

  • Examples of Linear Difference Equations (/2)

    Example 3.1: Rabbit Reproduction (cont’d)

    FIGURE 3.1: Fibonacci Numbers of Immature & Mature Rabbits

  • Examples of Linear Difference Equations (/3)

    Example 3.1: Rabbit Reproduction (cont’d)
    Instead of Eqn.(3.3), a ‘one step’ eqn (e.g. Eqn.(3.1)) is better.

    Get this by writing Eqn.(3.3) in the form:

    M_{n+1} + M_n = M_{n+2}
    M_{n+1}       = M_{n+1}    (3.4)

    which, by writing

    u_n = ( M_{n+1} )
          ( M_n     )

    takes the form

    u_{n+1} = ( 1  1 ) u_n.    (3.5)
              ( 1  0 )
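The same recurrence as a ‘one step’ matrix system, sketched in plain Python with a 2-tuple standing in for the vector u_n (an illustrative sketch of Eqn.(3.5), not library code):

```python
def step(u):
    """One step of u_{n+1} = [[1, 1], [1, 0]] u_n, with u = (M_{n+1}, M_n)."""
    return (u[0] + u[1], u[0])

u = (1, 1)  # (M_1, M_0)
for _ in range(5):
    u = step(u)
print(u)  # (13, 8): the pair (M_6, M_5)
```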

  • Digression: Matrix Basics

    Matrices & Vectors
    A matrix is an array of coefficients of the form:

    A = ( a11  a12  ...  a1n )
        ( a21  a22  ...  a2n )
        (  ⋮    ⋮    ⋱    ⋮  )
        ( am1  am2  ...  amn )    (3.6)

    This is an m × n matrix with m rows & n columns.
    A vector is an array of coefficients of the form:

    a = ( a11 )
        ( a21 )
        (  ⋮  )
        ( am1 )

  • Digression: Matrix Basics (/2)

    Matrix Systems
    In the course we will see systems of equations of the form:

    a11 x1 + a12 x2 = b1
    a21 x1 + a22 x2 = b2    (3.7)

    for a system of two equations in 2 unknowns x1, x2, constant
    coefficients a11, a12, a21, a22 & right-hand sides b1, b2.

    With matrix multiplication, this can be written as:

    Ax = b ≡ ( a11  a12 ) ( x1 ) = ( b1 )    (3.8)
             ( a21  a22 ) ( x2 )   ( b2 )

  • Digression: Matrix Basics (/3)

    Matrix Inverse, Identity Matrix
    Can show that Eqn.(3.8) has a unique solution if A⁻¹ exists.

    A⁻¹ has the property:

    A × A⁻¹ = Identity Matrix I

    The n × n identity matrix is given by:

    I = ( 1  0  ...  0 )
        ( 0  1  ...  0 )
        ( ⋮  ⋮   ⋱   ⋮ )
        ( 0  0  ...  1 )    (3.9)

  • Digression: Matrix Basics (/4)

    Solutions to Matrix Systems: Matrix Determinant
    To solve x = (x1, x2) in Eqn.(3.8) we must find A⁻¹.
    For a 2 × 2 matrix, A⁻¹ is given by:

    A⁻¹ = (1/det(A)) (  a22  −a12 )    (3.10)
                     ( −a21   a11 )

    where det(A) is the determinant of the matrix A:
    det(A) = a11 a22 − a12 a21.
    Eqn.(3.10) holds for a 2 × 2 matrix only.
    The solution of Eqn.(3.8) only exists if this condition is met:

    det(A) ≡ | a11  a12 | ≠ 0    (3.11)
             | a21  a22 |
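Eqn.(3.10) and the condition of Eqn.(3.11) fit in a few lines; a minimal sketch (function name and flat-tuple representation are illustrative):

```python
def inv2(a11, a12, a21, a22):
    """2x2 inverse via Eqn.(3.10); only defined when det(A) != 0 (Eqn. 3.11)."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular matrix: no inverse")
    return (a22 / det, -a12 / det,
            -a21 / det, a11 / det)

# A = [[1, 2], [3, 4]], det(A) = -2
print(inv2(1, 2, 3, 4))  # (-2.0, 1.0, 1.5, -0.5)
```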

  • Digression: Matrix Basics (/5)

    Matrix Characteristic Equation, Trace
    The characteristic equation for A is given by det(A − λI) = 0.
    It arises in a number of circumstances.

    For a 2 × 2 matrix, this expression becomes:

    det(A − λI) = 0 ≡ | a11 − λ   a12     | = 0    (3.12)
                      | a21       a22 − λ |

    which reduces to λ² − (a11 + a22)λ + (a11 a22 − a12 a21) = 0.
    This we rewrite as

    λ² − pλ + q = 0    (3.13)

    where p = a11 + a22 is the Trace of A & q = det(A), its determinant.

  • Digression: Matrix Basics (/6)

    Matrix Eigenvalues, Eigenvectors
    The roots of the quadratic equation in Eqn.(3.13) are:

    λ1,2 = p/2 ± √(p² − 4q)/2    (3.14)

    these are known as the eigenvalues of A.
    Can show that any diagonalizable matrix A can be decomposed as follows:

    A = S Λ S⁻¹    (3.15)

    – S has the eigenvectors of A, v1, v2, as its columns,
    – Λ has the eigenvalues on its diagonal & zeros elsewhere:

    A = S ( λ1  0  ) S⁻¹    (3.16)
          ( 0   λ2 )
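Eqn.(3.14) can be checked numerically; a minimal sketch (function name illustrative, assuming real eigenvalues so the square root is defined):

```python
import math

def eigen2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix via lambda^2 - p*lambda + q = 0 (Eqns. 3.13-3.14)."""
    p = a11 + a22                 # trace
    q = a11 * a22 - a12 * a21     # determinant
    d = math.sqrt(p * p - 4 * q)  # assumes real eigenvalues (p^2 >= 4q)
    return (p + d) / 2, (p - d) / 2

l1, l2 = eigen2(1, 1, 1, 0)  # the Fibonacci matrix of Eqn.(3.5)
print(round(l1, 4), round(l2, 4))  # 1.618 -0.618
```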

  • Digression: Matrix Basics (/7)

    Matrix Eigendecomposition, Singular Value Decomposition
    Eqn.(3.15) is useful to raise matrices to powers:

    A³ = (S Λ S⁻¹)(S Λ S⁻¹)(S Λ S⁻¹) = S Λ³ S⁻¹    (3.17)

    The eigenvectors v1, v2 are the solutions to the linear system

    A x = λ x

    for λ = λ1, λ2 respectively.

    – As with the λ’s, these have physical meanings for the system.
    – Eqn.(3.15) is known as the eigendecomposition of a square matrix.
    – Where the matrix is not square, the analogue is the singular value
      decomposition.

  • Digression: Matrix Basics (/8)

    Matrix Decomposition & Difference Equations
    For difference equations the system at time step n is related to
    that at the previous step n − 1 through the system:

    u_n = A u_{n−1} = A^n u_0    (3.18)

    Using the eigendecomposition A = S Λ S⁻¹ and setting

    ( c1 ) = S⁻¹ u_0
    ( c2 )

    we can see that

    u_n = c1 λ1^n v1 + c2 λ2^n v2    (3.19)

    where c1, c2 are constants.

  • Digression: Matrix Basics (/9)

    Matrix Decomposition, Difference & Differential Equations
    Differential equations have a similar result: a second order ODE
    system (often) has a solution of the form:

    x(t) = ( x(t) ) = c1 v1 e^{λ1 t} + c2 v2 e^{λ2 t}.    (3.20)
           ( y(t) )

    where c1, c2 are constants & e^t is the exponential function.

    – So solutions of difference & differential equations are expressed
      as a linear combination of the original matrix system’s λ’s & v’s.
    – This is useful in finding long-term solutions of matrix systems,
      e.g. the Fibonacci series.

  • Back to Fibonacci Sequences

    Eigenvalues and the Fibonacci Difference Equation
    Long-term system behaviour (Eqn.(3.5)) is found from Eqn.(3.17):

    u_n = A^n u_0 = S Λ^n S⁻¹ u_0    (3.21)

    – Given that

    A = ( 1  1 )
        ( 1  0 )

    – from Eqn.(3.5), we find the characteristic equation to be

    λ² − λ − 1 = 0

    (from Eqn.(3.12)).
    – This gives the eigenvalues

    λ1 = (1 + √5)/2,  λ2 = (1 − √5)/2.

  • Stability of Fibonacci Sequences

    The full eigendecomposition for A can then be found to be

    A = (1/(λ1 − λ2)) ( λ1  λ2 ) ( λ1  0  ) (  1  −λ2 )    (3.22)
                      (  1   1 ) ( 0   λ2 ) ( −1   λ1 )

    Thus Eqn.(3.21) reduces to

    ( M_{n+1} ) = S ( λ1^n  0    ) S⁻¹ ( 1 )    (3.23)
    ( M_n     )     ( 0     λ2^n )     ( 0 )

    The nth Fibonacci number, the 2nd element of the vector on the left
    hand side of Eqn.(3.23), M_n, can be shown to be:

    M_n = (λ1^n − λ2^n)/(λ1 − λ2)
        = (1/√5) [ ((1 + √5)/2)^n − ((1 − √5)/2)^n ]    (3.24)

    where the second term → 0 as n grows, since |λ2| < 1.
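The closed form of Eqn.(3.24) can be checked against the recurrence; a minimal sketch (function name illustrative; note that with the slides’ convention M_0 = M_1 = 1, Eqn.(3.24) gives M_n at index n + 1, a bookkeeping offset only):

```python
import math

def binet(n):
    """Closed-form Fibonacci number from Eqn.(3.24)."""
    s5 = math.sqrt(5)
    l1, l2 = (1 + s5) / 2, (1 - s5) / 2
    # the lambda_2 term shrinks to 0, so rounding recovers the integer
    return round((l1 ** n - l2 ** n) / s5)

print([binet(n) for n in range(1, 8)])  # [1, 1, 2, 3, 5, 8, 13]
```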

  • Stability of Fibonacci Sequences (/2)

    The Golden Number & Fibonacci Sequences
    The Ancient Greeks knew φ = (1 + √5)/2 as the golden number.

    This was because rectangles with sides in the ratio 1 : 1.618
    were thought the most elegant.

    It occurs frequently in nature and persists in everyday
    designs, e.g. credit cards etc.

    As λ1 > 1 & −1 < λ2 < 0, ∴ λ1 is λmax.
    Its magnitude means the Fibonacci sequence is monotonically
    increasing.

    This can be seen in Fig. 3.2.

  • Stability of Fibonacci Sequences (/3)

    FIGURE 3.2: Fibonacci Sequence to 10 Generations

  • Example 3.2: Bonham Sequences

    Example 3.2: Pig Reproduction
    A pair of bonhams matures to a pair of pigs the next season.

    Mature pairs produce 6 pairs of bonhams the following season &
    every successive season thereafter.

    Each pair of bonhams produced takes one season to mature & a
    further season to start breeding every subsequent season.

    This can be seen in the diagram (Fig 3.3).

    Assume breeding is seasonal so generations do not overlap &
    pigs are long-lived.

  • Example 3.2: Bonham Sequences (/2)

    FIGURE 3.3: Numbers of Immature & Mature Pigs

  • Example 3.2: Bonham Sequences (/3)

    As with Eqn.(3.5), can express the number of pairs of pigs in the
    (n+1)th generation w.r.t. the nth:

    u_{n+1} = ( 1  6 ) u_n.    (3.25)
              ( 1  0 )

    which (from Eqn.(3.12)) leads to the eigenvalue problem:

    det(A − λI) = 0 ≡ | 1 − λ   6     | = 0    (3.26)
                      | 1       0 − λ |

    which reduces to

    λ² − λ − 6 = 0    (3.27)

    giving eigenvalues λ1 = 3 and λ2 = −2.

  • Example 3.2: Bonham Sequences (/4)

    The full eigendecomposition can then be found to be:

    A = (1/5) ( 3   2 ) ( 3  0  ) ( 1   2 )    (3.28)
              ( 1  −1 ) ( 0  −2 ) ( 1  −3 )

    Thus, as in Eqn.(3.18) above for Fibonacci:

    u_n = A u_{n−1} = A^n u_0    (3.29)

    which may be shown to be:

    u_n = (1/5) ( 3   2 ) ( 3^n  0      ) ( 1   2 ) u_0    (3.30)
                ( 1  −1 ) ( 0    (−2)^n ) ( 1  −3 )

    which reduces to

    u_n = (1/5) [ 3(3^n) + 2(−2)^n ]    (3.31)
                [ 3^n − (−2)^n     ]

    for an initial population u_0 = (1, 0)^T (i.e. one breeding pair).
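The closed form of Eqn.(3.31) can be checked against direct iteration of Eqn.(3.25); a minimal sketch (function names illustrative):

```python
def pigs_iter(n):
    """Iterate u_{k+1} = [[1, 6], [1, 0]] u_k from u_0 = (1, 0) (Eqn. 3.25)."""
    u = (1, 0)
    for _ in range(n):
        u = (u[0] + 6 * u[1], u[0])
    return u

def pigs_closed(n):
    """Closed form, Eqn.(3.31); integer division is exact here."""
    return ((3 * 3 ** n + 2 * (-2) ** n) // 5,
            (3 ** n - (-2) ** n) // 5)

print(pigs_iter(4), pigs_closed(4))  # (55, 13) (55, 13)
```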

  • Example 3.3: An Example Closer to Home...

    Example 3.3: CA659 Class Size
    Every year the size of the class CA659 seems to increase...
    This year’s numbers are about 90% up on last year’s (a fairly
    standard increase).
    Also in the mix would be repeating students (i.e. failed last year
    or deferred).

    – This latter group would typically number 20% of the previous
      year’s cohort.
    So, following Eqn.(3.3), if the number in year n + 1 is given by
    M_{n+1}:

    M_{n+1} = 1.9 M_n + 0.2 M_{n−1}.    (3.32)

    With M_0 = 0, M_1 = 10.
    What would the future hold if this were to continue?

  • Example 3.3: An Example Closer to Home... (/2)

    Hence

    1.9 M_{n+1} + 0.2 M_n = M_{n+2}
    M_{n+1}               = M_{n+1}    (3.33)

    which, by writing (as per Rabbit Reproduction above)

    u_n = ( M_{n+1} )
          ( M_n     )

    looks like:

    u_{n+1} = ( 1.9  0.2 ) u_n,  with u_0 = ( 10 )    (3.34)
              ( 1    0   )                  ( 0  )

    ∴ u_1 = ( 19 ),  u_2 = ( 38.1 )  etc.
            ( 10 )         ( 19   )

  • Example 3.3: An Example Closer to Home... (/3)

    As with Eqn.(3.5) & Eqn.(3.25), derive an expression for the
    number in the (n+1)th delivery w.r.t. the nth:

    u_{n+1} = ( 1.9  0.2 ) u_n.    (3.35)
              ( 1    0   )

    again, (from Eqn.(3.12)) this leads to the eigenproblem:

    det(A − λI) = 0 ≡ | 1.9 − λ   0.2   | = 0    (3.36)
                      | 1         0 − λ |

    yielding the characteristic equation:

    λ² − 1.9λ − 0.2 = 0    (3.37)

    with λ1 = −0.1 and λ2 = 2.

  • Example 3.3: An Example Closer to Home... (/4)

    Thus, using Eqn.(3.17) above,

    A³ = (S Λ S⁻¹)(S Λ S⁻¹)(S Λ S⁻¹) = S Λ³ S⁻¹,    (3.38)

    the full eigendecomposition can then be found to be:

    A = (1/21) ( 2  −0.1 ) ( 2  0    ) (  10   1  )    (3.39)
               ( 1   1   ) ( 0  −0.1 ) ( −10   20 )

    So, as in Eqn.(3.18) above for the Fibonacci example:

    u_n = A u_{n−1} = A^n u_0    (3.40)

  • Example 3.3: An Example Closer to Home... (/5)

    Which may be shown to be:

    u_n = (1/21) ( 2  −0.1 ) ( 2^n  0        ) (  10   1  ) u_0
                 ( 1   1   ) ( 0    (−0.1)^n ) ( −10   20 )

    simplifying to

    u_n ≈ 4.76 [ 2^{n+1} − (−0.1)^{n+1} ]    (3.41)
               [ 2^n − (−0.1)^n         ]

    for an initial vector of u_0 = (10, 0)^T; this gives u_1 ≈ ( 19 ) etc.
                                                              ( 10 )
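Eqn.(3.41) can again be checked against direct iteration of Eqn.(3.34); a minimal sketch (function names illustrative; 10/2.1 is the exact prefactor that the slides round to 4.76):

```python
def class_iter(n):
    """Iterate u_{k+1} = [[1.9, 0.2], [1, 0]] u_k from u_0 = (10, 0) (Eqn. 3.34)."""
    u = (10.0, 0.0)
    for _ in range(n):
        u = (1.9 * u[0] + 0.2 * u[1], u[0])
    return u

def class_closed(n):
    """Closed form, Eqn.(3.41), with the exact prefactor 10/2.1."""
    c = 10.0 / 2.1
    return (c * (2 ** (n + 1) - (-0.1) ** (n + 1)),
            c * (2 ** n - (-0.1) ** n))

print([round(v, 1) for v in class_iter(2)])    # [38.1, 19.0]
print([round(v, 1) for v in class_closed(2)])  # [38.1, 19.0]
```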

  • The Leslie Matrix

    The Leslie matrix is a generalization of the above.
    It describes annual increases in various age categories of
    a population.
    As above we write p_{n+1} = A p_n where p_n, A are given by:

    p_n = ( p1n )        ( α1  α2  ...  αm−1  αm )
          ( p2n )        ( σ1  0   ...  0     0  )
          (  ⋮  ) ,  A = ( 0   σ2   ⋱   ⋮     ⋮  )    (3.42)
          ( pmn )        ( 0   0   ...  σm−1  0  )

    αi, σi are the number of births in age class i in year n & the
    probability that i year-olds survive to i + 1 years old, respectively.

  • The Leslie Matrix (/2)

    Long-term population demographics are found as with Eqn.(3.21),
    using the λi of A in Eqn.(3.42)
    & det(A − λI) = 0 to give the Leslie characteristic equation:

    λ^n − α1 λ^{n−1} − α2 σ1 λ^{n−2} − α3 σ1 σ2 λ^{n−3} − ...
        − αn ∏_{i=1}^{n−1} σi = 0    (3.43)

    αi, σi are births in age class i in year n & the fraction of i
    year-olds that live to i + 1 years old, respectively.

  • The Leslie Matrix (/3)

    Eqn.(3.43) has one positive eigenvalue λ* & corresponding
    eigenvector v*.
    For a general solution like Eqn.(3.19),

    P_n = c1 λ1^n v1 + c2 λ2^n v2 + ... + cm λm^n vm,

    where all terms after the first → 0, the dominant eigenvalue
    λ1 = λ* gives the long-term solution:

    P_n ≈ c1 λ1^n v1    (3.44)

    with stable age distribution v1 = v*. The relative magnitudes of
    its elements give the stable state proportions.

  • The Leslie Matrix (/4): Example 3.4

    Example 3.4: Leslie Matrix for a Salmon Population
    Salmon have 3 age classes & females in the 2nd & 3rd produce
    4 & 3 offspring respectively, each season.
    Suppose 50% of females in the 1st age class survive to the 2nd age
    class & 25% of females in the 2nd age class live on into the 3rd.
    The Leslie Matrix (c.f. Eqn.(3.42)) for this population is:

    A = ( 0    4     3 )
        ( 0.5  0     0 )    (3.45)
        ( 0    0.25  0 )

    Fig. 3.4 shows the growth of age classes in the population.

  • Leslie Matrix (/5): Example 3.4

    Example 3.4: Leslie Matrix for a Salmon Population

    FIGURE 3.4: Growth of Salmon Age Classes

  • The Leslie Matrix (/6): Example 3.4

    Example 3.4: Leslie Matrix for a Salmon Population
    The eigenvalues of the Leslie matrix may be shown to be

    Λ = ( λ1  0   0  )   ( 1.5  0       0      )
        ( 0   λ2  0  ) = ( 0    −1.309  0      )    (3.46)
        ( 0   0   λ3 )   ( 0    0       −0.191 )

    and the eigenvector matrix S to be given by

    S = ( 0.9474   0.9320   0.2259 )
        ( 0.3158  −0.356   −0.591  )    (3.47)
        ( 0.0526   0.0680   0.7741 )

    Dominant e-vector: (0.9474, 0.3158, 0.0526)^T, can be
    normalized (divide by its sum) to (0.72, 0.24, 0.04)^T.

  • The Leslie Matrix (/7): Example 3.4

    Example 3.4: Leslie Matrix for a Salmon Population cont’d
    Long-term, 72% of pop’n are in the 1st age class, 24% in the 2nd
    and 4% in the 3rd.
    Thus, due to the principal e-value λ1 = 1.5, the population
    increases.
    Can verify by taking any initial age distribution &
    repeatedly multiplying it by A.
    It always converges to the proportions above.
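That verification is easy to script; a minimal sketch that iterates Eqn.(3.45) from an arbitrary starting distribution and normalizes (function name and starting vector are illustrative):

```python
def leslie_step(p):
    """One season of the salmon Leslie model, Eqn.(3.45)."""
    p1, p2, p3 = p
    return (4 * p2 + 3 * p3,  # births from age classes 2 & 3
            0.5 * p1,         # 50% of class 1 survive to class 2
            0.25 * p2)        # 25% of class 2 survive to class 3

p = (100.0, 0.0, 0.0)  # an arbitrary initial age distribution
for _ in range(60):
    p = leslie_step(p)
total = sum(p)
print([round(x / total, 2) for x in p])  # [0.72, 0.24, 0.04]
```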

  • The Leslie Matrix (/8)

    A side note on matrices similar to the Leslie matrix.
    Any lower diagonal matrix of the form

    ( 0  0  0 )
    ( 1  0  0 )    (3.48)
    ( 0  1  0 )

    can ‘move’ a vector of age classes forward by 1 generation, e.g.

    ( 0  0  0 ) ( a )   ( 0 )
    ( 1  0  0 ) ( b ) = ( a )    (3.49)
    ( 0  1  0 ) ( c )   ( b )

  • Stability in Difference Equations

    If a difference equation system has the form u_n = A u_{n−1},
    then growth as n → ∞ depends on the λi thus:

    If all eigenvalues satisfy |λi| < 1, the system is stable &
    u_n → 0 as n → ∞.

    Whenever all eigenvalues satisfy |λi| ≤ 1, the system is neutrally
    stable & u_n is bounded as n → ∞.

    Whenever at least one eigenvalue satisfies |λi| > 1, the system is
    unstable & u_n is unbounded as n → ∞.

  • Markov Processes

    Often with difference equations we don’t have certainties of events,
    but probabilities.
    So with the Leslie Matrix Eqn.(3.42):

    p_n = ( p1n )        ( α1  α2  ...  αm−1  αm )
          ( p2n )        ( σ1  0   ...  0     0  )
          (  ⋮  ) ,  A = ( 0   σ2   ⋱   ⋮     ⋮  )    (3.50)
          ( pmn )        ( 0   0   ...  σm−1  0  )

    σi is the probability that i year-olds survive to i + 1 years old.
    The Leslie model resembles a discrete-time Markov chain.

    Markov chain: a discrete random process with the Markov property.
    Markov property: the state at t_{n+1} depends only on that at t_n.

    The difference between the Leslie model & a Markov model is:
    In a Markov model αm + σm must = 1 for each m.
    The Leslie model may have these sums ≠ 1.

  • Markov Processes (/2)

    Stochastic Processes
    A Markov Process is a particular case of a Stochastic⁵ Process.

    A Markov Process is a Stochastic Process where the probability to
    enter a state depends only on the last state & on the governing
    Transition matrix.

    If the Transition Matrix has terms constant between subsequent
    timesteps, the process is Stationary.

    FIGURE 3.5: General Case of a Markov Process © Max Heimel, TÜ Berlin

    ⁵One where probabilities govern entering a state

  • Markov Processes (/3)

    The general form of a discrete-time Markov chain is given by:

    u_{n+1} = M u_n

    where u_n, M are given by:

    u_n = ( u1n )        ( m11  m12  ...  m1p )
          ( u2n )        ( m21  m22  ...  m2p )
          (  ⋮  ) ,  M = (  ⋮    ⋮    ⋱    ⋮  )    (3.51)
          ( upn )        ( mp1  mp2  ...  mpp )

    M is the p × p Transition matrix & its terms mij are called
    Transition probabilities, such that ∑_{i=1}^{p} mij = 1.

    mij is the probability that an item goes from state j at t_n to
    state i at t_{n+1}.

  • Markov Processes (/4): Example 3.5

    Example 3.5: Two Tree Forest Ecosystem
    In a forest there are only two kinds of trees: oaks and cedars.
    At any time n the sample space of possible outcomes is (O, C).
    Here O = % of the tree population that is oak in a particular year
    and C = % that is cedar.
    Trees have the same life spans & on death there is the same chance
    an oak is replaced by an oak or a cedar.
    But cedars are more likely (p = 0.74) to be replaced by an
    oak than by another cedar (p = 0.26).
    How can we track changes in the different tree types with time?

  • Markov Processes (/5): Example 3.5

    Example 3.5: Two Tree Forest Ecosystem
    This is a Markov Process as the oak/cedar fractions at t_{n+1} are
    defined by those at t_n.
    The Transition Matrix (from Eqn.(3.51)) is Table 3.1:

                 From
                 Oak    Cedar
    To   Oak     0.5    0.74
         Cedar   0.5    0.26

    TABLE 3.1: Tree Transition Matrix

    Table 3.1 in matrix form is:

    M = ( 0.5  0.74 )    (3.52)
        ( 0.5  0.26 )

  • Markov Processes (/6): Example 3.5

    Example 3.5: Two Tree Forest Ecosystem
    To track system changes, let u_n = (o_n, c_n)^T be the probability
    of oak & cedar after n generations.
    If the forest is initially 50% oak and 50% cedar, then
    u_0 = (0.5, 0.5)^T. Hence

    u_n = M u_{n−1} = M^n u_0    (3.53)

    M can be shown to have largest eigenvalue 1, with corresponding
    eigenvector (0.597, 0.403)^T.

    This is the long-run distribution of oaks and cedars.
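Iterating Eqn.(3.53) from the 50/50 start converges quickly to that eigenvector; a minimal sketch (function name illustrative):

```python
def forest_step(u):
    """One generation of u_{n+1} = M u_n with M from Eqn.(3.52)."""
    oak, cedar = u
    return (0.5 * oak + 0.74 * cedar,
            0.5 * oak + 0.26 * cedar)

u = (0.5, 0.5)  # 50% oak, 50% cedar initially
for _ in range(50):
    u = forest_step(u)
print(round(u[0], 3), round(u[1], 3))  # 0.597 0.403
```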

  • Markov Processes (/7): Example 3.6

    Example 3.6: Soft Drink Market Share
    In a soft drinks market there are two brands: Coke & Pepsi.
    At any time n the sample space of possible outcomes is (P, C).
    Here P = % market share that is Pepsi’s in one year and
    C = % that is Coke’s.
    Know that the chance of switching from Coke to Pepsi is 0.1
    and the chance of someone switching from Pepsi to Coke is 0.3.
    How can the changes in the different proportions be modelled?

  • Markov Processes (/8): Example 3.6

    Example 3.6: Soft Drink Market Share
    This is a Markov Process as the shares of Coke/Pepsi at t_{n+1} are
    defined by those at t_n.
    The Transition Matrix (from Eqn.(3.51)) is Table 3.2:

                 From
                 Coke   Pepsi
    To   Coke    0.9    0.3
         Pepsi   0.1    0.7

    TABLE 3.2: Soft Drink Market Share Matrix

    Table 3.2 in matrix form:

    M = ( 0.9  0.3 )    (3.54)
        ( 0.1  0.7 )

  • Markov Processes (/9): Example 3.6

    Example 3.6 can also be represented using a Transition
    Diagram, thus:

    FIGURE 3.6: Market Share Transition Diagram

    The eigenvalues of the matrix in Eqn.(3.54) are 1 and 3/5.
    The eigenvector for the largest eigenvalue can be found to be
    (0.75, 0.25)^T.
    These are the long-run proportions of Coke and Pepsi.

  • Markov Processes (/10): Markov States

    Absorbing States

    A state i of a Markov Process is said to be absorbing or Trapping
    if Mii = 1 and Mji = 0 ∀ j ≠ i.

    Absorbing Markov Chain

    A Markov Chain is absorbing if it has one or more absorbing
    states. If it has one absorbing state (for instance state i), then
    the steady state is given by the eigenvector X where Xi = 1
    and Xj = 0 ∀ j ≠ i.

  • Markov Processes (/11): Example 3.6

    Example 3.6: Soft Drink Market Share, Revisited
    As the Soft Drinks market is ‘liquid’, KulKola decides to trial
    product Brand ‘X’.
    Despite its name, Brand ‘X’ has the potential⁶ to ‘Shift the
    Paradigm’ in Cola consumption.
    They think, inside 5 years, they can capture nearly all the market.
    Investigate if this is true, given that they take 20% of Coke’s
    share and 30% of Pepsi’s per annum.

    ⁶from KulKola’s Marketing viewpoint

  • Markov Processes (/11): Example 3.6

    Example 3.6: Soft Drink Market Share
    Again, the shares of Coke/Pepsi/Brand ‘X’ at n + 1 are defined by
    those at n.
    The Transition Matrix (from Eqn.(3.51)) is Table 3.3:

                     From
                     Coke   Pepsi   Brand ‘X’
    To   Coke        0.7    0.3     0
         Pepsi       0.1    0.4     0
         Brand ‘X’   0.2    0.3     1

    TABLE 3.3: Soft Drink Market Share Matrix Revisited

    Table 3.3 in matrix form:

    M = ( 0.7  0.3  0 )
        ( 0.1  0.4  0 )    (3.55)
        ( 0.2  0.3  1 )

  • Markov Processes (/12): Example 3.6

    The Transition Diagram for Example 3.6 Revisited is:

    FIGURE 3.7: Market Share Transition Diagram

    λmax of the matrix in Eqn.(3.55) is 1.
    vmax is (0, 0, 1)^T, giving the long-run shares of Coke, Pepsi and
    Brand ‘X’, respectively.
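The 5-year claim can be checked by iterating Eqn.(3.55) directly; a minimal sketch (function name and the choice of the old Coke/Pepsi steady state as starting vector are illustrative). It suggests Brand ‘X’ holds roughly 72% of the market after 5 years, short of ‘nearly all’, though the absorbing state (0, 0, 1)^T is the eventual limit:

```python
def cola_step(u):
    """u_{n+1} = M u_n with M from Eqn.(3.55); Brand 'X' is absorbing."""
    coke, pepsi, x = u
    return (0.7 * coke + 0.3 * pepsi,
            0.1 * coke + 0.4 * pepsi,
            0.2 * coke + 0.3 * pepsi + x)  # nothing ever leaves Brand 'X'

u = (0.75, 0.25, 0.0)  # start from the old Coke/Pepsi steady state
for _ in range(5):
    u = cola_step(u)
print([round(s, 3) for s in u])  # [0.225, 0.06, 0.715]
```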

  • Markov Processes (/13): Hidden Markov Models

    Markov Models have a visible state.
    So the transition probabilities & matrix are observable.
    In Hidden Markov Models the visibility restriction is relaxed.
    The transition probabilities are generally not known.
    Possibly the observer sees the underlying variable through a noise
    layer.

    FIGURE 3.8: Hidden Markov Process © Max Heimel, TÜ Berlin

    Introduction & Basics
        Intro to the Topic

    Time Series Modelling
        Time Series
        An Intro to Time Series Analysis
        Trends in Time Series
        Time Series Decomposition
        Advanced Models of Univariate Time Series
        Forecasting in Univariate Time Series
        Summary of Time Series

    Discrete Models
        Discrete Models
        Linear Difference Equations
        Non-Linear Models with Applications

    Growth & Decay
        Growth and Decay
        Introduction & Simple Models
        Logistic Growth Models

    Linear & Non-Linear Interaction Models
        Linear & Non-Linear Interaction Models
        Linear Models of Interaction
        Non-Linear Models
        Aside: Phase Plane Modelling
        Non-Linear Interaction Models