
Monopoly – An Analysis using Markov Chains

Benjamin Bernard

Columbia University, New York

PRIME event at Pace University, New York

April 19, 2017

Outline: Introduction · Distribution after the first turn · Steady state distribution · Monopoly as a Markov chain · Implications for strategy

Introduction


Applications of Markov chains

Applications of Markov chains:

Equilibrium distribution in networks: page rank in search engines,

Speech analysis: voice recognition, word predictions on keyboard,

Agriculture: population growth and harvesting,

Research in related fields: modelling of random systems in physics and finance, sampling from arbitrary distributions,

Markov decision processes in artificial intelligence: sequential decision problems under uncertainty, reinforcement learning,

Games: compute intricate scenarios in a fairly simple way.

Skills required to use Markov chains:

Knowledge of the system we are modelling,

Some basic programming skills.

Monopoly – An Analysis using Markov Chains Benjamin Bernard 1 / 20


Which squares are the best?

Markov chains allow us to:

Calculate the probability distribution of pieces on the board that arises after sufficiently long play.

Compute expected income/return of properties and their developments.

Evaluate the quality of a trade by its impact on the ruin probability.


Distribution after the first turn


Rules

Roll dice to advance on board.

If you roll 3 doubles in a row → jail.

Properties can be bought or traded. Visitors pay “rent”.

If all properties of one color are owned, they can be developed for a substantial increase of rent.

Players who cannot afford rent are eliminated. Last remaining player wins.


Probability distribution of the sum of two dice

Each ordered combination of dice has a probability of 1/6 · 1/6 = 1/36.

2 is attained by (1,1),

3 is attained by (1,2) and (2,1),

4 is attained by (1,3), (3,1), and (2,2),

5 is attained by (1,4), (4,1), (2,3), and (3,2), and so on.

(The slide shows dice-face icons and a histogram over the possible sums 2–12, peaking at 7 with probability 6/36.)
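The distribution above can be reproduced with a short enumeration; this sketch (not part of the original slides) counts ordered dice pairs with exact fractions:

```python
from collections import Counter
from fractions import Fraction

# Count ordered pairs (a, b) of two fair dice; each pair has probability 1/36.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
dist = {s: Fraction(counts[s], 36) for s in sorted(counts)}

for s, p in dist.items():
    print(f"{s:2d}: {p}")  # e.g. the sum 7 has probability 6/36 = 1/6
```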


Advancing on the Monopoly board

Three possible ways of moving your piece:

Rolling the dice,

Being sent to jail,

Community chest and chance cards.


Steady state distribution


Finite state Markov chain

Example chain on four states A, B, C, D: from A, the chain moves to B with probability q and to C with probability 1 − q; from B, to A with probability p and to D with probability 1 − p; from C, to B with probability 1; D is absorbing. With states ordered (A, B, C, D), the transition matrix is

P = ( 0      p      0  0 )
    ( q      0      1  0 )
    ( 1 − q  0      0  0 )
    ( 0      1 − p  0  1 )

A Markov chain transitions randomly between several possible states:

The transition probabilities out of each state sum to 1 (each column of P sums to 1).

Representation by a matrix P = (pij): element pij in row i and column j is the probability to transition from state j to state i.

The n-step transition probabilities are given by Pⁿ (matrix multiplication).


Example: Initial state is A

The initial distribution equals π0 = (1, 0, 0, 0)ᵀ.

After one step, the distribution equals π1 = (0, q, 1 − q, 0)ᵀ. This can also be obtained by matrix multiplication: π1 = P · π0.

After two steps: π2 = P · π1 = P² · π0 = (qp, 1 − q, 0, q(1 − p))ᵀ.
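These vectors can be checked numerically; the sketch below picks arbitrary illustrative values for p and q (the slide keeps them symbolic):

```python
import numpy as np

p, q = 0.3, 0.6  # arbitrary values for illustration

# Column-stochastic transition matrix, states ordered (A, B, C, D):
# entry (i, j) is the probability of moving from state j to state i.
P = np.array([
    [0.0,     p, 0.0, 0.0],
    [q,     0.0, 1.0, 0.0],
    [1 - q, 0.0, 0.0, 0.0],
    [0.0, 1 - p, 0.0, 1.0],
])

pi0 = np.array([1.0, 0.0, 0.0, 0.0])  # start in state A
pi1 = P @ pi0                         # (0, q, 1-q, 0)
pi2 = P @ pi1                         # (q*p, 1-q, 0, q*(1-p))
print(pi1, pi2)
```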


Convergence of distribution?

Does there exist a limiting distribution π∞ = lim_{n→∞} Pⁿ · π0?

If such a distribution exists, it has to satisfy π∞ = P · π∞. That is, it is a so-called steady state distribution.

Theorem

If a steady state distribution exists, it is unique.


Steady state distribution

A Markov chain is called:

irreducible if any state is reachable from any other state,

positive recurrent if the chain returns to any state in finite time with probability 1,

periodic with period k if a return to the same state is possible only in multiples of k steps,

aperiodic if it is not periodic.

Theorem

An irreducible Markov chain with transition matrix P has a unique steady state distribution π with π = Pπ if and only if it is positive recurrent. Moreover, if the Markov chain is aperiodic, then π = lim_{n→∞} Pⁿπ0 for any initial distribution π0.
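The second part of the theorem suggests a practical recipe: iterate π ↦ Pπ until it stops changing. A minimal sketch on a small chain of my own choosing — the A–B–C–D example is not irreducible (D is absorbing), so a different matrix is used here:

```python
import numpy as np

# An arbitrary irreducible, aperiodic 3-state chain (columns sum to 1),
# invented for this sketch.
P = np.array([
    [0.5, 0.2, 0.1],
    [0.3, 0.6, 0.4],
    [0.2, 0.2, 0.5],
])

def steady_state(P, pi0=None, n=1000):
    """Approximate pi = lim P^n pi0 by repeated multiplication."""
    m = P.shape[0]
    pi = np.full(m, 1.0 / m) if pi0 is None else np.asarray(pi0, dtype=float)
    for _ in range(n):
        pi = P @ pi
    return pi

pi = steady_state(P)
print(pi)  # satisfies pi = P @ pi, independently of the starting distribution
```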


Monopoly as a Markov chain


Monopoly as a Markov chain

There are 119 states:

There are 40 distinct squares on the board (being in jail and just visiting jail are distinct).

For each non-jail square, we keep track of how many previous doubles have been rolled → 3 states for each non-jail square.

In jail, we keep track of how many turns the player has already spent in jail → 3 states for jail.

The jail state "been there two turns already" is equivalent to the state "visiting jail without previous doubles" → one fewer state in total.

A more extensive Markov chain would also track the players' cash holdings.
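One possible encoding of these 119 states as indices — an assumption consistent with the counting above, not the author's actual implementation. It treats the "Go to Jail" square as never occupied (a piece landing there moves to jail at once) and folds the third jail turn into "visiting jail, no previous doubles":

```python
GO_TO_JAIL = 30   # a piece never rests here: landing sends it straight to jail
JAIL_SQUARE = 10  # "just visiting" jail counts as an ordinary square

# Non-jail states: (square, previous doubles) for the 39 occupiable squares.
non_jail = [(sq, d) for sq in range(40) if sq != GO_TO_JAIL for d in range(3)]
# Jail states: turns 0 and 1 spent in jail; "two turns already" is identified
# with (JAIL_SQUARE, 0), i.e. visiting jail with no previous doubles.
jail = [("jail", turns) for turns in range(2)]

states = non_jail + jail
index = {s: i for i, s in enumerate(states)}
print(len(states))  # 39*3 + 2 = 119
```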


Decomposition of transition matrix

The transition D due to the dice and the transition C due to the cards are each simple to write down. Because we roll the dice first and then draw cards, it follows that P = C · D.

D = (block matrix of dice probabilities: each entry is a probability k/36 with k ∈ {0, 1, 2, 4, 6}, arranged by the number of previous doubles; the full matrix does not survive extraction)

Initial distribution π0: probability 1 on "Go" without previous doubles, i.e. π0 = (1, 0, 0, …)ᵀ.

Distribution after n rolls: πn = Pⁿ · π0.
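A toy version of the decomposition P = C · D, on an invented 6-square circular board with a single three-sided die and one card square, just to illustrate the order of multiplication and the column-stochastic check (all numbers are made up for this sketch):

```python
import numpy as np

n = 6                      # invented 6-square circular board
D = np.zeros((n, n))       # dice transition: one 3-sided die, faces 1-3
for j in range(n):
    for roll in (1, 2, 3):
        D[(j + roll) % n, j] += 1 / 3

C = np.eye(n)              # card transition: only square 4 holds cards
C[:, 4] = 0.0
C[0, 4] = 0.5              # card "advance to Go" (square 0)
C[4, 4] = 0.5              # card with no movement

P = C @ D                  # roll dice first, then draw a card
print(P.sum(axis=0))       # every column still sums to 1
```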


Distribution after several moves

(A sequence of board heat maps, one per move count from 1 through 8, shows how the distribution of pieces spreads over the board; the figures themselves are not recoverable from the extracted text.)

Introduction Distribution after first turn Steady state distribution Monopoly as a Markov chain Implications for strategy

Distribution after several moves

Move count: 9

Monopoly – An Analysis using Markov Chains Benjamin Bernard 12 / 20

Page 48: Monopoly An Analysis using Markov Chainscarlabernard.ch/beni/downloads/bernard_monopoly.pdf · 2018. 7. 2. · Monopoly { An Analysis using Markov Chains Benjamin Bernard 9/20. IntroductionDistribution

Introduction Distribution after first turn Steady state distribution Monopoly as a Markov chain Implications for strategy

Distribution after several moves

Move count: 10

Monopoly – An Analysis using Markov Chains Benjamin Bernard 12 / 20


Steady state distribution

Move count: ∞
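The steady state π∞ satisfies P · π∞ = π∞, i.e. it is the eigenvector of P for eigenvalue 1, normalized to sum to 1. A minimal sketch, again on a small stand-in matrix rather than the real Monopoly chain:

```python
import numpy as np

# Toy column-stochastic transition matrix (stand-in for the Monopoly chain).
P = np.array([
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
])

# The steady state is the eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P)
k = np.argmin(np.abs(eigvals - 1.0))   # index of the eigenvalue closest to 1
pi_inf = np.real(eigvecs[:, k])
pi_inf /= pi_inf.sum()                 # normalize to a probability distribution

print(pi_inf)  # uniform for this doubly stochastic example
```

For an irreducible, aperiodic chain this eigenvector is unique, which is exactly the theorem condition the last slide warns about checking.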



Implications for strategy


Expected income per turn on empty property¹

(Board chart of expected income per square, scale 1–55; figure not reproduced.)

Expected income = rent × probability

¹ Railroads and utilities: “empty” means only a single property of that kind is owned.
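The formula is a one-liner per square: multiply the rent by the steady-state probability that an opponent lands there. The rents and probabilities below are illustrative placeholders, not values computed in these slides.

```python
# Expected income per opponent turn = rent × steady-state landing probability.
# All numbers are hypothetical placeholders for illustration.
rent = {"Mediterranean Avenue": 2, "Illinois Avenue": 20, "Boardwalk": 50}
landing_prob = {"Mediterranean Avenue": 0.021, "Illinois Avenue": 0.030,
                "Boardwalk": 0.026}

expected_income = {sq: rent[sq] * landing_prob[sq] for sq in rent}
best_square = max(expected_income, key=expected_income.get)
```

Note that a high landing probability cannot compensate for a very low rent, which is why the cheap squares rank poorly even when they are visited often.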


Expected income per turn for fully developed property²

(Board chart of expected income per fully developed square, scale 1–55; figure not reproduced.)

Expected income = rent × probability

² Railroads and utilities: “fully developed” means all properties of that kind are owned.


Expected return per turn for fully developed property³

(Board chart of expected return per fully developed square; figure not reproduced.)

Expected return = (rent × probability) / (cost of purchase and development)

³ Railroads and utilities: “fully developed” means all properties of that kind are owned.
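Dividing the expected income by the total investment turns the dollar figure into a return per opponent turn, which makes squares with very different price levels comparable. A worked instance with hypothetical figures, not the values behind the slide's chart:

```python
# Expected return per opponent turn = (rent × landing probability) / cost.
# All figures below are hypothetical placeholders.
rent = 1000.0          # rent with full development (hypothetical)
landing_prob = 0.03    # steady-state landing probability per opponent turn
total_cost = 1500.0    # purchase price plus development cost (hypothetical)

expected_return = rent * landing_prob / total_cost
print(f"expected return per opponent turn: {expected_return:.1%}")
```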


Expected income/return per turn and color

Income: 97.1  87.7  87.7  80.9  80.6  57.5  36.9  14.2
Return: 3.9%  3.5%  3.0%  3.0%  2.9%  2.9%  2.5%  2.3%

(One column per color group, ordered from best to worst; the color labels belonged to the chart and are not reproduced.)

After fully developing the orange and green properties, it takes an expected 6.41 and 10 rounds, respectively, to earn back the costs in a 5-player game.
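The 6.41- and 10-round figures are consistent with the per-turn returns above (3.9% for orange, 2.5% for green) under the reading that a property is exposed to four opponent turns per round in a 5-player game:

```python
# Return per round = opponents × return per opponent turn; its reciprocal is
# the expected number of rounds needed to earn back the investment.
opponents = 4  # a 5-player game exposes a property to 4 opponent turns/round
return_per_turn = {"orange": 0.039, "green": 0.025}  # 3.9% and 2.5%

payback_rounds = {color: 1 / (opponents * r)
                  for color, r in return_per_turn.items()}
print(payback_rounds)  # orange ≈ 6.41 rounds, green = 10 rounds
```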


Expected return per color and stage of development

(Chart of expected return by color for development stages 0 through 5; figure not reproduced.)


Quality of trades

Extend the Markov chain by adding the players’ capital to the state space:

If player A lands on a property of player B, rent is transferred.

Zero capital is an absorbing state → the player is eliminated.

Summing up the probabilities of all states where player A’s capital is 0 gives player A’s ruin probability.

A trade is “good” if it reduces one’s ruin probability.

Difficulty: Depends on players’ future actions → Markov decision process.

Assume optimal play in the future:

Trades are executed only if they are beneficial for all parties involved in the trade → no trades if there are only two players left.

Players develop the properties that minimize their ruin probability.
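Ruin probabilities in such an extended chain follow from standard absorbing-chain algebra: with transient-to-transient block Q and transient-to-absorbing block R, the absorption probabilities are B = (I − Q)⁻¹ R. A minimal sketch on a toy gambler's-ruin chain (capital levels 0 through 4 with symmetric ±1 rent transfers), a stand-in for the real Monopoly state space:

```python
import numpy as np

# Toy chain: capital 0 is absorbing ("eliminated"), capital 4 is an absorbing
# "safe" state, and from 1..3 capital moves up or down by 1 with prob. 1/2.
# Rows index the current state here (row-stochastic convention).
p_up = 0.5
transient = [1, 2, 3]
Q = np.array([[0.0, p_up, 0.0],          # transitions among transient states
              [1 - p_up, 0.0, p_up],
              [0.0, 1 - p_up, 0.0]])
R = np.array([[1 - p_up, 0.0],           # transient -> absorbing (ruin, safe)
              [0.0, 0.0],
              [0.0, p_up]])

# Absorption probabilities B = (I - Q)^(-1) R; column 0 is the ruin column.
B = np.linalg.solve(np.eye(len(transient)) - Q, R)
ruin_prob = dict(zip(transient, B[:, 0]))
print(ruin_prob)  # ruin probability falls as starting capital rises
```

The same computation applies once capital is part of the state, only with a vastly larger Q; evaluating a trade then amounts to comparing one's ruin probability before and after the exchange.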


Conclusions

Monopoly:

Aim to obtain all orange or light blue properties at the beginning of the game. The next best choices are red and yellow.

Develop these properties up to the third house before investing in another property.

Aim to hold two color groups among orange, red, yellow, green, or dark blue for the end game.

Markov chains:

Math is awesome.

This is not a simulation: calculated probabilities are exact.

You don’t have to be a mathematician to use Markov chains.

Make sure you check the conditions of any theorems you use.
