GIBBONS & KUMAR - The Unofficial Solution Manual to a Primer in Game Theory - Unfinished Draft



This version is an unreleased and unfinished manuscript.

The author can be reached at [email protected]

Last Updated: January 20, 2013

Typeset using LaTeX and the Tufte book class.

This work is not subject to copyright. Feel free to reproduce, distribute or falsely claim authorship of it in part or whole.


This is strictly a beta version. Two thirds of it are missing and there are errors aplenty. You have been warned.

On a more positive note, if you do find an error, please email me at [email protected], or tell me in person.

- Navin Kumar


    Static Games of Complete Information

    Answer 1.1 See text.

Answer 1.2 B is strictly dominated by T. C is now strictly dominated by R. The strategies (T, M) and (L, R) survive the iterated elimination of strictly dominated strategies. The Nash Equilibria are (T, R) and (M, L).
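The elimination-and-check logic above is easy to mechanize. The sketch below is my addition, not part of the manual: it brute-forces the pure-strategy Nash equilibria of a two-player bimatrix game. The payoff entries are made-up placeholders, since the actual matrix for this problem is in Gibbons' text.

```python
# Brute-force pure-strategy Nash equilibrium finder for a 2-player bimatrix game.
# The payoffs below are illustrative placeholders, not the matrix from Problem 1.2.
ROWS, COLS = ["T", "M"], ["L", "R"]
payoff = {                      # payoff[(row, col)] = (u1, u2)
    ("T", "L"): (3, 1), ("T", "R"): (1, 2),
    ("M", "L"): (2, 3), ("M", "R"): (0, 0),
}

def pure_nash(payoff, rows, cols):
    eq = []
    for r in rows:
        for c in cols:
            u1, u2 = payoff[(r, c)]
            row_best = all(payoff[(r2, c)][0] <= u1 for r2 in rows)  # no profitable row deviation
            col_best = all(payoff[(r, c2)][1] <= u2 for c2 in cols)  # no profitable column deviation
            if row_best and col_best:
                eq.append((r, c))
    return eq

print(pure_nash(payoff, ROWS, COLS))
```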

Answer 1.3 For whatever value Individual 1 chooses (denoted by $s_1$), Individual 2's best response is $s_2 = B_2(s_1) = 1 - s_1$. Conversely, $s_1 = B_1(s_2) = 1 - s_2$. We know this because if $s_2 < 1 - s_1$, there is money left on the table and Individual 2 could increase his or her payoff by asking for more. If, however, $s_2 > 1 - s_1$, Individual 2 earns nothing and can increase his payoff by reducing his demand sufficiently. Thus the Nash Equilibria are the demand pairs with

$$s_1 + s_2 = 1.$$

Answer 1.4 The market price of the commodity is determined by the formula $P = a - Q$, in which $Q = q_1 + \dots + q_n$. The cost for an individual company is given by $C_i = c \cdot q_i$. The profit made by a single firm is

$$\pi_i = (P - c) \cdot q_i = (a - Q - c) \cdot q_i = (a - q_1^* - \dots - q_n^* - c) \cdot q_i,$$

where $q_j^*$ is the profit-maximizing quantity produced by firm $j$ in equilibrium. This profit is maximized at

$$\frac{d\pi_i}{dq_i} = (a - q_1^* - \dots - q_i^* - \dots - q_n^* - c) - q_i^* = 0$$
$$\Rightarrow a - q_1^* - \dots - 2 q_i^* - \dots - q_n^* - c = 0 \Rightarrow a - c = q_1^* + \dots + 2 q_i^* + \dots + q_n^*$$

for all $i = 1, \dots, n$. We could solve this system using matrices and Cramer's rule, but a simpler method is to observe that since all firms are symmetric, their equilibrium quantities will be the same, i.e.

$$q_1^* = q_2^* = \dots = q_i^* = \dots = q_n^*,$$

which means the preceding equation becomes

$$a - c = (n + 1)\, q_i^* \Rightarrow q_i^* = \frac{a - c}{n + 1}.$$


    A similar argument applies to all other firms.
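The first-order conditions above form exactly the linear system the text alludes to ("matrices and Cramer's rule"). As a numerical sanity check, the sketch below (my addition; the values of a, c and n are arbitrary) solves that system directly and compares the result with $(a-c)/(n+1)$.

```python
# Solve the first-order conditions a - c = q_1 + ... + 2*q_i + ... + q_n (for every i)
# as the linear system (I + J) q = (a - c) * 1, and compare with (a - c)/(n + 1).
import numpy as np

a, c, n = 10.0, 2.0, 4                      # arbitrary illustrative parameters
A = np.ones((n, n)) + np.eye(n)             # coefficient matrix: 2 on the diagonal, 1 elsewhere
b = (a - c) * np.ones(n)
q = np.linalg.solve(A, b)
print(q, (a - c) / (n + 1))                 # every entry should equal 1.6
```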

Answer 1.5 Let $q_m$ be the amount produced by a monopolist. Thus, if the two were colluding, they'd each produce

$$q_{1m} = q_{2m} = \frac{q_m}{2} = \frac{a - c}{4}.$$

In such a scenario, the profit earned by Firm 1 (and, symmetrically, Firm 2) is

$$\pi_{1mm} = (P - c) \cdot q_{1m} = (a - Q - c) \cdot q_{1m} = \left(a - \frac{a - c}{2} - c\right) \cdot q_{1m} = \frac{a - c}{2} \cdot \frac{a - c}{4} = \frac{(a - c)^2}{8} \approx 0.13 \cdot (a - c)^2.$$

If both are playing the Cournot equilibrium quantity, then the profit earned by Firm 1 (and Firm 2) is

$$\pi_{1cc} = (P - c) \cdot q_{1c} = \left(a - \frac{2(a - c)}{3} - c\right) \cdot q_{1c} = \left(\frac{a + 2c}{3} - c\right) \cdot q_{1c} = \frac{(a - c)^2}{9} \approx 0.11 \cdot (a - c)^2.$$

(Here the price is determined by $P = a - Q = a - q_{1c} - q_{2c} = a - 2 \cdot \frac{a - c}{3} = \frac{a + 2c}{3}$.)

What if one of the firms (say Firm 1) plays the Cournot quantity and the other plays the monopoly quantity? Firm 1's profits are

$$\pi_{1cm} = (P - c) \cdot q_{1c} = \left(\frac{5a}{12} + \frac{7c}{12} - c\right) \cdot \frac{a - c}{3} = \frac{5}{36} \cdot (a - c)^2 \approx 0.14 \cdot (a - c)^2.$$

(The price is given by $P = a - Q = a - q_{1c} - q_{2m} = a - \frac{a - c}{3} - \frac{a - c}{4} = a - \frac{7}{12}(a - c) = \frac{5a}{12} + \frac{7c}{12}$.)

And Firm 2's profits are

$$\pi_{2cm} = (P - c) \cdot q_{2m} = \left(\frac{5a}{12} + \frac{7c}{12} - c\right) \cdot \frac{a - c}{4} = \frac{5}{48} \cdot (a - c)^2 \approx 0.10 \cdot (a - c)^2.$$

For notational simplicity, let $\alpha \equiv (a - c)^2$. Their profits are reversed when their production is. Thus, the payoffs are:

                         Player 2
                  q_m                q_c
Player 1  q_m     0.13α, 0.13α       0.10α, 0.14α
          q_c     0.14α, 0.10α       0.11α, 0.11α

As you can see, we have a classic Prisoner's Dilemma: regardless of the other firm's choice, each firm maximizes its payoff by choosing to produce the Cournot quantity. Each firm has a strictly dominated strategy ($q_m/2$, the collusive quantity) and they're both worse off in equilibrium (where they make $0.11 \cdot (a - c)^2$ in profits) than they would have


been had they cooperated by producing $q_m$ together (which would have earned them $0.13 \cdot (a - c)^2$).
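The four profit levels in the table can be reproduced numerically. The sketch below is my addition; the values of a and c are arbitrary, and the ratios it prints correspond to the rounded entries 0.13, 0.10, 0.14 and 0.11 above.

```python
# Recompute the collude/Cournot payoff table from pi_1 = (a - q1 - q2 - c) * q1.
a, c = 10.0, 2.0                       # arbitrary illustrative parameters
alpha = (a - c) ** 2
q_m = (a - c) / 4                      # half the monopoly quantity (collusion)
q_c = (a - c) / 3                      # Cournot quantity

def profit(q1, q2):
    return (a - q1 - q2 - c) * q1

for q1 in (q_m, q_c):
    for q2 in (q_m, q_c):
        print(q1 == q_m, q2 == q_m, round(profit(q1, q2) / alpha, 3))
# Ratios printed: 0.125, 0.104, 0.139, 0.111 -- the table's 0.13, 0.10, 0.14, 0.11.
```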

Answer 1.6 Price is determined by $P = a - Q$ and $Q = q_1 + q_2$. Thus the profit of Firm 1 is given by

$$\pi_1 = (P - c_1) \cdot q_1 = (a - Q - c_1) \cdot q_1 = (a - q_1 - q_2 - c_1) \cdot q_1.$$

At the maximum level of profit,

$$\frac{d\pi_1}{dq_1} = (a - c_1 - q_1 - q_2) + q_1 \cdot (-1) = 0 \Rightarrow q_1 = \frac{a - c_1 - q_2}{2}$$

and, by a similar deduction,

$$q_2 = \frac{a - c_2 - q_1}{2}.$$

Plugging the second equation into its predecessor,

$$q_1 = \frac{a - c_1 - \frac{a - c_2 - q_1}{2}}{2} = \frac{a - 2 c_1 + c_2}{3}$$

and, by a similar deduction,

$$q_2 = \frac{a - 2 c_2 + c_1}{3}.$$

Now,

$$2 c_2 > a + c_1 \Rightarrow 0 > a - 2 c_2 + c_1 \Rightarrow 0 > \frac{a - 2 c_2 + c_1}{3} \Rightarrow 0 > q_2 \Rightarrow q_2 = 0,$$

since quantities cannot be negative. Thus, under certain conditions, a sufficient difference in costs can drive one of the firms to shut down.
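A quick numerical check of the closed forms (my addition; the parameter values are arbitrary and chosen so that both quantities are interior):

```python
# Check the asymmetric-cost Cournot solution against the two best-response equations.
a, c1, c2 = 10.0, 2.0, 3.0                      # arbitrary illustrative parameters
q1 = (a - 2 * c1 + c2) / 3
q2 = (a - 2 * c2 + c1) / 3
print(q1, (a - c1 - q2) / 2)                    # both 3.0: q1 is a best response to q2
print(q2, (a - c2 - q1) / 2)                    # both 2.0: q2 is a best response to q1
```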

Answer 1.7 We know that

$$q_i = \begin{cases} a - p_i & \text{if } p_i < p_j, \\ \frac{a - p_i}{2} & \text{if } p_i = p_j, \\ 0 & \text{if } p_i > p_j. \end{cases}$$

We must now prove that $p_i = p_j = c$ is the Nash Equilibrium of this game. To this end, let's consider the alternatives exhaustively.

If $p_i > p_j = c$, then $q_i = 0$ and $\pi_j = 0$. In this scenario, Firm j can increase profits by charging $p_j + \varepsilon$, where $\varepsilon > 0$ and $\varepsilon < p_i - p_j$. Thus this scenario is not a Nash Equilibrium.

If $p_i > p_j > c$, then $q_i = 0$ and $\pi_i = 0$, and Firm i can make positive profits by charging $p_j - \varepsilon > c$. Thus, this cannot be a Nash Equilibrium.

If $p_i = p_j > c$, then $\pi_i = (p_i - c) \cdot \frac{a - p_i}{2}$. Firm i can charge $p_i - \varepsilon$ such that $p_j > p_i - \varepsilon > c$, grab the entire market, and earn a larger profit, provided

$$(p_i - \varepsilon - c)(a - p_i + \varepsilon) > (p_i - c)\,\frac{a - p_i}{2}.$$


Therefore, this is not a Nash Equilibrium.

If $p_i = p_j = c$, then $\pi_i = \pi_j = 0$. Neither firm has any reason to deviate: if Firm i were to reduce $p_i$, $\pi_i$ would become negative. If Firm i were to raise $p_i$, then $q_i = \pi_i = 0$ and it would be no better off. Thus Firm i (and, symmetrically, Firm j) has no incentive to deviate, making this a Nash Equilibrium.
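The no-profitable-deviation argument at $p_i = p_j = c$ can be illustrated on a price grid. This sketch is my addition; the demand intercept and cost are arbitrary values.

```python
# Verify numerically that p_i = p_j = c leaves firm i with no profitable deviation
# in the homogeneous-good Bertrand game with demand a - p.
a, c = 10.0, 2.0

def profit_i(p_i, p_j):
    if p_i < p_j:
        return (p_i - c) * (a - p_i)      # undercut: serve the whole market
    if p_i == p_j:
        return (p_i - c) * (a - p_i) / 2  # tie: split the market
    return 0.0                            # overprice: sell nothing

p_j = c
best = max(profit_i(c + 0.01 * k, p_j) for k in range(-100, 500))
print(profit_i(c, c), best)   # equilibrium profit 0.0; no deviation on the grid does better
```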

Answer 1.8 The share of votes received by a candidate is given by

$$S_i = \begin{cases} x_i + \frac{1}{2}(x_j - x_i) & \text{if } x_i < x_j, \\ \frac{1}{2} & \text{if } x_i = x_j, \\ (1 - x_i) + \frac{1}{2}(x_i - x_j) & \text{if } x_i > x_j. \end{cases}$$

We aim to prove that the Nash Equilibrium is $(\frac{1}{2}, \frac{1}{2})$. Let us exhaustively consider the alternatives.

Suppose $x_i = \frac{1}{2}$ and $x_j > \frac{1}{2}$, i.e. one candidate is a centrist while the other (Candidate j) isn't. In such a case, Candidate j can increase his share of the vote by moving to the left, i.e. reducing $x_j$. If $x_j < \frac{1}{2}$, Candidate j can increase his share of the vote by moving to the right.

Suppose $x_i > x_j > \frac{1}{2}$; Candidate i can gain a larger share by moving to a point between $\frac{1}{2}$ and $x_j$. Thus this is not a Nash Equilibrium.

Suppose $x_i = x_j = \frac{1}{2}$; the shares of i and j are both $\frac{1}{2}$. If Candidate i were to deviate to a point (say) $x_i > \frac{1}{2}$, his share of the vote would decline. Thus $(\frac{1}{2}, \frac{1}{2})$ is the unique Nash Equilibrium. This is the famous Median Voter Theorem, used extensively in the study of politics. It explains why, for example, presidential candidates in the US veer sharply to the center as election day approaches.

    Answer 1.9 See text.

Answer 1.10 (a) Prisoner's Dilemma

                     Player 2
                (p) Mum        (1 - p) Fink
Player 1
(q) Mum         -1, -1         -9, 0
(1 - q) Fink     0, -9         -6, -6

In a mixed strategy equilibrium, Player 1 would choose q such that Player 2 is indifferent between Mum and Fink. The payoffs from playing Mum and Fink must be equal, i.e.

$$-1 \cdot q + (-9)(1 - q) = 0 \cdot q + (-6)(1 - q) \Rightarrow q = 1.5,$$

which is impossible for a probability.


(b)

                     Player 2
                Left (q0)    Middle (q1)    Right (1 - q0 - q1)
Player 1
Up (p)          1, 0         1, 2           0, 1
Down (1 - p)    0, 3         0, 1           2, 0

(Figure 1.1.1.) Here, Player 1 must set p so that Player 2 is indifferent between Left, Middle and Right. The payoffs from Left and Middle, for example, have to be equal, i.e.

$$p \cdot 0 + (1 - p) \cdot 3 = p \cdot 2 + (1 - p) \cdot 1 \Rightarrow p = 0.5$$

Similarly, the payoffs from Middle and Right would have to be equal:

$$2p + 1 \cdot (1 - p) = 1 \cdot p + 0 \cdot (1 - p),$$

which is impossible: the left-hand side exceeds the right-hand side by exactly 1 for every p (and any solution would contradict the previous result in any case).

(c)

                     Player 2
                L (q0)    C (q1)    R (1 - q0 - q1)
Player 1
T (p0)          0, 5      4, 0      5, 3
M (p1)          4, 0      0, 4      5, 3
B (1 - p0 - p1) 3, 5      3, 5      6, 6

(Figure 1.1.4.) In a mixed equilibrium, Player 1 sets p0 and p1 so that Player 2 is indifferent between L, C and R. The payoffs to L and C must, for example, be equal, i.e.

$$4 p_0 + 0 \cdot p_1 + 5(1 - p_0 - p_1) = 0 \cdot p_0 + 4 p_1 + 5(1 - p_0 - p_1) \Rightarrow p_0 = p_1$$

Similarly,

$$0 \cdot p_0 + 4 p_1 + 5(1 - p_0 - p_1) = 3 p_0 + 3 p_1 + 6(1 - p_0 - p_1) \Rightarrow p_1 = p_0 + 0.5,$$

which violates $p_0 = p_1$.


Answer 1.11 This game can be written as

                     Player 2
                L (q0)    C (q1)    R (1 - q0 - q1)
Player 1
T (p0)          2, 0      1, 1      4, 2
M (p1)          3, 4      1, 2      2, 3
B (1 - p0 - p1) 1, 3      0, 2      3, 0

In a mixed Nash Equilibrium, Player 1 sets p0 and p1 so that the expected payoffs to Player 2 from L and C are the same, i.e.

$$E_2(L) = E_2(C) \Rightarrow 0 \cdot p_0 + 4 p_1 + 3(1 - p_0 - p_1) = 1 \cdot p_0 + 2 p_1 + 2(1 - p_0 - p_1) \Rightarrow p_1 = 2 p_0 - 1$$

Similarly,

$$E_2(C) = E_2(R) \Rightarrow 1 \cdot p_0 + 2 p_1 + 2(1 - p_0 - p_1) = 2 p_0 + 3 p_1 + 0 \cdot (1 - p_0 - p_1) \Rightarrow p_1 = \tfrac{2}{3} - p_0$$

Combining these,

$$2 p_0 - 1 = \tfrac{2}{3} - p_0 \Rightarrow p_0 = \tfrac{5}{9}$$
$$\therefore\ p_1 = 2 p_0 - 1 = 2 \cdot \tfrac{5}{9} - 1 = \tfrac{1}{9}$$
$$\therefore\ 1 - p_0 - p_1 = 1 - \tfrac{5}{9} - \tfrac{1}{9} = \tfrac{3}{9}$$

Now we must calculate q0 and q1. Player 2 will set them such that

$$E_1(T) = E_1(M) \Rightarrow 2 q_0 + 1 \cdot q_1 + 4(1 - q_0 - q_1) = 3 q_0 + 1 \cdot q_1 + 2(1 - q_0 - q_1) \Rightarrow q_1 = 1 - 1.5\, q_0$$

and

$$E_1(M) = E_1(B) \Rightarrow 3 q_0 + 1 \cdot q_1 + 2(1 - q_0 - q_1) = 1 \cdot q_0 + 0 \cdot q_1 + 3(1 - q_0 - q_1)$$

Qed

Answer 1.12

                (q) L2        (1 - q) R2
(p) T1          2, 1          0, 2
(1 - p) B1      1, 2          3, 0


Player 1 will set p such that

$$E_2(L) = E_2(R) \Rightarrow 1 \cdot p + 2 \cdot (1 - p) = 2 \cdot p + 0 \cdot (1 - p) \Rightarrow p = \tfrac{2}{3}$$

Player 2 will set q such that

$$E_1(T) = E_1(B) \Rightarrow 2 \cdot q + 0 \cdot (1 - q) = 1 \cdot q + 3 \cdot (1 - q) \Rightarrow q = \tfrac{3}{4}$$
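Both indifference conditions are easy to verify with concrete numbers. The sketch below is my addition; it plugs $p = 2/3$ and $q = 3/4$ back into the expected payoffs.

```python
# Verify the mixed equilibrium of Answer 1.12: p = 2/3 (prob. of T1), q = 3/4 (prob. of L2).
u1 = {("T", "L"): 2, ("T", "R"): 0, ("B", "L"): 1, ("B", "R"): 3}   # player 1's payoffs
u2 = {("T", "L"): 1, ("T", "R"): 2, ("B", "L"): 2, ("B", "R"): 0}   # player 2's payoffs
p, q = 2 / 3, 3 / 4

E2 = lambda col: p * u2[("T", col)] + (1 - p) * u2[("B", col)]
E1 = lambda row: q * u1[(row, "L")] + (1 - q) * u1[(row, "R")]
print(E2("L"), E2("R"))   # equal (4/3): player 2 is indifferent between L2 and R2
print(E1("T"), E1("B"))   # equal (3/2): player 1 is indifferent between T1 and B1
```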

Answer 1.13

                           (q) Apply to Firm 1      (1 - q) Apply to Firm 2
(p) Apply to Firm 1        w1/2, w1/2               w1, w2
(1 - p) Apply to Firm 2    w2, w1                   w2/2, w2/2

There are two pure-strategy Nash Equilibria: (Apply to Firm 1, Apply to Firm 2) and (Apply to Firm 2, Apply to Firm 1). In a mixed-strategy equilibrium, Player 1 sets p such that Player 2 is indifferent between applying to Firm 1 and applying to Firm 2:

$$E_2(\text{Firm 1}) = E_2(\text{Firm 2}) \Rightarrow p \cdot \tfrac{1}{2} w_1 + (1 - p) \cdot w_1 = p \cdot w_2 + (1 - p) \cdot \tfrac{1}{2} w_2 \Rightarrow p = \frac{2 w_1 - w_2}{w_1 + w_2}$$

Since $2 w_1 > w_2$, $2 w_1 - w_2$ is positive and $p > 0$. For $p < 1$ to be true, it must be the case that

$$\frac{2 w_1 - w_2}{w_1 + w_2} < 1 \Rightarrow \tfrac{1}{2} w_1 < w_2,$$

which is true. And since the payoffs are symmetric, a similar analysis reveals that

$$q = \frac{2 w_1 - w_2}{w_1 + w_2}.$$
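With concrete wages the mixing probability can be checked directly. This sketch is my addition; $w_1 = 4$ and $w_2 = 6$ are arbitrary values satisfying $2 w_1 > w_2$ and $\frac{1}{2} w_1 < w_2$, as assumed above.

```python
# Check p = (2*w1 - w2) / (w1 + w2) by verifying worker 2's indifference condition.
w1, w2 = 4.0, 6.0                                  # arbitrary wages with 2*w1 > w2 and w1/2 < w2
p = (2 * w1 - w2) / (w1 + w2)                      # prob. that worker 1 applies to firm 1
E_firm1 = p * (w1 / 2) + (1 - p) * w1              # worker 2's payoff from applying to firm 1
E_firm2 = p * w2 + (1 - p) * (w2 / 2)              # worker 2's payoff from applying to firm 2
print(p, E_firm1, E_firm2)                         # p = 0.2; both expected wages equal 3.6
```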

    Answer 1.14


    Dynamic Games of Complete Information

Answer 2.1 The total family income is given by $I_C(A) + I_P(A)$. This is maximized at

$$\frac{d(I_C(A) + I_P(A))}{dA} = 0 \Rightarrow \frac{dI_C(A)}{dA} = -\frac{dI_P(A)}{dA}$$

The utility function of the parents is given by $V(I_P - B) + k\, U(I_C + B)$. This is maximized at

$$k\,\frac{dU(I_C + B)}{dB} + \frac{dV(I_P - B)}{dB} = 0$$
$$\Rightarrow k\,\frac{dU(I_C + B)}{d(I_C + B)} \cdot \frac{d(I_C + B)}{dB} + \frac{dV(I_P - B)}{d(I_P - B)} \cdot \frac{d(I_P - B)}{dB} = 0$$
$$\Rightarrow k\, U'(I_C + B) - V'(I_P - B) = 0 \Rightarrow V'(I_P - B^*) = k\, U'(I_C + B^*),$$

where $B^*$ is the maximizing level of the bequest. We know it exists because (a) there are no restrictions on B and (b) V(·) and U(·) are concave and increasing.

The child's utility function is given by $U(I_C(A) + B^*(A))$. This is maximized at

$$\frac{dU(I_C(A) + B^*(A))}{dA} = 0 \Rightarrow U'(I_C(A) + B^*(A)) \cdot \left[\frac{dI_C(A)}{dA} + \frac{dB^*(A)}{dA}\right] = 0$$
$$\Rightarrow \frac{dI_C(A)}{dA} + \frac{dB^*(A)}{dA} = 0 \Rightarrow I_C'(A) = -B^{*\prime}(A)$$

We now have only to prove that $B^{*\prime}(A) = I_P'(A)$:

$$\frac{dV(I_P(A) - B^*(A))}{dA} = 0$$


$$\Rightarrow V'(I_P(A) - B^*(A)) \cdot \left[\frac{dI_P(A)}{dA} - \frac{dB^*(A)}{dA}\right] = 0 \Rightarrow I_P'(A) = B^{*\prime}(A)$$

Combining this with $I_C'(A) = -B^{*\prime}(A)$ gives $I_C'(A) = -I_P'(A)$, which is exactly the condition that maximizes total family income.

Answer 2.2 The utility function of the parent is given by $V(I_P - B) + k[U_1(I_C - S) + U_2(B + S)]$. This is maximized at

$$\frac{d\{V(I_P - B) + k[U_1(I_C - S) + U_2(B + S)]\}}{dB} = 0$$
$$\Rightarrow -V' + k\left[U_1' \cdot \left(-\frac{dS}{dB}\right) + U_2' \cdot \left(\frac{dS}{dB} + 1\right)\right] = 0 \Rightarrow V' = k\left[-U_1'\frac{dS}{dB} + U_2'\frac{dS}{dB} + U_2'\right]$$

The utility of the child is given by $U_1(I_C - S) + U_2(S + B)$. This is maximized at

$$\frac{d[U_1(I_C - S) + U_2(S + B)]}{dS} = 0 \Rightarrow U_1' = U_2' \cdot \left(1 + \frac{dB}{dS}\right)$$

Total utility is given by

$$V(I_P - B) + k\,(U_1(I_C - S) + U_2(B + S)) + U_1(I_C - S) + U_2(B + S) = V(I_P - B) + (1 + k)(U_1(I_C - S) + U_2(B + S))$$

This is maximized (w.r.t. S) at

$$V' \cdot \left(-\frac{dB}{dS}\right) + (1 + k)\left[U_1' \cdot (-1) + U_2' \cdot \left(1 + \frac{dB}{dS}\right)\right] = 0$$
$$\Rightarrow U_1' = U_2'\left(1 + \frac{dB}{dS}\right) - \frac{V'\,\frac{dB}{dS}}{1 + k},$$

as opposed to $U_1' = U_2'(1 + \frac{dB}{dS})$, which is the equilibrium condition. Since $\frac{V'\,\frac{dB}{dS}}{1 + k} > 0$, the equilibrium $U_1'$ is 'too high', which means that S, the level of savings, must be too low.

... $> R - c_1$. He will put in $R - c_1$ if it's better than putting in nothing, i.e.

$$V - (R - c_1)^2 \geq 0 \Rightarrow c_1 \geq R - \sqrt{V}$$

For player 1, any $c_1 > R - \sqrt{V}$ is dominated by $c_1 = R - \sqrt{V}$. Now, player 1 will do this if the benefit exceeds the cost:

$$\delta V \geq (R - \sqrt{V})^2$$


$$\Rightarrow \delta \geq \left(\frac{R}{\sqrt{V}} - 1\right)^2$$

If $R^2 \geq 4V$, $\delta$ would have to be greater than one, which is impossible. Therefore, if $\delta \geq \left(\frac{R}{\sqrt{V}} - 1\right)^2$ and $R^2 < 4V$ (i.e. the cost is not 'too high'), $c_1 = R - \sqrt{V}$ and $c_2 = \sqrt{V}$. Otherwise, $c_2 = 0$ and $c_1 = 0$.

Answer 2.5 Let the 'wage premium' be $p = w_D - w_E$, where $p \in (-\infty, \infty)$. In order to get the worker to acquire the skill, the firm has to credibly promise to promote him if he acquires the skill - and not promote him if he doesn't.

Let's say that he hasn't acquired the skill. The firm will not promote him iff the returns to the firm are such that

$$y_{D0} - w_D \leq y_{E0} - w_E \Rightarrow y_{D0} - y_{E0} \leq w_D - w_E = p.$$

If he does acquire the skill, the firm will promote him iff the returns to the firm are such that

$$y_{DS} - w_D \geq y_{ES} - w_E \Rightarrow y_{DS} - y_{ES} \geq w_D - w_E = p.$$

Thus the condition under which the firm behaves as it ought to in the desired equilibrium is

$$y_{D0} - y_{E0} \leq p \leq y_{DS} - y_{ES}.$$

Given this condition, the worker will receive the promotion iff he acquires the skill. He will acquire the skill iff the benefit outweighs the cost, i.e.

$$w_D - C \geq w_E \Rightarrow w_E + p - C \geq w_E \Rightarrow p \geq C.$$

That is, the premium paid by the company must cover the cost of training. The company wishes (obviously) to minimize the premium, which occurs at

$$w_D - w_E = p = \begin{cases} C & \text{if } C \geq y_{D0} - y_{E0}, \\ y_{D0} - y_{E0} & \text{if } C < y_{D0} - y_{E0}. \end{cases}$$

A final condition is that the wages must be greater than the alternative, i.e. $w_E \geq 0$ and $w_D \geq 0$. The firm seeks to maximize $y_{ij} - w_i$, which happens at $w_E = 0$ and $w_D = p$.

Answer 2.6 The price of the good is determined by

$$P(Q) = a - q_1 - q_2 - q_3.$$

The profit earned by a firm is given by $\pi_i = (P - c) \cdot q_i$. For Firm 2, for example,

$$\pi_2 = (a - q_1^* - q_2 - q_3 - c) \cdot q_2,$$


which is maximized at

$$\frac{d\pi_2}{dq_2} = (a - q_1^* - q_2 - q_3 - c) + q_2 \cdot (-1) = 0 \Rightarrow q_2 = \frac{a - q_1^* - q_3 - c}{2}$$

Symmetrically,

$$q_3 = \frac{a - q_1^* - q_2 - c}{2}$$

Putting these two together,

$$q_2 = \frac{a - q_1^* - \frac{a - q_1^* - q_2 - c}{2} - c}{2} \Rightarrow q_2 = \frac{a - c - q_1^*}{3},$$

which, symmetrically, is equal to $q_3$. Therefore

$$\pi_1 = (a - q_1 - q_2 - q_3 - c) \cdot q_1 = \left(a - q_1 - 2 \cdot \frac{a - c - q_1}{3} - c\right) \cdot q_1 = \frac{a - q_1 - c}{3} \cdot q_1$$

This is maximized at

$$\frac{d\pi_1}{dq_1} = \frac{a - q_1 - c}{3} + \frac{-q_1}{3} = 0 \Rightarrow q_1^* = \frac{a - c}{2}$$

Plugging this into the previous equations, we get

$$q_2 = q_3 = \frac{a - c}{6}$$
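As a check on the backward-induction outcome (my addition; a and c are arbitrary values): compute the followers' response to any leader quantity, grid-search the leader's profit, and compare with $q_1 = (a-c)/2$ and $q_2 = q_3 = (a-c)/6$.

```python
# Stackelberg leader (firm 1) with two Cournot followers: grid-search check of
# q1* = (a - c)/2 and q2* = q3* = (a - c - q1*)/3 = (a - c)/6.
a, c = 10.0, 2.0                                       # arbitrary illustrative parameters

def follower(q1):                                      # symmetric followers' equilibrium output
    return (a - c - q1) / 3

def leader_profit(q1):
    q2 = q3 = follower(q1)
    return (a - q1 - q2 - q3 - c) * q1

grid = [0.001 * k for k in range(0, 8001)]
q1_star = max(grid, key=leader_profit)
print(q1_star, (a - c) / 2, follower(q1_star), (a - c) / 6)   # 4.0, 4.0, ~1.333, ~1.333
```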

Answer 2.7 The profit earned by firm i is

$$\pi_i = (P - w) \cdot L_i,$$

which is maximized at

$$\frac{d\pi_i}{dL_i} = \frac{d(P - w) L_i}{dL_i} = \frac{d(a - Q - w) L_i}{dL_i} = 0$$
$$\Rightarrow \frac{d(a - L_1 - \dots - L_i - \dots - L_n - w) \cdot L_i}{dL_i} = 0$$
$$\Rightarrow (a - L_1 - \dots - L_i - \dots - L_n - w) + L_i \cdot (-1) = 0$$
$$\Rightarrow L_1 + \dots + 2 L_i + \dots + L_n = a - w \quad \forall\, i = 1, \dots, n$$

This generates a system of equations similar to the system in Question 1.4:

$$\begin{pmatrix} 2 & 1 & \dots & 1 \\ 1 & 2 & \dots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \dots & 2 \end{pmatrix} \cdot \begin{pmatrix} L_1 \\ L_2 \\ \vdots \\ L_n \end{pmatrix} = \begin{pmatrix} a - w \\ a - w \\ \vdots \\ a - w \end{pmatrix}$$


This resolves to

$$L_i = \frac{a - w}{n + 1}.$$

Thus, total labor demand is given by

$$L = L_1 + L_2 + \dots + L_n = \frac{a - w}{n + 1} + \dots + \frac{a - w}{n + 1} = \frac{n}{n + 1} \cdot (a - w).$$

The labor union aims to maximize

$$U = (w - w_a) \cdot L = (w - w_a) \cdot \frac{n}{n + 1} \cdot (a - w) = \frac{n}{n + 1} \cdot (a w - a w_a - w^2 + w w_a).$$

The union maximizes U by setting w. This is maximized at

$$\frac{dU}{dw} = (a - 2w + w_a) \cdot \frac{n}{n + 1} = 0 \Rightarrow w = \frac{a + w_a}{2}.$$

The subgame-perfect equilibrium is $L_i = \frac{a - w}{n + 1}$ and $w = \frac{a + w_a}{2}$. Although the wage doesn't change with n, the union's utility U is an increasing function of $\frac{n}{n + 1}$, which is an increasing function of n. This is so because the more firms there are in the market, the greater the quantity produced: more workers are hired to produce this larger quantity, increasing employment and the utility of the labor union.
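The comparative-statics claim (utility rising in n while the wage stays put) can be illustrated numerically. This sketch is my addition; a and w_a are arbitrary values.

```python
# Union-oligopoly model of Answer 2.7: wage w = (a + wa)/2, total labor L = n/(n+1)*(a - w).
a, wa = 10.0, 2.0                                   # arbitrary demand intercept and alternative wage

def union_utility(w, n):
    L = n / (n + 1) * (a - w)                       # total labor demand at wage w
    return (w - wa) * L

for n in (1, 2, 5, 20):
    w_star = (a + wa) / 2                           # the union's optimal wage (independent of n)
    best_w = max((wa + 0.01 * k for k in range(0, 801)), key=lambda w: union_utility(w, n))
    print(n, w_star, round(best_w, 2), round(union_utility(w_star, n), 3))
# The grid-search wage matches w_star for every n; utility grows with n.
```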

    Answer 2.8

Answer 2.9 From section 2.2.C, we know that exports from country i can be driven to 0 iff

$$e_i^* = \frac{a - c - 2 t_j}{3} = 0 \Rightarrow t_j = \frac{a - c}{2},$$

which, symmetrically, is equal to $t_i$. Note that in this model $c = w_i$, and we will only be using $w_i$ from now on, for simplicity. What happens to domestic sales?

$$h_i = \frac{a - w_i + t_i}{3} = \frac{a - w_i + \frac{a - w_i}{2}}{3} = \frac{a - w_i}{2},$$

which is the monopoly amount. Now, in the monopoly-union bargaining model, the quantity produced equals the labor demanded. Thus,

$$L_i^{e=0} = \frac{a - w_i}{2}.$$

The profit earned by firm i in this case is given by

$$\pi_i^{e=0} = (p_i - w_i)\, h_i = (a - w_i - h_i)\, h_i = \left(a - w_i - \frac{a - w_i}{2}\right) \frac{a - w_i}{2} = \left(\frac{a - w_i}{2}\right)^2.$$

The union's payoff is defined by

$$U^{e=0} = (w_i - w_a) L_i = (w_i - w_a) \frac{a - w_i}{2} = \frac{w_i a - w_a a - w_i^2 + w_i w_a}{2}.$$


The union sets wages to maximize utility, with the following condition:

$$\frac{dU^{e=0}}{dw_i} = \frac{a - 2 w_i + w_a}{2} = 0 \Rightarrow w_i = \frac{a + w_a}{2}.$$

Now suppose tariffs decline to zero. In this situation, $t_j = 0$ and therefore

$$h_j = h_i = \frac{a - w_i}{3} \quad \text{and} \quad e_j = e_i = \frac{a - w_i}{3}.$$

Due to this, prices fall:

$$P_i = P_j = a - Q = a - h_i - e_j = a - \frac{a - w_i}{3} - \frac{a - w_i}{3} = \frac{a + 2 w_i}{3}.$$

So what happens to profits?

$$\pi_i^{t=0} = (p_i - w_i) h_i + (p_j - w_i) e_i = \left(\frac{a + 2 w_i}{3} - w_i\right)\frac{a - w_i}{3} + \left(\frac{a + 2 w_i}{3} - w_i\right)\frac{a - w_i}{3}$$
$$\Rightarrow \pi_i^{t=0} = 2\left(\frac{a - w_i}{3}\right)^2 = \frac{8}{9} \cdot \pi_i^{e=0}$$

Thus, profits are lower at zero tariffs than under the prohibitive tariff. What happens to employment?

$$L_i^{t=0} = q_i = h_i + e_i = \frac{2}{3}(a - w_i) = \frac{4}{3} \cdot \frac{a - w_i}{2} = \frac{4}{3} \cdot L_i^{e=0}$$

Employment rises. And what happens to the wage? That depends on the payoff the union now faces:

$$U^{t=0} = (w_i - w_a) L_i = \frac{2}{3}(w_i - w_a)(a - w_i) = \frac{2}{3}(w_i a - w_a a - w_i^2 + w_a w_i)$$

This is maximized at

$$\frac{dU^{t=0}}{dw_i} = \frac{2}{3}(a - 2 w_i + w_a) = 0 \Rightarrow w_i = \frac{a + w_a}{2},$$

which is the same as before.

Answer 2.10 Note that $(P_1, P_2)$, $(R_1, R_2)$ and $(S_1, S_2)$ are Nash Equilibria of the stage game.

        P2         Q2         R2         S2
P1      2, 2       x, 0       -1, 0      0, 0
Q1      0, x       4, 4       -1, 0      0, 0
R1      0, 0       0, 0       0, 2       0, 0
S1      0, -1      0, -1      -1, -1     2, 0

So what is player 1's payoff from playing the strategy? Let $PO_i^j(X_1, X_2)$ denote player i's payoff in round j when player 1 plays $X_1$ and


player 2 plays $X_2$. If player 2 doesn't deviate, player 1 earns the sum of payoffs from two rounds of play:

$$PO_1^1(Q_1, Q_2) + PO_1^2(P_1, P_2) = 4 + 2 = 6$$

And if player 2 deviates from the strategy, player 1 earns:

$$PO_1^1(Q_1, P_2) + PO_1^2(S_1, S_2) = 0 + 2 = 2$$

Now, let's look at his payoff from deviating when player 2 doesn't:

$$PO_1^1(P_1, Q_2) + PO_1^2(R_1, R_2) = x + 0 = x$$

And when they both deviate:

$$PO_1^1(P_1, P_2) + PO_1^2(P_1, P_2) = 2 + 2 = 4$$

Thus, if player 2 deviates, player 1's best response is to deviate, for a payoff of 4 (as opposed to the 2 he'd get by not deviating). If, however, player 2 doesn't deviate, player 1 gets a payoff of 6 when playing the strategy and x when he doesn't. Thus, he (player 1) will play the strategy iff $x < 6$. A symmetric argument applies to player 2.

Thus the condition for which the strategy is a subgame-perfect Nash Equilibrium is

$$4 < x < 6.$$

Answer 2.11 The only pure-strategy subgame-perfect Nash Equilibria in this game are (T, L) and (M, C).

        L       C       R
T       3, 1    0, 0    5, 0
M       2, 1    1, 2    3, 1
B       1, 2    0, 1    4, 4

Unfortunately, the payoff (4, 4) - which comes from the actions (B, R) - cannot be maintained: Player 2 would play R if player 1 plays B, but player 1 would then deviate to T to earn a payoff of 5. Consider, however, the following strategy for player 2:

• In Round 1, play R.
• In Round 2, if Round 1 was (B, R), play L. Else, play C.

Player 1's best response in Round 2 is obviously T or M, depending on what player 2 does. But what should he do in Round 1? If he plays T, his payoff is (if you do not understand the notation, see Answer 2.10):

$$PO_1^1(T, R) + PO_1^2(M, C) = 5 + 1 = 6$$

If he plays B:

$$PO_1^1(B, R) + PO_1^2(T, L) = 4 + 3 = 7$$

Thus, as long as player 2 is following the strategy given above, we can induce (B, R).


To get an intuitive idea of how we constructed the strategy, note that there are two Nash equilibria that player 2 can play in the second round: a "reward" equilibrium in which player 1 gets 3 and a "punishment" equilibrium in which player 1 gets 1. By (credibly) threatening to punish player 1 in Round 2, player 2 induces "good" behavior in Round 1.

    Answer 2.12  See text.

Answer 2.13 The monopoly quantity is $\frac{a - c}{2}$. The monopoly price is, therefore,

$$P = a - Q = a - \frac{a - c}{2} = \frac{a + c}{2}.$$

The players can construct the following strategy:

• In Round 1, play $p_i = \frac{a + c}{2}$.
• In Round t > 1, if the previous round had $p_j = \frac{a + c}{2}$, play $p_i = \frac{a + c}{2}$; else, play $p_i = c$ (the Bertrand equilibrium).

Now, if player i deviates by charging $\frac{a + c}{2} - \varepsilon$ where $\varepsilon > 0$, he secures the entire market and earns monopoly profits for one round and Bertrand profits (which are 0) for all future rounds, which totals to

$$\pi_{deviate} = \left(a - \frac{a - c}{2} - c\right) \cdot \frac{a - c}{2} + \delta \cdot 0 + \delta^2 \cdot 0 + \dots = \frac{(a - c)^2}{4}.$$

The payoff from sticking to the strategy (in which both firms produce $\frac{a - c}{4}$) is

$$\pi_{follow} = \left(a - \frac{a - c}{2} - c\right) \cdot \frac{a - c}{4} + \delta \cdot \left(a - \frac{a - c}{2} - c\right) \cdot \frac{a - c}{4} + \dots = \frac{1}{1 - \delta} \cdot \frac{a - c}{2} \cdot \frac{a - c}{4} = \frac{1}{1 - \delta} \cdot \frac{(a - c)^2}{8}.$$

The strategy is stable if

$$\pi_{deviate} \leq \pi_{follow} \Rightarrow \frac{(a - c)^2}{4} \leq \frac{1}{1 - \delta} \cdot \frac{(a - c)^2}{8} \Rightarrow \delta \geq \frac{1}{2}.$$

Q.E.D.
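The threshold $\delta \geq \frac{1}{2}$ can be confirmed by comparing the two payoff streams at a few discount factors. This sketch is my addition; a and c are arbitrary values.

```python
# Compare the deviation payoff with the collusive payoff stream to recover delta >= 1/2.
a, c = 10.0, 2.0
monopoly_profit = (a - c) ** 2 / 4        # one-shot gain from undercutting to take the whole market
collusive_profit = (a - c) ** 2 / 8       # per-period profit from sharing the monopoly quantity

for delta in (0.4, 0.5, 0.6):
    follow = collusive_profit / (1 - delta)            # discounted value of colluding forever
    deviate = monopoly_profit                          # deviation, then Bertrand profits of zero
    print(delta, follow >= deviate)                    # False, True, True
```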

Answer 2.14 The monopoly quantity when demand is high is $\frac{a_H - c}{2}$, which makes the monopoly price

$$p_H = a_H - \frac{a_H - c}{2} = \frac{a_H + c}{2}.$$

This is the price that the firms have to maintain when demand is high. Conversely, when demand is low,

$$p_L = \frac{a_L + c}{2}.$$

Let $p_M$ be the monopoly price, defined as

$$p_M = \begin{cases} p_H & \text{if } a_i = a_H, \\ p_L & \text{if } a_i = a_L. \end{cases}$$

Consider the following strategy for firm i:

• In Round 1, set $p_i = p_M$.
• In Round t > 1, if $p_j = p_M$ in the previous round, play $p_M$; else, play $p_i = c$.

The payoff received from deviating is the monopoly profit for one round (see the previous question for the derivation of one-round monopoly profits) and then zero profits in all future rounds:

$$\pi_{deviate} = \frac{(a_i - c)^2}{4} + \delta \cdot 0 + \delta^2 \cdot 0 + \dots = \frac{(a_i - c)^2}{4}.$$

If he follows the strategy, he earns:

$$\pi_{follow} = \frac{(a_i - c)^2}{8} + \delta \cdot \left[\pi \cdot \frac{(a_H - c)^2}{8} + (1 - \pi) \cdot \frac{(a_L - c)^2}{8}\right] + \dots$$
$$\Rightarrow \pi_{follow} = \frac{(a_i - c)^2}{8} + \frac{\delta}{1 - \delta} \cdot \left[\pi \cdot \frac{(a_H - c)^2}{8} + (1 - \pi) \cdot \frac{(a_L - c)^2}{8}\right]$$

The strategy is stable if

$$\pi_{deviate} \leq \pi_{follow} \Rightarrow \frac{(a_i - c)^2}{4} \leq \frac{(a_i - c)^2}{8} + \frac{\delta}{1 - \delta} \cdot \left[\pi \cdot \frac{(a_H - c)^2}{8} + (1 - \pi) \cdot \frac{(a_L - c)^2}{8}\right].$$

Answer 2.15 If the quantity produced by a monopolist is $\frac{a - c}{2}$, the quantity produced by a single company in a successful cartel is

$$q_n^m = \frac{a - c}{2n}.$$

Therefore, the profit earned by one of these companies is

$$\pi^m = (P - c)\, q_n^m = (a - Q - c)\, q_n^m = \left(a - \frac{a - c}{2} - c\right)\frac{a - c}{2n} = \frac{1}{n} \cdot \left(\frac{a - c}{2}\right)^2.$$

The Cournot oligopoly equilibrium quantity (see Answer 1.4) is $\frac{a - c}{1 + n}$, which means that the profit earned at this equilibrium is

$$\pi^c = (a - Q - c)\, q_n^c = \left(a - n \cdot \frac{a - c}{1 + n} - c\right) \cdot \frac{a - c}{1 + n} = \left(\frac{a - c}{1 + n}\right)^2.$$

A grim trigger strategy for a single company here is:

• In Round t = 1, produce $q_n^m$.


• In Round t > 1, if the total quantity produced in t - 1 was $n \cdot q_n^m$, produce $q_n^m$; else produce $q_n^c$.

Now, the best response to everyone else producing $q_n^m$ is determined by finding the q that maximizes

$$\pi = (a - Q - c)\, q = \left(a - \frac{a - c}{2n} \cdot (n - 1) - q - c\right) q = \frac{n + 1}{2n} \cdot (a - c)\, q - q^2.$$

This is maximized at

$$\frac{d\pi}{dq} = \frac{n + 1}{2n} \cdot (a - c) - 2q = 0 \Rightarrow q = \frac{n + 1}{4n}(a - c).$$

The profit at this q (the cheating gain) is

$$\pi = \left(a - c - \frac{a - c}{2n} \cdot (n - 1) - \frac{n + 1}{4n} \cdot (a - c)\right) \cdot \frac{n + 1}{4n} \cdot (a - c) = \left(\frac{n + 1}{4n} \cdot (a - c)\right)^2.$$

If the firm deviates, it earns the cheating gain for one round and Cournot profits for all future rounds, i.e. the gain from deviating from the strategy is

$$\pi_{deviate} = \left(\frac{n + 1}{4n} \cdot (a - c)\right)^2 + \delta \left(\frac{a - c}{1 + n}\right)^2 + \delta^2 \left(\frac{a - c}{1 + n}\right)^2 + \dots$$
$$\Rightarrow \pi_{deviate} = \left[\left(\frac{n + 1}{4n}\right)^2 + \frac{\delta}{1 - \delta}\left(\frac{1}{1 + n}\right)^2\right](a - c)^2$$

If the firm follows the strategy, its payoff is $\pi^m$ in every round:

$$\pi_{follow} = \frac{1}{n}\left(\frac{a - c}{2}\right)^2 + \delta \cdot \frac{1}{n}\left(\frac{a - c}{2}\right)^2 + \delta^2 \cdot \frac{1}{n}\left(\frac{a - c}{2}\right)^2 + \dots = \frac{1}{1 - \delta} \cdot \frac{1}{n}\left(\frac{a - c}{2}\right)^2$$

The strategy is stable if

$$\pi_{follow} \geq \pi_{deviate} \Rightarrow \frac{1}{1 - \delta} \cdot \frac{1}{n}\left(\frac{a - c}{2}\right)^2 \geq \left[\left(\frac{n + 1}{4n}\right)^2 + \frac{\delta}{1 - \delta}\left(\frac{1}{n + 1}\right)^2\right](a - c)^2$$
$$\Rightarrow \delta \geq \delta^* = \frac{n^2 + 2n + 1}{n^2 + 6n + 1}$$

Thus, as n rises, $\delta^*$ rises: the more firms there are, the harder it is to sustain the cartel. If you want to know more, see Rotemberg and Saloner (1986).
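The threshold can be checked numerically by evaluating both payoff streams just above and just below $\delta^*$ for several n. This sketch is my addition; a and c are arbitrary values.

```python
# Check delta* = (n + 1)**2 / (n**2 + 6*n + 1) for the n-firm grim-trigger strategy.
a, c = 10.0, 2.0

def follow(n, delta):
    return (1 / (1 - delta)) * (1 / n) * ((a - c) / 2) ** 2

def deviate(n, delta):
    cheat = ((n + 1) / (4 * n) * (a - c)) ** 2            # one-period gain from undercutting the cartel
    cournot = ((a - c) / (n + 1)) ** 2                     # per-period Cournot profit thereafter
    return cheat + delta / (1 - delta) * cournot

for n in (2, 3, 5, 10):
    d_star = (n + 1) ** 2 / (n ** 2 + 6 * n + 1)
    print(n, round(d_star, 3),
          follow(n, d_star + 0.01) >= deviate(n, d_star + 0.01),   # True: sustainable above the threshold
          follow(n, d_star - 0.01) >= deviate(n, d_star - 0.01))   # False: not sustainable below it
```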

    Answer 2.16

    Answer 2.17

    Answer 2.18  See text.

Answer 2.19 In a one-period game, player 1 would get a payoff of 1 and player 2 would get a payoff of 0, i.e. (1, 0). In a two-period game, if the two players can't agree, the game goes to the second stage, at which point player 2 gets 1 and player 1 gets 0. This payoff of 1 in the second round is worth $\delta$ to player 2 in the first round. If player 1 offers $\delta$ to player 2 in the first round, player 2 will accept, leaving player 1 with $1 - \delta$, i.e. $(1 - \delta, \delta)$.

In a three-period game, if player 2 rejects the offer in the first round, they go on to the second round, at which point it becomes a two-period game and player 2 gets a payoff of $1 - \delta$ and player 1 gets $\delta$. This payoff $(\delta, 1 - \delta)$ is worth $(\delta^2, \delta[1 - \delta])$ to the players in the first round. Thus, if player 1 makes an offer of $(1 - \delta[1 - \delta], \delta[1 - \delta]) = (1 - \delta + \delta^2, \delta - \delta^2)$, player 2 will accept and player 1 secures a higher payoff.

Answer 2.20 In round 1, player A offers $\left(\frac{1}{1 + \delta}, \frac{\delta}{1 + \delta}\right)$ to player B, which player B accepts since $\frac{\delta}{1 + \delta} \geq \delta s^* = \delta \cdot \frac{1}{1 + \delta}$.

What if A deviates from the equilibrium, offers less, and B refuses? The game then goes into the next round and B offers A $\frac{\delta}{1 + \delta}$, which will be accepted, leaving $\frac{1}{1 + \delta}$ for B. This is worth $\delta \cdot \frac{1}{1 + \delta}$ to B in the first round (which is why B will refuse anything less than this amount) and $\frac{\delta^2}{1 + \delta}$ to A in the first round. This is less than the $\frac{1}{1 + \delta}$ A would have made had he not deviated.
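The stationary share can also be recovered by iterating the proposer's problem: whoever proposes keeps 1 minus the discounted share the responder could secure by proposing next round. This sketch is my addition; $\delta = 0.9$ is an arbitrary value.

```python
# Value-iterate the proposer's share: s <- 1 - delta * s converges to 1 / (1 + delta).
delta = 0.9
s = 0.5                          # arbitrary initial guess for the proposer's share
for _ in range(200):
    s = 1 - delta * s            # proposer offers the responder delta * s and keeps the rest
print(s, 1 / (1 + delta))        # both ~0.5263
```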

    Answer 2.21

Answer 2.22 In the first round, investors can either withdraw (w) or not (d). A strategy can be represented as $x_1 x_2$, where $x_1$ is what the investor does in the first round and $x_2$ is what the investor does in the second round. The game can be represented by the following table:

        ww           wd           dd           dw
ww      r, r         r, r         D, 2r-D      D, 2r-D
wd      r, r         r, r         D, 2r-D      D, 2r-D
dd      2r-D, D      2r-D, D      R, R         D, 2R-D
dw      2r-D, D      2r-D, D      2R-D, D      R, R

There are 5 Nash Equilibria: (dw, dw), (ww, ww), (ww, wd), (wd, ww) and (wd, wd). Of these, (ww, wd), (wd, ww) and (wd, wd) are not Subgame Perfect Equilibria, since there is no subgame in which both or either player doesn't withdraw his or her funds in the second round.

Answer 2.23 The optimal investment is given by

$$\frac{d(v + I - p - I^2)}{dI} = 0 \Rightarrow I^* = \frac{1}{2}.$$

If the buyer has invested, the buyer will buy iff

$$v + I - p - I^2 \geq -I^2 \Rightarrow p \leq v + I.$$

Thus, the highest possible price that the buyer will pay at this point is $p = v + I$. If, however, the buyer doesn't invest, the buyer will buy iff

$$v - p \geq 0 \Rightarrow v \geq p.$$

Thus the buyer would be willing to pay $p = v$. The investment is therefore drawn from $I \in \{0, \frac{1}{2}\}$ and the price from $p \in \{v, v + \frac{1}{2}\}$; there is no gain from charging anything other than these prices.

                           p = v + 1/2          p = v
I = 1/2, Accept            -1/4, v + 1/2        1/4, v
I = 1/2, Reject            -1/4, 0              -1/4, 0
I = 0, Accept              -1/2, v + 1/2        0, v
I = 0, Reject              0, 0                 0, 0

As you can see from this (complicated) table, if $I = \frac{1}{2}$, Accept weakly dominates Reject, and if $I = 0$, Reject weakly dominates Accept. Thus we can collapse the above table into a simpler one:

                                 (q) p = v + 1/2       (1 - q) p = v
(p) I = 1/2, Accept              -1/4, v + 1/2         1/4, v
(1 - p) I = 0, Reject            0, 0                  0, 0

The only pure Nash Equilibrium is for the buyer to not invest and the seller to charge $v + \frac{1}{2}$, which the buyer rejects.


    Static Games of Incomplete Information

Answer 3.1 See text.

Answer 3.2 Firm 1 aims to maximize

$$\pi_1 = (P - c)\, q_1 = (a_i - c - q_1 - q_2)\, q_1,$$

which is done by setting

$$\frac{d\pi_1}{dq_1} = a_i - c - q_1 - q_2 + q_1(-1) = 0 \Rightarrow q_1 = \frac{a_i - c - q_2}{2}.$$

Thus, the strategy for firm 1 is

$$q_1 = \begin{cases} \frac{a_H - c - q_2}{2} & \text{if } a_i = a_H, \\ \frac{a_L - c - q_2}{2} & \text{if } a_i = a_L. \end{cases}$$

Now, firm 2 aims to maximize expected profit

$$\pi_2 = (P - c)\, q_2 = (a - c - q_1 - q_2)\, q_2.$$

This is maximized at

$$\frac{d\pi_2}{dq_2} = a - c - q_1 - q_2 + q_2(-1) = 0 \Rightarrow q_2 = \frac{a - c - q_1}{2} = \frac{\theta a_H + (1 - \theta) a_L - c - q_1}{2}.$$

Plugging in $q_1$, we get

$$q_2 = \frac{\theta a_H + (1 - \theta) a_L - c - \frac{\theta a_H + (1 - \theta) a_L - c - q_2}{2}}{2} \Rightarrow q_2 = \frac{\theta a_H + (1 - \theta) a_L - c}{3}.$$

Now we need to find what firm 1's output would be. If $a_i = a_H$,

$$q_1^H = \frac{a_H - c - \frac{\theta a_H + (1 - \theta) a_L - c}{3}}{2} = \frac{(3 - \theta) a_H - (1 - \theta) a_L - 2c}{6}.$$

But what if $a_i = a_L$?

$$q_1^L = \frac{a_L - c - \frac{\theta a_H + (1 - \theta) a_L - c}{3}}{2} = \frac{(2 + \theta) a_L - \theta a_H - 2c}{6}.$$


Now, based on these results, the constraints for non-negativity are:

$$q_2 \geq 0 \Rightarrow \theta a_H + (1 - \theta) a_L - c \geq 0 \Rightarrow \theta \geq \frac{c - a_L}{a_H - a_L},$$

which also requires

$$\theta \leq 1 \Rightarrow 1 \geq c - a_L \Rightarrow a_L \geq c - 1.$$

Furthermore,

$$q_1^L \geq 0 \Rightarrow \frac{(2 + \theta) a_L - \theta a_H - 2c}{6} \geq 0 \Rightarrow \theta \leq 2 \cdot \frac{c - a_L}{a_H - a_L},$$

which subsumes the last-but-one result. And finally,

$$q_1^H \geq 0 \Rightarrow \theta \leq \frac{2c - 3 a_H + a_L}{a_H - a_L}.$$
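The equilibrium quantities of this answer can be verified numerically against the best-response conditions. This sketch is my addition; the parameter values are arbitrary and chosen so that all quantities are positive.

```python
# Check the Bayesian Cournot quantities of Answer 3.2 against the best-response conditions.
aH, aL, c, theta = 12.0, 9.0, 2.0, 0.5            # arbitrary values with interior quantities
a_bar = theta * aH + (1 - theta) * aL

q2  = (a_bar - c) / 3
q1H = ((3 - theta) * aH - (1 - theta) * aL - 2 * c) / 6
q1L = ((2 + theta) * aL - theta * aH - 2 * c) / 6

print(q1H, (aH - c - q2) / 2)                      # firm 1's best response when a = aH
print(q1L, (aL - c - q2) / 2)                      # firm 1's best response when a = aL
print(q2, (a_bar - c - (theta * q1H + (1 - theta) * q1L)) / 2)   # firm 2's best response
```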

Answer 3.3 The profit earned by firm 1 is given by

$$\pi_1 = (p_1 - c)\, q_1 = (p_1 - c)(a - p_1 - b_1 p_2).$$

This is maximized at

$$\frac{d\pi_1}{dp_1} = a - p_1 - b_1 p_2 + p_1(-1) = 0 \Rightarrow p_1 = \frac{a - b_1 p_2}{2} = \frac{a - b_1[\theta p_H + (1 - \theta) p_L]}{2}.$$

Now, what if $b_1 = b_H$? To start with, $p_1 = p_H$ and

$$p_H = \frac{a - b_H[\theta p_H + (1 - \theta) p_L]}{2} \Rightarrow p_H = \frac{a - (1 - \theta) b_H p_L}{2 + \theta b_H}.$$

And if $b_1 = b_L$:

$$p_L = \frac{a - b_L[\theta p_H + (1 - \theta) p_L]}{2} = \frac{a - \theta b_L p_H}{2 + b_L(1 - \theta)}.$$

Which means that

$$p_H = \frac{a - (1 - \theta) b_H \cdot \frac{a - \theta b_L p_H}{2 + (1 - \theta) b_L}}{2 + \theta b_H} \Rightarrow p_H = \frac{a(1 - [1 - \theta] b_H)}{4 + 2(1 - \theta) b_L + 2\theta b_H}.$$

Similarly,

$$p_L = \frac{a(1 - \theta b_L)}{4 + 2(1 - \theta) b_L + 2\theta b_H}.$$

Answer 3.4 Game 1 is played with 0.5 probability:

           L       R
(q) T      1, 1    0, 0
(1-q) B    0, 0    0, 0


If nature picks game 2 (with 0.5 probability):

           L       R
(q) T      0, 0    0, 0
(1-q) B    0, 0    2, 2

If nature picks game 2, player 1 will always play B, since it weakly dominates T, and player 2 will play R, since it weakly dominates L.

Now, if nature chooses game 1, player 1 will play T. If nature chooses game 2, player 1 will play B. Furthermore, if player 2 plays L with probability p:

$$\pi_2 = p\left[\tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \cdot 1\right] + (1 - p)\left[\tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \cdot 2\right] = 1 - \tfrac{1}{2} p$$

This is maximized at $p = 0$, i.e. player 2 will always play R. Thus the pure-strategy Bayesian Nash equilibrium is

$$\text{PSNE} = \{(1, T, R), (2, B, R)\}.$$

    Answer 3.5

Answer 3.6 The payoff is given by

$$u_i = \begin{cases} v_i - b_i & \text{if } b_i > b_j \ \forall\, j = 1, \dots, i - 1, i + 1, \dots, n, \\ \frac{v_i - b_i}{m} & \text{if } b_i \text{ ties for the highest bid with } m - 1 \text{ others}, \\ 0 & \text{if } b_i < b_j \text{ for any } j = 1, \dots, i - 1, i + 1, \dots, n. \end{cases}$$

The beliefs are: $v_j$ is uniformly distributed on [0, 1]. Actions are given by $b_i \in [0, 1]$ and types by $v_i \in [0, 1]$. The strategy is given by $b_i = a_i + c_i v_i$. Thus, the aim is to maximize

$$\pi_i = (v_i - b_i) \cdot P(b_i > b_j \ \forall\, j \neq i) = (v_i - b_i) \cdot [P(b_i > b_j)]^{n-1}$$
$$\Rightarrow \pi_i = (v_i - b_i) \cdot [P(b_i > a_j + c_j v_j)]^{n-1} = (v_i - b_i) \cdot \left[P\!\left(v_j < \frac{b_i - a_j}{c_j}\right)\right]^{n-1} = (v_i - b_i)\left(\frac{b_i - a_j}{c_j}\right)^{n-1}$$

This is maximized at

$$\frac{d\pi_i}{db_i} = (-1) \cdot \left(\frac{b_i - a_j}{c_j}\right)^{n-1} + (v_i - b_i)\,\frac{n - 1}{c_j}\left(\frac{b_i - a_j}{c_j}\right)^{n-2} = 0$$
$$\Rightarrow \left(\frac{b_i - a_j}{c_j}\right)^{n-2} \cdot \frac{a_j + (n - 1) v_i - n b_i}{c_j} = 0$$

This requires that either

$$\frac{b_i - a_j}{c_j} = 0 \Rightarrow b_i = a_j$$

or that

$$\frac{a_j + (n - 1) v_i - n b_i}{c_j} = 0 \Rightarrow b_i = \frac{a_j}{n} + \frac{n - 1}{n} \cdot v_i.$$


Now, we know that $b_i = a_i + c_i v_i$. Here, we see that $c_i = \frac{n - 1}{n}$ and

$$a_i = \frac{a_j}{n} \Rightarrow a_1 = \frac{a_2}{n} = \frac{a_3}{n} = \dots = \frac{a_n}{n},$$

which is only possible if $a_1 = a_2 = \dots = a_n = 0$. Thus,

$$b_i = \frac{n - 1}{n} \cdot v_i.$$
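A small Monte Carlo check (my addition; the number of bidders, the fixed valuation and the seed are arbitrary choices) that bidding $\frac{n-1}{n}$ times one's valuation is approximately optimal when the other $n - 1$ bidders do the same:

```python
# Monte Carlo check that b = (n-1)/n * v is (approximately) optimal against rivals
# who bid (n-1)/n * v_j with v_j uniform on [0, 1].
import random

random.seed(0)
n, v_i, draws = 4, 0.8, 200_000

def expected_payoff(bid):
    payoff = 0.0
    for _ in range(draws):
        rival_best = max(random.random() for _ in range(n - 1)) * (n - 1) / n
        if bid > rival_best:
            payoff += v_i - bid
    return payoff / draws

for frac in (0.65, 0.70, 0.75, 0.80, 0.85):
    print(frac, round(expected_payoff(frac * v_i), 4))
# The payoff should peak near frac = (n - 1)/n = 0.75.
```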

    Answer 3.7

    Answer 3.8


    Dynamic Games of Incomplete Information

Answer 4.1 (a)

           (q) L'     (1-q) R'
L          4, 1       0, 0
M          3, 0       0, 1
R          2, 2       2, 2

The Nash Equilibria are (L, L') and (R, R'), and they are both subgame-perfect equilibria. Now, the payoff to player 2 from playing L' is

$$\pi_2(L') = 1 \cdot p + 0 \cdot (1 - p) = p$$

The payoff from playing R' is

$$\pi_2(R') = 0 \cdot p + 1 \cdot (1 - p) = 1 - p$$

Player 2 will always play L' if

$$\pi_2(L') > \pi_2(R') \Rightarrow p > 1 - p \Rightarrow p > \tfrac{1}{2}$$

The payoff to player 1 from playing L is

$$\pi_1(L) = 4 \cdot q + 0 \cdot (1 - q) = 4q$$

and the payoff from playing M is

$$\pi_1(M) = 3 \cdot q + 0 \cdot (1 - q) = 3q$$

Player 1 will always play L (rather than M) if

$$\pi_1(L) > \pi_1(M) \Rightarrow 4q > 3q,$$

which is true. Thus $p = 1$, in which case player 2 will always play L'. Thus the outcome (R, R') violates Requirements 1 and 2.

(b)

           L       M       R
L          1, 3    1, 2    4, 0
M          4, 0    0, 2    3, 3
R          2, 4    2, 4    2, 4


The expected payoffs to player 2 are:

$$\pi_2(L) = 3p + 0 \cdot (1 - p) = 3p$$
$$\pi_2(M) = 2p + 2(1 - p) = 2$$
$$\pi_2(R) = 0 \cdot p + 3(1 - p) = 3 - 3p$$

And the payoffs to player 1 are:

$$\pi_1(R) = 2$$
$$\pi_1(L) = 1 \cdot q_1 + 1 \cdot q_2 + 4(1 - q_1 - q_2) = 4 - 3 q_1 - 3 q_2$$
$$\pi_1(M) = 4 q_1 + 0 \cdot q_2 + 3(1 - q_1 - q_2) = 3 + q_1 - 3 q_2$$

The only Nash Equilibrium is (R, M); it is also subgame-perfect. To be a Perfect Bayesian Equilibrium, player 2 must believe that

$$\pi_2(M) > \pi_2(L) \Rightarrow 2 > 3p \Rightarrow \tfrac{2}{3} > p$$

and

$$\pi_2(M) > \pi_2(R) \Rightarrow 2 > 3 - 3p \Rightarrow p > \tfrac{1}{3}$$

Furthermore, player 1 must believe

$$\pi_1(R) > \pi_1(L) \Rightarrow 2 > 4 - 3 q_1 - 3 q_2 \Rightarrow q_1 > \tfrac{2}{3} - q_2$$

Since $q_1 > 0$, this implies that

$$\tfrac{2}{3} - q_2 > 0 \Rightarrow \tfrac{2}{3} > q_2$$

and

$$\pi_1(R) > \pi_1(M) \Rightarrow 2 > 3 + q_1 - 3 q_2 \Rightarrow 3 q_2 - 1 > q_1,$$

which, in turn, requires

$$3 q_2 - 1 > \tfrac{2}{3} - q_2 \Rightarrow q_2 > \tfrac{5}{12}$$

The pure-strategy perfect Bayesian equilibrium is

$$\left[(R, M),\ \tfrac{2}{3} > p > \tfrac{1}{3},\ 3 q_2 - 1 > q_1 > \tfrac{2}{3} - q_2,\ \tfrac{2}{3} > q_2 > \tfrac{5}{12}\right]$$

Answer 4.2

           (q) L      (1-q) R
(p) L      3, 0       0, 1
(1-p) M    0, 1       3, 0
R          2, 2       2, 2

As you can see, there is no pure Nash Equilibrium. But we need rigorous proof: a pure-strategy Nash Equilibrium exists if

(a) Player 1 always picks either L or M. For example, player 1 will always play L if

$$\pi_1(L) > \pi_1(M) \Rightarrow 3q + 0 \cdot (1 - q) > 0 \cdot q + 3(1 - q) \Rightarrow q > \tfrac{1}{2}$$

Thus if $q > 0.5$, $p = 1$.

(b) Player 2 always picks either L or R; player 2 will always play L if

$$\pi_2(L) > \pi_2(R) \Rightarrow 0 \cdot p + 1 \cdot (1 - p) > 1 \cdot p + 0 \cdot (1 - p) \Rightarrow \tfrac{1}{2} > p$$

Thus, if p


(b) We must find a pooling equilibrium in which the sender plays (L, L, L). For the receiver, the payoffs are

$$\pi_R(L, u) = \tfrac{1}{3} \cdot 1 + \tfrac{1}{3} \cdot 1 + \tfrac{1}{3} \cdot 1 = 1$$
$$\pi_R(L, d) = \tfrac{1}{3} \cdot 0 + \tfrac{1}{3} \cdot 0 + \tfrac{1}{3} \cdot 0 = 0$$

There are two candidate strategies for the receiver: (u, u) and (u, d). Under (u, u):

$$\pi_1(L, u) = 1 \ \text{and}\ \pi_1(R, u) = 0$$
$$\pi_2(L, u) = 2 \ \text{and}\ \pi_2(R, u) = 1$$
$$\pi_3(L, u) = 1 \ \text{and}\ \pi_3(R, u) = 0$$

None of the three types has an incentive to send R instead of L. Thus, we have the following equilibrium:

$$\left[(L, L, L),\ (u, u),\ p = \tfrac{1}{3},\ 1 \geq q \geq 0\right]$$

Answer 4.4 (a) Let's examine the pooling equilibrium (L, L). Here $p = 0.5$ and $\pi_R(L, u) = \pi_R(L, d)$, so it doesn't matter for the receiver whether he/she plays u or d.

• Under (u, u), $\pi_1(L, u) = 1 < \pi_1(R, u) = 2$, making it unsustainable.
• Under (u, d), $\pi_2(L, u) = 0 < \pi_2(R, d) = 1$, making it unsustainable.
• Under (d, d), $\pi_2(L, d) = 0 < \pi_2(R, d) = 1$, making it unsustainable.
• Under (d, u), $\pi_2(L, u) = 0 < \pi_2(R, d) = 1$, making it unsustainable.

Thus, (L, L) is not a sustainable equilibrium.

Let's examine the separating equilibrium (L, R). The best response to this is (u, d) (since $\pi_R(1, L, u) > \pi_R(1, L, d)$ and $\pi_R(2, R, u) > \pi_R(2, R, d)$). Let's see if either of the types has an incentive to deviate:

• For type 1, $\pi_1(L, u) = 1 > \pi_1(R, d) = 0$, i.e. no reason to play R instead of L.
• For type 2, $\pi_2(L, u) = 0 < \pi_2(R, d) = 1$, i.e. no reason to play L instead of R.

Let's examine the pooling equilibrium (R, R). $\pi_R(R, u) = 1 > \pi_R(R, d) = 0.5$. Therefore, the two strategies that can be followed by the receiver are (u, u) and (d, u).

• Under (u, u), $\pi(L, u) < \pi(R, u)$ for both types.
• Under (d, u), $\pi(L, d) \leq \pi(R, u)$ for both types.

Let's examine the separating equilibrium (R, L). The best response to this is (d, u) (since $\pi_R(1, R, u) = 2 > \pi_R(1, R, d) = 0$ and $\pi_R(2, L, u) = 0 < \pi_R(2, L, d) = 1$).

• For type 1, $\pi_1(L, d) = 2 \geq \pi_1(R, u) = 2$, i.e. he will play L.
• For type 2, $\pi_2(L, d) = 0 \leq \pi_2(R, u) = 1$, i.e. he will play R.


Thus the perfect Bayesian Equilibria are:

$$[(L, R), (u, d), p, q]$$
$$[(R, R), (u, u), p, q = 0.5]$$
$$[(R, R), (d, u), p, q = 0.5]$$
$$[(R, L), (d, u), p, q]$$

(b) Let's examine the pooling equilibrium (L, L). $\pi_R(L, u) = 1.5 > \pi_R(L, d) = 1$, therefore player 2 will respond to L with u. The two strategies are (u, u) and (u, d).

• Under (u, u), $\pi(L, u) > \pi(R, u)$ for both types.
• Under (u, d), $\pi_1(L, u) = 3 < \pi_1(R, d) = 4$, making it unsustainable.

Let's examine the separating equilibrium (L, R). The best response to this is (d, u).

• For type 1, $\pi_1(L, d) = 1 > \pi_1(R, u) = 0$, i.e. type 1 will play L.
• For type 2, $\pi_2(L, d) = 0 < \pi_2(R, u) = 1$, i.e. type 2 will play R.

Let's examine the pooling equilibrium (R, R). $\pi_R(R, u) = 1 > \pi_R(R, d) = 0.5$, therefore player 2 will respond to R with u. The two strategies are (u, u) and (d, u).

• Under (u, u), $\pi_1(L, u) = 3 > \pi_1(R, u) = 0$, making (R, R) unsustainable.
• Under (d, u), $\pi_1(L, d) = 1 > \pi_1(R, u) = 0$, making (R, R) unsustainable.

Let's examine the separating equilibrium (R, L). The best response to this is (d, u).

• For type 1, $\pi_1(L, d) = 1 > \pi_1(R, u) = 0$, making (R, L) unsustainable.
• For type 2, $\pi_2(L, d) = 0 < \pi_2(R, u) = 1$, which doesn't conflict with the equilibrium.

The perfect Bayesian Equilibria are

$$[(L, L), (u, u), p = 0.5, q]$$
$$[(L, R), (d, u), p = 1, q = 0]$$

Answer 4.5 Let's examine 4.3(a). We've already tested the equilibrium (R, R). Let's try another pooling equilibrium, (L, L). Here $q = 0.5$ and $\pi_R(L, u) = 1 > \pi_R(L, d) = 0.5$. Thus, the receiver's response to L will always be u. We have to test two strategies for the receiver: (d, u) and (u, u).


    Let’s examine (L, R). The best response to this is  (u, d). In re-

    sponse to this,

    • For type  1,  π 1(L, u) =  1  <  π 1(R, d) =  3 i.e. type  1  will play  Rwhich violates the equilibrium

    • For type  2,  π 2(L, u) =  0  <  π 2(R, d) =  2 i.e. type  2  will play  R

    which doesn’t violate the equilibrium.

    Let’s examine (R, L). The best response to this is  (u, d). In re-

    sponse to this,

    • For type  1,  π 1(L, u) = 1 < π 1(R, d) = 3, i.e. type  1  will play  R.• For type  2,  π 2(L, u) = 0 <  π 2(R, d) =  2, i.e. type  2  will play  R,

    violating the equilibrium.

    The perfect Bayesian Equilibrium is

    [(L, L), (d, u), p, q =  0.5]

    Now, let’s examine  4.3(b). There is one pooling equilibrium

    other than the (L, L, L):  (R, R, R). There are six pooling equilib-ria:   1.(L, L, R), 2.(L, R, L), 3.(R, L, L), 4.(L, R, R), 5.(R, L, R) and

    6.(R, R, L).

    Let’s start with pooling equilibrium (R, R, R).  π R(R, u) =  2

    3   >

    π R(R, d) =  13 . Thus, receiver will play the strategy (u, u) or  (d, u).

    • For strategy (u, u),  π (L, u)   <   π (R, u) for all types, making itunsustainable.

    • For strategy (d, u),  π 1(L, d) < π 1(R, u), making the equilibriumunsustainable. Let’s examine the various separating the equilib-

    rium.

    1.  ( L, L, R). The best response to this is  (u, d),   8   8 Since  π R(3, R, u) = 0 <  π R(3, R, d) =1 and 0.5 · π R(1, L, u) +  0.5 ·π R(2, L, u) = 1 > 0.5 · π R(1, L, d) + 0.5 ·π R(1, L, d) = 0

    • For type  1,  π S(1, L, u) = 1 > π S(1, R, d) = 0 i.e. type  1  will playL.

    • For type  2,  π S(2, L, u) = 2 > π S(2, R, d) = 1 i.e. type  2  will playL.

    • For type  3,  π S(3, L, u) = 1 < π S(3, R, d) = 2 i.e. type  3  will playR.

    Thus, this is a viable equilibrium.

    2.  ( L, R, L). The best response to this is  (u, u),  9   9 since  π R (2, R, u) =  1  >  π R(2, R, d) =0 and 0.5 · π R(1, L, u) +  0.5 ·π R(3, L, u) = 1 > 0.5 · π R(1, L, d) + 0.5 ·π R(3, L, d) = 0

    • For type  1,  π S(1, L, u) = 1 > π S(1, R, u) = 0 i.e. type  1  will playL.

    • For type  2,  π S(2, L, u) = 2 > π S(2, R, u) = 1 i.e. type  2  will playL, instead of  R .

    • For type  3,  π S(3, L, u) = 1 > π S(3, R, u) = 0 i.e. type  3  will playL.

    Thus, this is not a viable equilibrium.

    3.  ( R, L, L). The best response to this is  (u, u),  10   10 since  π R (1, R, u) = 1 < π R (1, R, d) =0 and 0.5 · π R(2, L, u) +  0.5 ·π R(3, L, u) = 1 > 0.5 · π R(2, L, d) + 0.5 ·π R(3, L, d) = 0

    • For type  1,  π S(1, L, u) = 1 > π S(1, R, u) = 0 i.e. type  1  will playL, instead of  R .

    • For type  2,  π S(2, L, u) = 2 > π S(2, R, u) = 1 i.e. type  2  will playL.

    • For type  3,  π S(3, L, u) = 1 > π S(3, R, u) = 0 i.e. type  3  will play

    L.Thus, this is a not viable equilibrium.


4. (L, R, R). The best responses to this are (u, u) and (u, d) (since $\pi_R(1, L, u) = 1 > \pi_R(1, L, d) = 0$ and $0.5 \cdot \pi_R(2, R, u) + 0.5 \cdot \pi_R(3, R, u) = 0.5 = 0.5 \cdot \pi_R(2, R, d) + 0.5 \cdot \pi_R(3, R, d)$).

Let's test (u, u):

• For type 1, $\pi_S(1, L, u) = 1 > \pi_S(1, R, u) = 0$, i.e. type 1 will play L.
• For type 2, $\pi_S(2, L, u) = 2 > \pi_S(2, R, u) = 1$, i.e. type 2 will play L instead of R.
• For type 3, $\pi_S(3, L, u) = 1 > \pi_S(3, R, u) = 0$, i.e. type 3 will play L instead of R.

Thus, this is not a viable equilibrium.

Let's test (u, d):

• For type 1, $\pi_S(1, L, u) = 1 > \pi_S(1, R, d) = 0$, i.e. type 1 will play L.
• For type 2, $\pi_S(2, L, u) = 2 > \pi_S(2, R, d) = 1$, i.e. type 2 will play L instead of R.
• For type 3, $\pi_S(3, L, u) = 1 < \pi_S(3, R, d) = 2$, i.e. type 3 will play R.

Thus, this is not a viable equilibrium.

5. (R, L, R). The best response to this is either (u, u) or (u, d) (since $\pi_R(2, L, u) = 1 > \pi_R(2, L, d) = 0$ and $0.5 \cdot \pi_R(1, R, u) + 0.5 \cdot \pi_R(3, R, u) = 0.5 = 0.5 \cdot \pi_R(1, R, d) + 0.5 \cdot \pi_R(3, R, d)$).

Let's test (u, u):

• For type 1, $\pi_S(1, L, u) = 1 > \pi_S(1, R, u) = 0$, i.e. type 1 will play L instead of R.
• For type 2, $\pi_S(2, L, u) = 2 > \pi_S(2, R, u) = 1$, i.e. type 2 will play L.
• For type 3, $\pi_S(3, L, u) = 1 > \pi_S(3, R, u) = 0$, i.e. type 3 will play L instead of R.

Thus, this is not a viable equilibrium.

Let's test (u, d):

• For type 1, $\pi_S(1, L, u) = 1 > \pi_S(1, R, d) = 0$, i.e. type 1 will play L instead of R.
• For type 2, $\pi_S(2, L, u) = 2 > \pi_S(2, R, d) = 0$, i.e. type 2 will play L.
• For type 3, $\pi_S(3, L, u) = 1 = \pi_S(3, R, d) = 1$, i.e. type 3 can play R.

Thus, this is not a viable equilibrium.

6. (R, R, L). The best response to this is (u, u) (since $\pi_R(3, L, u) = 1 > \pi_R(3, L, d) = 0$ and $0.5 \cdot \pi_R(1, R, u) + 0.5 \cdot \pi_R(2, R, u) = 0.5 > 0.5 \cdot \pi_R(1, R, d) + 0.5 \cdot \pi_R(2, R, d) = 0$).

• For type 1, $\pi_S(1, L, u) = 1 > \pi_S(1, R, u) = 0$, i.e. type 1 will play L instead of R.
• For type 2, $\pi_S(2, L, u) = 2 > \pi_S(2, R, u) = 1$, i.e. type 2 will play L instead of R.
• For type 3, $\pi_S(3, L, u) = 1 > \pi_S(3, R, u) = 0$, i.e. type 3 will play L.

Thus, this is not a viable equilibrium.

The only other perfect Bayesian Equilibrium is

$$\left[(L, L, R),\ (u, d),\ p,\ q_0 = \tfrac{1}{3},\ q_1 = \tfrac{1}{3}\right]$$

Answer 4.6 Type 2 will always play R, since $\pi_S(2, R, a) > \pi_S(2, R, u)$ and $\pi_S(2, R, a) > \pi_S(2, R, d)$. Thus, if the Receiver gets the message L, he knows that it can only be type 1. In such a case, the Receiver plays u (in fact, $\pi_R(x, L, u) > \pi_R(x, L, d)$ for both types, so the Receiver will always play u in response to L), creating


a payoff of (2, 1). This gives type 1 a higher payoff than if he had played R, which would have given him a payoff of 1. Thus, the perfect Bayesian Equilibrium is

$$[(L, R), (u, a), p = 1, q = 0].$$