
JOURNAL OF ECONOMIC THEORY 58, 105-109 (1992)

Equilibrium of Repeated Games with Cost of Implementation*

ALEJANDRO NEME AND LUIS QUINTAS

IMASL, CONICET-Universidad Nacional de San Luis, 5700 San Luis, Argentina; and Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona, Spain

Received July 6, 1990; revised September 19, 1991

We investigate the nature of the payoff discontinuity observed by Abreu and Rubinstein [Econometrica 56 (1988), 1259-1281]. We show that if we relax the assumption about the use of finite automata and refine the complexity measure then a "full" folk theorem holds. Journal of Economic Literature Classification Number: C73. © 1992 Academic Press, Inc.

1. INTRODUCTION

In the standard approach to the study of Nash equilibrium in repeated games it is assumed that strategies are implemented at no cost, and the classical folk theorem holds. Instead, a different formulation was given by Rubinstein [5] and Abreu and Rubinstein [1], where the players are assumed to care both about their payoffs and about the complexity of the strategies they use. If the players can only use strategies implemented by finite automata, then a substantial reduction of the outcomes supported by Nash equilibrium was observed in a two-person repeated game when complexity costs were incorporated into the model (see Abreu and Rubinstein [1] for examples of how the folk theorem looks in some classical games).

The strategies supporting equilibrium consisted of automata where all the states were used on the equilibrium path (including the "punishment states").

* This work was partially done while the authors were visiting the Department of Managerial Economics and Decision Sciences, Kellogg Graduate School of Management, Northwestern University, Evanston, IL 60208. The authors gratefully acknowledge support from CONICET, Republica Argentina, and Grant 33/86 from TWAS, the Third World Academy of Sciences. We thank E. Kalai, I. Gilboa, and an anonymous referee for helpful comments. All the errors are our own.



In the present article we investigate the nature of the severe discontinuity in the set of equilibrium outcomes observed in the Abreu-Rubinstein model when the complexity cost goes to zero. If we do not restrict to strategies implemented by finite automata, then a full folk theorem holds for the case of an infinitesimal implementation cost (i.e., it coincides with the standard folk theorem without cost).

This result relies on the idea that the players have to count the game's period in order to synchronize their actions. They hold one state as a punishment threat which is never used at equilibrium. The usual complexity measure assigns infinite complexity to the above-described strategy and also to that strategy without the punishment state, so that there is no gain by omitting it. This certainly sounds counterintuitive.

When strategies may have infinitely many states we need a refinement of the complexity measure under which a strategy is preferred once the punishment state that is never used on the equilibrium path is removed. Even under this refined complexity criterion we can still derive a full folk theorem.

2. THE MODEL AND RESULT

Let G = (A, u) be an n-person game in normal form. A = A_1 × ⋯ × A_n denotes the set of action combinations of the n players, where each A_i is a finite set. u = (u_1, …, u_n) is a vector of utility functions; for each i = 1, …, n, u_i: A → ℝ is a real-valued function. Let

U = {u(a) : a ∈ A},

U* = convex hull of U,   and   V = {x ∈ U* : x_i > v_i for all i},

where v_i is the minimax payoff for player i. Without loss of generality, we assume that v_i = 0 for all i.
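
To fix ideas, the following sketch (ours, not part of the original text) computes pure-action minimax payoffs for a hypothetical 2 × 2 game; the payoff matrices and the restriction to pure actions are illustrative assumptions only.

```python
# A minimal sketch, not from the paper: pure-action minimax payoffs v_i
# in a finite two-player game; the 2x2 payoffs are hypothetical.

# u[i][a1][a2] = payoff to player i when player 1 plays a1 and player 2 plays a2
u = [
    [[3, 0], [5, 1]],   # player 1's payoffs
    [[3, 5], [0, 1]],   # player 2's payoffs
]
A = [range(2), range(2)]  # action sets A_1, A_2 (indices 0 and 1)

def payoff(i, a1, a2):
    return u[i][a1][a2]

def minimax(i):
    """v_i: the opponent picks the action minimizing player i's best-response payoff."""
    j = 1 - i
    worst = float("inf")
    for aj in A[j]:
        best = max(payoff(i, ai, aj) if i == 0 else payoff(i, aj, ai)
                   for ai in A[i])
        worst = min(worst, best)
    return worst

v = [minimax(0), minimax(1)]
print(v)   # [1, 1] here; the paper then normalizes so that v_i = 0
```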

The set of histories of length 0 is a singleton set denoted by H^0; its single element will be denoted by e. Let H^m = A × ⋯ × A (m times) be the set of histories of length m, and let H = ⋃_{m=0}^∞ H^m be the set of all histories. For each player i = 1, …, n, a strategy f_i is a function f_i: H → A_i. Let F_i be the set of all strategies for player i and F = F_1 × ⋯ × F_n.

Given a strategy f_i ∈ F_i and a history h ∈ H, we denote by f_i|h the strategy defined by f_i|h(h') = f_i(h·h') for any h' ∈ H, where h·h' is the concatenation of h and h', i.e., the history h followed by the history h'.

We denote by comp f_i the complexity of a strategy f_i: it is the cardinality of the set F_i(f_i) = {f_i|h : h ∈ H}. We emphasize that comp f_i equals the number of states of the smallest automaton implementing f_i (see Kalai and Stanford [3]).
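
As an illustration of this complexity measure (our sketch, with assumed action labels and an assumed strategy), the following code approximates comp f_i for the grim-trigger strategy by counting continuation strategies f_i|h, distinguishing them by their values on all histories up to a small test depth; it recovers the familiar two-state answer.

```python
# A minimal sketch, assuming two actions 'C', 'D' and grim trigger for player 1.
# It approximates comp(f_i) = #{f_i|h : h in H} by fingerprinting each
# continuation strategy on all histories up to a bounded test depth.
from itertools import product

ACTIONS = ('C', 'D')

def grim(h):
    """Grim trigger: cooperate unless the opponent has ever played 'D'.
    A history h is a tuple of (own_action, opponent_action) pairs."""
    return 'D' if any(opp == 'D' for _, opp in h) else 'C'

def histories(max_len):
    """All histories of length <= max_len."""
    hs = [()]
    for m in range(1, max_len + 1):
        hs.extend(product(product(ACTIONS, ACTIONS), repeat=m))
    return hs

def continuation(f, h):
    """The continuation strategy f|h, defined by (f|h)(h') = f(h . h')."""
    return lambda h2: f(h + h2)

def fingerprint(g, depth=3):
    return tuple(g(h) for h in histories(depth))

distinct = {fingerprint(continuation(grim, h)) for h in histories(3)}
print(len(distinct))   # 2: a "cooperative" and a "punishment" continuation
```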


If we do not assume that the players use only finite complexity strategies, then by forcing them to count the game's period in order to synchronize their actions we can obtain a full folk theorem using infinite complexity strategies, as follows. Choose the vector of payoffs to be supported. Construct an equilibrium path generating these payoffs which requires infinitely many states. If, for instance, the payoffs can be generated by a finite cycle of actions, say P*, let the equilibrium path be P*, a, P*, P*, a, P*, P*, P*, a, …, where a differs in every component from the first action tuple in P*. The actions a have to be included in the path in order to prevent one player from using his opponent's complexity to generate complex paths (for instance, by just copying his opponent's previous action) without himself having an infinite complexity strategy. This path generates the appropriate payoffs and requires each player to use infinitely many states. By adding a punishment state we build trigger strategies. They clearly define a Nash equilibrium (with or without complexity cost considerations).
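
The block structure of such a path can be sketched as follows (our illustration; the cycle P* and the separating action a below are placeholders, only the block structure matters):

```python
# A minimal sketch of the path P*, a, P*, P*, a, P*, P*, P*, a, ...
from itertools import islice

def sync_path(p_star, a):
    """Yield the action profiles of the path: block k is P* repeated k times,
    followed by the separating profile a."""
    k = 1
    while True:
        for _ in range(k):
            yield from p_star
        yield a
        k += 1

P_STAR = [('C', 'C')]      # hypothetical one-action cycle generating the target payoffs
A_SEP = ('D', 'D')         # a: differs from the first profile of P* in every component

print(list(islice(sync_path(P_STAR, A_SEP), 12)))
# Any strategy that follows this path must keep track of how many copies of P*
# remain before the next separator, hence has infinitely many distinct
# continuations (infinite complexity in the sense defined above).
```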

Note that if a player switches to any finite complexity strategy, he is minimaxed forever. Hence any less complex strategy yields strictly lower payoffs.

The above-described strategies without the punishment states look intuitively simpler. We address this counterintuitive fact by introducing the following criterion:

We denote comp_n(f_i) = #{f_i|h : h ∈ H^k, k ≤ n}.

DEFINITION. f_i is simpler than g_i (we denote f_i ≻ g_i) iff there exists n̄ ∈ ℕ such that comp_n(f_i) < comp_n(g_i) for all n > n̄.

It is immediately obvious that if both f_i and g_i are finite complexity strategies, then comp(f_i) < comp(g_i) iff f_i ≻ g_i.
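
The effect of the refined order can be illustrated with assumed complexity profiles (a hypothetical sketch, not a computation from the paper): a strategy carrying a never-used punishment state has comp_n larger by one at every horizon, and is therefore dominated in the order ≻ by the same strategy with that state removed, even though both have infinite comp.

```python
# A minimal sketch of the order; the comp_n profiles below are assumed for illustration.
def simpler(comp_f, comp_g, n_bar=10, horizon=10_000):
    """f is simpler than g iff comp_n(f) < comp_n(g) for all n > n_bar
    (checked only up to a finite horizon here)."""
    return all(comp_f(n) < comp_g(n) for n in range(n_bar + 1, horizon))

comp_with_punishment = lambda n: n + 1   # path-follower plus a never-used punishment state
comp_without_punishment = lambda n: n    # the same strategy with that state removed

print(simpler(comp_without_punishment, comp_with_punishment))   # True: dropping the unused state helps
print(simpler(comp_with_punishment, comp_without_punishment))   # False
```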

With this complexity criterion grim-trigger strategies are no longer Nash equilibria, because without the punishment state they become simpler. However, we can still derive a full folk theorem by building equilibrium strategies in which the punishment states are used on the equilibrium path.

THEOREM. For all x ∈ V there exists a strategy f which is a Nash equilibrium with the order ≻, and u(f) = x.

Proof

Case 1. Let x = Σ_{j=1}^k α_j x_j with x_j ∈ U, α_j > 0, and Σ_{j=1}^k α_j = 1. Suppose that α_j ∈ ℚ for all j. Then x = Σ_{j=1}^k r_j x_j / b, where r_j and b are non-negative integers.
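
For instance, with the illustrative weights α = (1/3, 2/3) one obtains r = (1, 2) and b = 3. A minimal sketch of this conversion (ours, assuming the weights are supplied as exact fractions):

```python
# A minimal sketch (hypothetical weights): converting rational convex weights
# alpha_j into integer repetition counts r_j and a common cycle length b.
from fractions import Fraction
from math import lcm

alpha = [Fraction(1, 3), Fraction(2, 3)]       # assumed rational weights, summing to 1
b = lcm(*(a.denominator for a in alpha))        # cycle length of P*
r = [int(a * b) for a in alpha]                 # action a_j is repeated r_j times in P*
print(b, r)                                     # 3 [1, 2]
```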

Let P* be the following path of actions:

P* = (a_1, …, a_1, a_2, …, a_2, …, a_k, …, a_k),

where each a_j is repeated r_j times and u(a_j) = x_j. Let γ be such that

(u_i(P*(1)) + ⋯ + u_i(P*(l)) + m) / (l + 1 + γ) < x_i   for every player i and every l,

where m = max(m_1, m_2) with m_i = max_{a ∈ A} u_i(a). We define an infinite sequence P of actions,

P = (s, …, s, P*, a, P*, P*, a, P*, P*, P*, a, …, a, P*, …, P*, a, …),

where s appears γ times at the start and the n-th block consists of P* repeated n times followed by a. Here s = (s_1, s_2) are minimax strategies and a differs in every component from a_1. The action a is needed in order to prevent one player from using his opponent's complexity (by copying, for instance) to generate a complex sequence of actions with a small complexity strategy. Without loss of generality we assume a = s.

We define a strategy f_i by the following rules:

(a) Play P until player -i deviates from P.

(b) Start again with P if player -i deviates from P.

The strategy f = (f_1, f_2) is a Nash equilibrium. Note that for any strategy g_i prescribing finitely many deviations from P, u(f_{-i}, g_i) = x. This is so because after finitely many stages the path P is played.
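
One possible formalization of rules (a) and (b) is sketched below (our code, under the assumed conventions that P(t) returns the profile scheduled for period t and that the restarted copy of P begins in the period after a deviation):

```python
# A minimal sketch (assumed conventions, not the authors' code): f_i follows
# the sequence P and restarts it from its beginning whenever the opponent's
# action differs from what P prescribed.
def make_f_i(P, i):
    """P(t) returns the action profile scheduled for period t; i is 0 or 1;
    a history is a tuple of (own_action, opponent_action) pairs."""
    def f_i(history):
        t = 0                            # current position in P
        for own, opp in history:
            if opp == P(t)[1 - i]:
                t += 1                   # opponent followed P: advance
            else:
                t = 0                    # opponent deviated: restart P next period
        return P(t)[i]                   # action prescribed for the current period
    return f_i
```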

If player i chooses a strategy f̂_i which always deviates from P in some period, we have that for some l

u_i(f̂_i, f_{-i}) < [qb x_i + u_i(P*(1)) + ⋯ + u_i(P*(l)) + m] / (γ + qb + l + 1)
               < [qb x_i + (l + 1 + γ) x_i] / (γ + qb + l + 1) = x_i,

where q is the number of complete repetitions of P* played before the deviation (other cases, with infinitely many deviations, follow from the previous one).
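
The displayed bound can be checked numerically (hypothetical numbers; the paper gives no numerical example):

```python
# A minimal numeric sketch (assumed values): the deviator earns 0 during the
# gamma minimax periods, the path payoff for q full cycles of P* plus l extra
# stages, and at most m in the deviation period; the average stays below x_i.
def deviation_average(x_i, b, gamma, q, l, partial_sum, m):
    return (q * b * x_i + partial_sum + m) / (gamma + q * b + l + 1)

x_i, b, m = 2.0, 4, 5.0     # target payoff, cycle length, max stage payoff (all assumed)
gamma = 10                   # chosen so that partial_sum + m < (l + 1 + gamma) * x_i
for q in (0, 1, 5, 50):
    for l in range(b):
        partial_sum = l * x_i            # P* assumed to pay exactly x_i each stage
        avg = deviation_average(x_i, b, gamma, q, l, partial_sum, m)
        assert avg < x_i, (q, l, avg)
print("average deviation payoff < x_i in every case checked")
```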

In order to complete the proof of Case 1, we only need to show that a strategy g_i such that u_i(g_i, f_{-i}) = x_i, with g_i simpler than f_i, does not exist.

If the above equality holds, then comp g_i cannot be finite, because g_i has to count the game's period in order to be able to synchronize the actions. If comp g_i is not finite, then comp_n(g_i) ≥ n while comp_n(f_i) = n for all n, so g_i is not simpler than f_i.

Case 2. Assume x cannot be obtained from a finite cycle of one-shot game actions.

The proof of this case is obtained as in the previous case by replacing the P*'s in P with the elements of a sequence of finite paths P*_1, P*_2, … whose payoffs converge to x. Q.E.D.


3. CONCLUDING REMARKS

The idea of making the players spend computational resources on synchronizing their actions (which gives rise to the folk theorem presented here) also underlies the papers by Neyman [4] and Zemel [6], where it was used to explain cooperative behavior in finitely repeated games.

The folk theorem obtained in the present article indicates that if monitoring is free then, even if punishment is costly, the Abreu-Rubinstein idea can be used to derive a full folk theorem.

REFERENCES

1. D. ABREU AND A. RUBINSTEIN, The structure of Nash equilibrium in repeated games with finite automata, Econometrica 56 (1988), 1259-1281.

2. E. KALAI, Artificial decisions and strategic complexity in repeated games, in "Essays in Game Theory," Academic Press, New York, 1991.

3. E. KALAI AND W. STANFORD, Finite rationality and interpersonal complexity in repeated games, Econometrica 56 (1988), 397-410.

4. A. NEYMAN, Bounded rationality justifies cooperation in the finitely repeated prisoner's dilemma game, Econ. Lett. 19 (1985), 227-229.

5. A. RUBINSTEIN, Finite automata play the repeated prisoner's dilemma, J. Econ. Theory 38 (1986), 83-96.

6. E. ZEMEL, Small talk and cooperation: A note on bounded rationality, J. Econ. Theory 49 (1989), 1-9.