The Advantage that Wasn’t: Preemption and the Reduction of
First-Mover Advantages under Stochastic Payoffs∗
Jacco J.J. Thijssen†
March 24, 2013
Abstract
This paper studies a general model in which two players compete in continuous time for a first-
mover advantage the value of which evolves randomly over time. It is shown that for a large class of
strongly Markovian stochastic processes Markov equilibria can be formulated in terms of stopping
sets. Equilibrium existence requires rent-equalization whenever players want to preempt each other.
In addition, equilibria can be of only three types: sequential, preemptive, or collusive. Preemption
erodes the value of the first-mover advantage to players. If the stochastic process exhibits positive
jumps, preemptive equilibria have two additional features in asymmetric games: (i) the expected
value of the first-mover advantage to the player with the higher one gets eroded even further, and (ii)
there are scenarios where this player is less likely to be the first mover than his opponent.
Keywords: Timing Games, Real Options, Preemption
JEL classification: C73, D43, D81
∗The author gratefully acknowledges helpful comments from Kuno Huisman, Peter Kort, Frank Riedel, Marco Scarsini, and
Magdalena Trojanowska, and from seminar participants at the University of York; City University, London; LUISS, Rome; the
2010 Workshop on Stochastic Games in Erice, Italy; the 2011 SAET conference in Faro, Portugal; and the 2012 Real Options
conference in London, UK.
†Department of Economics & Related Studies, University of York, Heslington, York YO10 5DD, UK. Email:
1 Introduction
In many competitive timing situations the first mover is expected to have an advantage: the firm that first
adopts a new technology, the first developer of a new real-estate opportunity, the first pharmaceutical
that develops a new drug, etc. Typically, such an advantage takes the form of an excess rent that the
first mover can obtain over the rent that is available to the second mover. The presence of a first-mover
advantage can lead to the phenomenon of preemption: an agent may move sooner, at a sub-optimal time,
just to beat his opponents to the additional rent to be extracted. This of course depletes some of that
additional rent. In fact, competition for the first-mover role may be so fierce as to completely erode the
first-mover advantage and lead to rent equalization between the first and second mover. This point has
been made in a deterministic context by, among others, Posner (1975) and Fudenberg and Tirole (1985).
This paper presents a continuous time model of a two player timing game with a first-mover advan-
tage, where payoffs are driven by a (strongly) Markovian stochastic process. Its main aim is to provide
a complete picture of equilibrium existence in a general stochastic set-up. At the heart of the strategic
conflict analysed in this paper lies a simple timing problem:players have to decide when to undertake a
certain action. These timing problems can be formulated asoptimal stopping problems. For a large class
of stochastic processes the solution to an optimal stoppingproblems takes the form of atrigger policy:
stop as soon as the underlying process reaches a certain trigger.
For this class of stochastic processes it will be shown that rent equalization is, in fact, a necessary
condition for equilibrium existence. In addition, it is shown that all equilibria are of one of three types.
First, there are sequential equilibria. These are equilibria where one of the players moves at the same
point in time at which she would have moved if she had (exogenously) been assigned to be the first
mover. Secondly, preemptive equilibria can arise. These are equilibria where one of the players moves
at a sub-optimal time in order to avoid being preempted. In fact, I show that sequential and preemptive
equilibria cannot occur in the same game. The strategies in these equilibria are not necessarily trigger
policies. Instead, the equilibrium stopping set of a player can be a disconnected set. Compared to
sequential equilibria, the expected payoff to the first mover in a preemptive equilibrium is lower. In
that sense, preemption reduces rents. Finally, in some games collusive equilibria exist. These are
equilibria where both players wait and stop simultaneously at a much later date than in either sequential
or preemptive equilibria.
The paper pays attention to the difference between games driven by spectrally negative processes
(i.e. processes without positive jumps) and those which exhibit positive jumps. Preemption always
destroys value, because players stop, in expectation, earlier than at the optimal time. This effect is called
the preemption effect. In addition to this effect, there can be a jump effect in the presence of positive
jumps. This happens in cases where one player, say Player 1, has a bigger first-mover advantage than
Player 2, but not big enough to guarantee that Player 2 will not try to preempt. In such a game Player 1
is guaranteed to move first, in equilibrium, if the stochastic process is spectrally negative. If, however,
there are positive jumps, then there is a positive probability event where Player 1 actually becomes
the second mover. In fact, if the path of the stochastic process jumps straight into the region where both
players wish to preempt, rent equalization forces Player 2 to stop with a higher probability than Player 1.
It is even possible (with positive probability) that both players stop simultaneously along the equilibrium
path. These possibilities further reduce the expected payoff to Player 1.
This illustrates a fundamental difference between games driven by spectrally negative processes and
games where positive jumps are possible. In the former, equilibrium play is completely determined ex
ante: it is known which player moves first (except in a symmetric case where each player is the first to
stop with probability 1/2). If there are positive jumps, then even though Player 1 may have a bigger
first-mover advantage, any configuration of realized stopping scenarios is possible. This has important
implications for, e.g., competition policy: judging whether a market is competitive based on a study of
observed investment behaviour becomes very complicated.
The paper fits in a growing literature on game-theoretic real options models.1 This literature tends
to focus on specific models with symmetric players (Pawlina and Kort, 2006 is an exception) where
payoffs are driven by geometric Brownian motion in the context of two firms competing in a duopolistic
market. In fact, most of these papers are special cases of the model presented here. They focus, by and
large, on a stochastic interpretation of Fudenberg and Tirole (1985), which introduces a game theoretic
model that can be used to solve a coordination problem that can arise in continuous time games, because
in continuous time it is impossible to distinguish between simultaneous action and immediate reaction.
See also Thijssen et al. (2012). In this paper the coordination problem is solved exogenously, although
one could easily adapt the model to include a coordination device along the lines of Fudenberg and
Tirole (1985). The paper is also related to Huang and Li (1990) who prove existence of Nash equilibria
in timing games that are similar to those studied here. The main difference is that their paper uses a
martingale approach and focusses on existence, whereas in this paper the focus is on Markovian games
and Markovian equilibria. In addition, the existence proofs presented in this paper are constructive in
nature.
In the finance literature, following Leahy (1993) and Grenadier (2002), the focus of oligopolistic
timing of investment has been mainly in the tradition of incremental investment, rather than lumpy in-
vestment. As is shown by Back and Paulson (2009), such equilibria are open-loop or, in the terminology
of this paper, non-Markovian (see also Steg (2012)). In contrast, this paper investigates the effects of
lumpy investments that have a non-negligible effect on profits in the industry. In addition, the equilibria
are Markovian, i.e. self-enforcing along any sample path.
The paper is organized as follows. The basic ingredients of the model are described in Section 2.
Section 3 introduces the main strategy and equilibrium concepts, as well as the equilibrium existence
results. Section 4 discusses implications of the theory forpredictions on preemptive behavior under dif-
ferent spectrally negative stochastic processes and illustrates the preemption effect on players’ expected
payoffs. In Section 5 the analysis is extended to processes with positive jumps in order to illustrate the
jump effect. Some concluding remarks are made in Section 6.
1See, for example, Weeds (2002), Boyer et al. (2004), Pawlina and Kort (2006, 2010), Bouis et al. (2009), Roques and Savva
(2009), and Mason and Weeds (2010). Chevalier-Roignant and Trigeorgis (2011) give an overview of the recent literature.
2 The Basic Set-Up
Consider a situation where two players i ∈ {1, 2} have to decide on a stopping time over an infinite time
horizon. Their payoffs are influenced by a state variable which takes values in E = (a, b) ⊆ R. Let
Ē denote the closure of E (in the standard topology on R). For each y ∈ E, the state variable evolves
according to a cadlag (right-continuous with left limits) semimartingale (Y_t)_{t≥0} on a probability space
(Ω, F, P_y), endowed with a filtration (F_t)_{t≥0}, with Y_0 = y, P_y-a.s. The process (Y_t)_{t≥0} is assumed
to be strongly Markovian with respect to (F_t)_{t≥0}, both players discount payoffs at the constant and
common rate r > 0, and upon stopping Player i incurs a sunk cost I_i > 0. It is assumed throughout that
for all y ∈ E,
\[
E_y\Big[\int_0^\infty e^{-rt}|Y_t|\,dt\Big] < \infty.
\]
The payoffs accruing to the players depend on their "stopping status" k ∈ {0, 1}, which indicates
whether a player has stopped (k = 1) or not (k = 0). Let π^i_{kℓ}(y), y ∈ E, denote the instantaneous
payoff to Player i if her stopping status is k, the stopping status of Player j, j ≠ i, is ℓ, and the state
variable has value y. Assuming that π^i_{kℓ}(·) is continuous on Ē, the expected present value of the stream
(π^i_{kℓ}(Y_t))_{t≥0} under P_y is
\[
D^i_{kℓ}(y) = E_y\Big[\int_0^\infty e^{-rt}\pi^i_{kℓ}(Y_t)\,dt\Big].
\]
We make the following assumption regarding these present value functions.
Assumption 1. It holds that
1. D^i_{kℓ}(·), k, ℓ = 0, 1, is continuous on Ē;
2. D^i_{1ℓ}(·) − D^i_{0ℓ}(·), ℓ = 0, 1, is increasing;
3. D^i_{10}(·) − D^i_{00}(·) > D^i_{11}(·) − D^i_{01}(·) ≥ 0.
Without the flexibility of waiting, players would stop as soon as the net present value of stopping is
non-negative. Let Y^i_{NPV,ℓ}, ℓ = 0, 1, be the smallest value to solve D^i_{1ℓ}(Y^i_{NPV,ℓ}) − D^i_{0ℓ}(Y^i_{NPV,ℓ}) = I_i.
We use the convention that Y^i_{NPV,ℓ} = b if D^i_{1ℓ}(y) − D^i_{0ℓ}(y) < I_i for all y ∈ E. From Assumption 1 it
follows that there is a first-mover advantage in the sense that Y^i_{NPV,0} < Y^i_{NPV,1}.
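The ordering Y^i_{NPV,0} < Y^i_{NPV,1} follows from Assumption 1 alone (assuming Y^i_{NPV,1} < b; otherwise the convention makes the inequality hold trivially); a short sketch:

```latex
% By condition 3 of Assumption 1, for every y \in E,
%   D^i_{10}(y) - D^i_{00}(y) \;>\; D^i_{11}(y) - D^i_{01}(y).
% Evaluating at the second-mover trigger Y^i_{NPV,1} gives
D^i_{10}\big(Y^i_{NPV,1}\big) - D^i_{00}\big(Y^i_{NPV,1}\big)
  \;>\; D^i_{11}\big(Y^i_{NPV,1}\big) - D^i_{01}\big(Y^i_{NPV,1}\big) \;=\; I_i .
% Since D^i_{10}(\cdot) - D^i_{00}(\cdot) is increasing (condition 2), its
% smallest root of D^i_{10} - D^i_{00} = I_i must lie strictly below, i.e.
Y^i_{NPV,0} \;<\; Y^i_{NPV,1}.
```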
The timing game described in this paper is a game between two players to determine who is the first
and who is the second mover. These roles will from here on be referred to as the leader and follower
roles, respectively. The players, therefore, care about the present values D^i_{kℓ}(·) only insofar as they are
the building blocks of the expected payoffs of each of these two roles. In order to derive these, first note
that if Player i stops at a time where the state variable equals y ∈ E and the other player has already
stopped, then her payoff will be
\[
M^i(y) = D^i_{11}(y) − I_i. \qquad (1)
\]
Before analyzing the game where players vie for the leader role, we first study a situation where
players’ roles are pre-determined. Assume that Playeri is the leader and Playerj, hence, is the follower.
Each player needs to choose a threshold at which (s)he moves. We want to allow for the possibilities
that Player i stops immediately, no matter what the value of the state variable, and that Player j never
stops.
The follower reacts to the timing decision of the leader. So, the follower's threshold is independent
of that of the leader. The value of the follower role is the solution to the optimal stopping problem
\[
F^j(y) = D^j_{01}(y) + \sup_\tau E_y\Big[ e^{-r\tau}\big( D^j_{11}(Y_\tau) − D^j_{01}(Y_\tau) − I_j \big) \Big]. \qquad (2)
\]
Since D^j_{11}(·) − D^j_{01}(·) − I_j is increasing, Proposition 4 in Appendix A applies and the optimal stopping
time takes the form of a first hitting time τ(Y^*) := inf{t ≥ 0 | Y_t ≥ Y^*}, for some Y^* ∈ Ē, where
τ(a) = 0 and τ(b) = ∞, P_y-a.s., for all y ∈ E.2
In order to solve the optimal stopping problem we introduce the infinitesimal generator
\[
\mathcal{L}_Y g(y) = \lim_{t \downarrow 0} \frac{E_y[g(Y_t)] − g(y)}{t},
\]
and the following assumption.

Assumption 2. There exists an increasing and convex function φ ∈ C²(E) ∩ C¹(Ē), such that \mathcal{L}_Y φ = rφ and φ(a) = 0. Denote ν_y(Y^*) = φ(y)/φ(Y^*).
This assumption allows for a large class of stochastic processes. For example, any diffusion allows
for an increasing solution to \mathcal{L}_Y φ = rφ (cf. Borodin and Salminen, 1996). In addition, for time-homogeneous diffusions
\[
dY_t = \mu(Y_t)\,dt + \sigma(Y_t)\,dB_t,
\]
for which \mu(y) − ry is non-increasing, this increasing solution is also convex (cf. Alvarez, 2003). So, the
model applies to many diffusions and, as will be shown in Sections 4 and 5, to many jump-diffusions
as well.
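For geometric Brownian motion (the process used in Example 1 below) the increasing solution of \mathcal{L}_Y φ = rφ is φ(y) = y^{β_1}, with β_1 > 1 the positive root of the quadratic ½σ²β(β − 1) + μβ − r = 0. A quick numerical check of this claim; the parameter values r = 0.05, μ = 0.02, σ = 0.2 are my own illustrative choices, not from the text:

```python
import math

r, mu, sigma = 0.05, 0.02, 0.2  # illustrative parameters (assumed)

# Positive root of (1/2) sigma^2 b (b - 1) + mu b - r = 0.
a = 0.5 - mu / sigma**2
beta1 = a + math.sqrt(a**2 + 2 * r / sigma**2)

def phi(y):            # candidate solution phi(y) = y^beta1
    return y**beta1

def generator(y):      # L_Y phi(y) = (1/2) sigma^2 y^2 phi'' + mu y phi'
    d1 = beta1 * y**(beta1 - 1)
    d2 = beta1 * (beta1 - 1) * y**(beta1 - 2)
    return 0.5 * sigma**2 * y**2 * d2 + mu * y * d1

# phi solves L_Y phi = r phi at every state y, phi is increasing and convex,
# and phi(0+) = 0 since beta1 > 1, as Assumption 2 requires.
for y in (0.5, 1.0, 2.0):
    assert abs(generator(y) - r * phi(y)) < 1e-10
print(beta1 > 1)  # True
```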
To write the optimal stopping problem in a more amenable form, observe that Proposition 5 in
Appendix B can be applied. This means that the follower problem can be written as3
\[
F^j(y) = D^j_{01}(y) + \max_{Y^j \in \bar E} \nu_y(Y^j)\big[ D^j_{11}(Y^j) − D^j_{01}(Y^j) − I_j \big]
= \begin{cases}
D^j_{01}(y) + \nu_y(Y^j_F)\big[ D^j_{11}(Y^j_F) − D^j_{01}(Y^j_F) − I_j \big] & \text{if } y < Y^j_F,\\[2pt]
D^j_{11}(y) − I_j & \text{if } y ≥ Y^j_F,
\end{cases} \qquad (3)
\]
for some trigger Y^j_F ∈ Ē.
The leader will take into account that the follower will move at the stopping time τ(Y^j_F). Therefore,
if the leader moves at y ∈ E, her expected payoff is
\[
L^i(y) = D^i_{10}(y) + \nu_y(Y^j_F)\big[ D^i_{11}(Y^j_F) − D^i_{10}(Y^j_F) \big] − I_i, \qquad (4)
\]
2It is understood throughout the paper that stopping times are under the measure P_y. In order to keep the notation as simple
as possible, the explicit dependence of stopping times on y is suppressed.
3Here we use the convention τ(b) = ∞, P_y-a.s.
which is a continuous function. So, the leader's optimal stopping problem becomes
\[
L^i(y) := D^i_{00}(y) + \sup_\tau E_y\Big[ e^{-r\tau}\big( L^i(Y_\tau) − D^i_{00}(Y_\tau) \big) \Big]
= D^i_{00}(y) + \max_{Y^i \in \bar E} \nu_y(Y^i)\big[ L^i(Y^i) − D^i_{00}(Y^i) \big], \qquad (5)
\]
which, again, has a solution of the trigger type. So, the leader optimally stops at time τ(Y^i_L) for some
Y^i_L ∈ Ē. Because of the first-mover advantage it holds that Y^i_L < Y^i_F if Y^i_L < b. It is possible that
Y^i_L = Y^i_F = b, in which case the problem loses its economic content. It will, therefore, implicitly be
assumed throughout that Y^i_L < b.
Example 1 (Investment in a duopoly). Consider a duopoly with two firms, which are currently producing
quantities Q_0 at a cost of c_0 per period. Both firms have an option to upgrade production to a quantity
Q_1 > Q_0 at a per-period cost of c_1. The sunk costs of this upgrade are I > 0. Both cases c_1 > c_0 and
c_1 < c_0 are allowed, as long as I > (c_0 − c_1)/r. The case c_1 < c_0 describes a situation where, for
example, the expansion is made possible by technological improvements that allow for higher
production at lower costs. Total revenues at time t to firm i if firm i produces Q_k and firm j produces
Q_ℓ equal Y_t R_{kℓ}, where it is assumed that
\[
R_{10} > R_{11} > R_{00} ≥ R_{01}, \quad\text{and}\quad R_{10} − R_{00} > R_{11} − R_{01}.
\]
For each y ∈ E, denote µ(y) = E_y[∫_0^∞ e^{-rt} Y_t\,dt] (not to be confused with the drift parameter µ used
below). Assume that µ(·) is an increasing function. The present value function D_{kℓ} is then given by
\[
D_{kℓ}(y) = E_y\Big[\int_0^\infty e^{-rt}(Y_t R_{kℓ} − c_k)\,dt\Big] = R_{kℓ}\,\mu(y) − \frac{c_k}{r},
\]
which satisfies Assumption 1. Note that Y_{NPV,ℓ} is such that
\[
\mu(Y_{NPV,ℓ}) = \Big(I + \frac{c_1 − c_0}{r}\Big)\frac{1}{R_{1ℓ} − R_{0ℓ}},
\]
which implies that Y_{NPV,0} < Y_{NPV,1}, so that there is a first-mover advantage. The follower value can
now be derived:
\[
\begin{aligned}
F(y) &= \sup_\tau E_y\Big[ \int_0^\tau e^{-rt}(R_{01}Y_t − c_0)\,dt + \int_\tau^\infty e^{-rt}(R_{11}Y_t − c_1)\,dt − e^{-r\tau} I \Big]\\
&= \sup_\tau E_y\Big[ \int_0^\infty e^{-rt}(R_{01}Y_t − c_0)\,dt + \int_\tau^\infty e^{-rt}\big((R_{11} − R_{01})Y_t − (c_1 − c_0)\big)\,dt − e^{-r\tau} I \Big]\\
&= R_{01}\,\mu(y) − \frac{c_0}{r} + \sup_\tau E_y\Big[ e^{-r\tau}\Big( (R_{11} − R_{01})\,\mu(Y_\tau) − \frac{c_1 − c_0}{r} − I \Big) \Big]\\
&= D_{01}(y) + \sup_{Y^*} \nu_y(Y^*)\big[ D_{11}(Y^*) − D_{01}(Y^*) − I \big].
\end{aligned}
\]
If (Y_t)_{t≥0} follows, for example, a geometric Brownian motion
\[
dY_t = \mu Y_t\,dt + \sigma Y_t\,dB_t,
\]
where r > µ and (B_t)_{t≥0} is a standard Brownian motion, then µ(y) = y/(r − µ) and
\[
\nu_y(Y^*) = \Big(\frac{y}{Y^*}\Big)^{\beta_1},
\]
where β_1 > 1 solves the quadratic equation \frac{1}{2}\sigma^2\beta_1(\beta_1 − 1) + \mu\beta_1 − r = 0. Solving the follower's
optimal stopping problem gives the investment threshold
\[
Y_F = \frac{\beta_1}{\beta_1 − 1}\,\frac{r − \mu}{R_{11} − R_{01}}\Big(I + \frac{c_1 − c_0}{r}\Big).
\]
In a similar way it can be shown that the leader threshold is
\[
Y_L = \frac{\beta_1}{\beta_1 − 1}\,\frac{r − \mu}{R_{10} − R_{00}}\Big(I + \frac{c_1 − c_0}{r}\Big) < Y_F.
\]
⊳
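As a numerical illustration of these closed-form thresholds; all parameter values below (r, μ, σ, I and the revenue coefficients R_{kℓ}) are my own illustrative assumptions, not taken from the paper, with c_1 = c_0 = 0:

```python
import math

# Illustrative parameters (assumed, not from the text)
r, mu, sigma, I = 0.05, 0.02, 0.2, 1.0
c1, c0 = 0.0, 0.0
R10, R11, R00, R01 = 2.0, 1.5, 1.0, 1.0   # R10 > R11 > R00 >= R01

# beta1 > 1 solves (1/2) sigma^2 b (b - 1) + mu b - r = 0
a = 0.5 - mu / sigma**2
beta1 = a + math.sqrt(a**2 + 2 * r / sigma**2)

mark_up = beta1 / (beta1 - 1)             # option-value mark-up over the NPV rule
K = I + (c1 - c0) / r                     # effective sunk cost

YF = mark_up * (r - mu) / (R11 - R01) * K  # follower threshold
YL = mark_up * (r - mu) / (R10 - R00) * K  # leader threshold

# The first-mover advantage R10 - R00 > R11 - R01 puts the leader
# threshold strictly below the follower threshold.
assert YL < YF
print(round(beta1, 4), round(YL, 4), round(YF, 4))
```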
3 Equilibrium Analysis
In games with a first-mover advantage players may try to preempt each other. Such preemptive situations
arise whenever the value of becoming the leader exceeds the value of being the follower, while it is not
optimal for either player to stop. Since the purpose of the paper is to investigate this competition, and
since any reasonable concept of equilibrium must have the follower stopping at Y^i_F, it will be implicitly
assumed in the remainder that the follower's strategy is to move at that threshold.
At each point in time a player has to compare the current expected payoffs of the leader and follower
roles with the expected payoffs of these roles at some later date. Therefore, we denote the present values
of becoming the leader or follower at the first hitting time of some trigger Y^*, under P_y, by
\[
L^i_y(Y^*) := D^i_{00}(y) + \nu_y(Y^*)\big[ L^i(Y^*) − D^i_{00}(Y^*) \big]
\]
and
\[
F^i_y(Y^*) := D^i_{00}(y) + \nu_y(Y^*)\big[ F^i(Y^*) − D^i_{00}(Y^*) \big],
\]
respectively, for all y ≤ Y^* ∈ Ē. Note that L^i_y(Y^i_L) = L^i(y). It can easily be seen that L^i_y(Y^*) ≥
F^i_y(Y^*) iff L^i(Y^*) ≥ F^i(Y^*), for all y ∈ E. Since L^i(·) and F^i(·) are continuous, there exists
Y^i_P < Y^i_L such that L^i(Y^i_P) = F^i(Y^i_P). In fact, due to the monotonicity assumptions in Assumption 1,
Y^i_P is unique. This point is called Player i's preemption point and it is the lowest value of y at which
Player i would want to preempt Player j. Hence, the region in which Player i would wish to preempt
Player j is [Y^i_P, Y^i_F). Let the preemption region be defined as S_P := [Y^1_P ∨ Y^2_P, Y^1_F ∧ Y^2_F). We will focus
on models with S_P ≠ ∅. Obviously, Player i will only want to preempt Player j if there is a threat that
Player j might preempt him, i.e. when y ∈ S_P.
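In the setting of Example 1 the preemption point can be found numerically as the root of L(y) − F(y) = 0 below Y_L. A sketch for a symmetric game; all parameter values are my own illustrative assumptions (c_1 = c_0 = 0), and the bisection routine is my own, not part of the paper:

```python
import math

# Illustrative symmetric parameters (assumed)
r, mu, sigma, I = 0.05, 0.02, 0.2, 1.0
R10, R11, R00, R01 = 2.0, 1.5, 1.0, 1.0   # first-mover advantage
m = lambda y: y / (r - mu)                 # expected discounted value of (Y_t)

a = 0.5 - mu / sigma**2
beta1 = a + math.sqrt(a**2 + 2 * r / sigma**2)
YF = beta1 / (beta1 - 1) * (r - mu) / (R11 - R01) * I   # follower trigger
YL = beta1 / (beta1 - 1) * (r - mu) / (R10 - R00) * I   # leader trigger
nu = lambda y, Ys: (y / Ys) ** beta1       # stochastic discount factor

def L(y):   # leader value: stop now, follower stops at YF (cf. eq. (4))
    return R10 * m(y) + nu(y, YF) * (R11 - R10) * m(YF) - I

def F(y):   # follower value below YF (cf. eq. (3))
    return R01 * m(y) + nu(y, YF) * ((R11 - R01) * m(YF) - I)

# Bisection for YP in (0, YL): L - F is negative near 0, positive at YL.
lo, hi = 1e-9, YL
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if L(mid) - F(mid) < 0 else (lo, mid)
YP = 0.5 * (lo + hi)

# At the preemption point the leader and follower values coincide.
assert 0 < YP < YL and abs(L(YP) - F(YP)) < 1e-8
```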
Combining this with the results from the previous section, the payoff structure of the model can be
summarized as follows.

Lemma 1. Under Assumptions 1–2 it holds that for every player i ∈ {1, 2} there exist unique thresholds
1. Y^i_F ∈ Ē such that max_{Y^*} F^i_y(Y^*) = F^i(y), for all y ≥ Y^i_F;
2. Y^i_L < Y^i_F such that max_{Y^*} L^i_y(Y^*) = L^i(y), for all y ≥ Y^i_L;
3. Y^i_P < Y^i_L such that L^i(y) ≥ F^i(y), for all y ≥ Y^i_P.
A plot of typical value functions in an asymmetric case is given in Figure 1, where (Y_t)_{t≥0} follows
a geometric Brownian motion and present value functions are as in Example 1, with c_1 = c_0 = 0.
Figure 1: Value functions for leader, follower, and simultaneous stopping.
3.1 Strategies and Payoffs
The main difference between deterministic timing games – where time is the state variable – and games
where the state variable is a stochastic process is that in the latter case an agent's strategy, however
defined, cannot be forward looking. In its most general form a strategy for Player i will be a non-decreasing
process (X^i_t)_{t≥0} taking values in [0, 1], where X^i_t is the probability with which Player i
moves at or before time t. For our purposes it suffices to restrict attention to strategies that are driven by
stopping times. For any stopping time τ, the stopping strategy induced by τ is given by
\[
X^i_t(\tau) := \begin{cases} 0 & \text{if } t < \tau,\\ 1 & \text{if } t ≥ \tau. \end{cases}
\]
As already remarked, given the strong Markovian nature of (Y_t)_{t≥0}, the infinite time horizon, and the
monotonicity and concavity of payoff functions, all optimal stopping problems in Section 2 take the
form of trigger policies. Therefore, it stands to reason to focus on threshold strategies. These consist
of a single threshold Y^i ∈ Ē, with the convention that Player i moves at the first hitting time τ(Y^i). It
turns out that this is too restrictive, because it only allows for stopping sets of the form [Y^*, b). In order
to keep the Markovian flavour of the model, but allow for more general stopping sets, we take as the
strategy space the set of all Borel sets on E, denoted by B(E). We then use the convention that players
move at the first hitting time
\[
\tau(S^i) = \inf\{ t ≥ 0 \,|\, Y_t \in S^i \}, \quad \text{all } S^i \in B(E).
\]
This allows for disconnected stopping sets, but reflects the idea that players move when a certain threshold
is reached.
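The role of disconnected stopping sets can be illustrated on a discretized sample path. In the toy sketch below (the path and the set are invented for illustration), a positive jump carries the state over the lower component of the stopping set, so the process first enters the set only when it lands in the upper component — the mechanism behind the jump effect discussed in Section 5:

```python
# A disconnected stopping set S = [1.0, 1.5) U [2.0, inf): the kind of
# Borel stopping set B(E) allows but single-threshold strategies rule out.
def in_S(y):
    return 1.0 <= y < 1.5 or y >= 2.0

def first_hitting_index(path, stop):
    """Discrete analogue of tau(S) = inf{t >= 0 | Y_t in S}; None if never hit."""
    return next((t for t, y in enumerate(path) if stop(y)), None)

path = [0.5, 0.8, 1.6, 2.1, 1.2]   # invented path with a jump from 0.8 to 1.6

# The jump from 0.8 to 1.6 skips the component [1.0, 1.5) entirely,
# so the first hit occurs at index 3 (value 2.1), not at index 2.
print(first_hitting_index(path, in_S))                  # -> 3

# Under a simple threshold strategy [1.0, inf) the same path stops earlier.
print(first_hitting_index(path, lambda y: y >= 1.0))    # -> 2
```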
Let (S^1, S^2) ∈ B(E) × B(E) and define τ := τ(S^1) ∧ τ(S^2). Then the expected payoff to Player i
of the pair of strategies (S^i, S^j) is given by
\[
V^i_y(S^i, S^j) = D^i_{00}(y) + E_y\Big[ e^{-r\tau}\Big( 1_{[\tau(S^i) < \tau(S^j)]} L^i(Y_\tau) + 1_{[\tau(S^i) > \tau(S^j)]} F^i(Y_\tau)
+ 1_{[\tau(S^i) = \tau(S^j)]} W^i(Y_\tau) − D^i_{00}(Y_\tau) \Big) \Big], \qquad (6)
\]
for all y ∈ E. Here W^i(·) is a tie-breaking rule giving the expected payoff if both players move at the
same time. To allow for some generality, this function is assumed to be given by
\[
W^i(y) = p_i(y) L^i(y) + p_j(y) F^i(y) + p_3(y) M^i(y), \quad \text{all } y ∈ E,
\]
for some (p_1(y), p_2(y), p_3(y)) ≥ 0, with p_1(y) + p_2(y) + p_3(y) = 1. The most natural choice of tie-breaking
rule is to set p_3(y) = 1 for all y ∈ E: if both players move at the same time, they stop at
the same time and, hence, Player i gets the payoff M^i(y). It will be shown below, however, that for
y ∈ S_P this implies that no equilibrium exists. In fact, it will be shown that no equilibrium exists for any
tie-breaking rule that does not satisfy the rent-equalization property. This property states that for any
y ∈ S_P, the probabilities (p_1(y), p_2(y), p_3(y)) are such that W^i(y) = F^i(y), for all i. It is easy to see
that for each y ∈ S_P \ ∂S_P these probabilities are uniquely determined and that p_3(y) ∈ (0, 1), i.e. that
there is a positive probability that both players stop even though both players strictly prefer this situation
not to occur.4 Since the tie-breaking rule is only important in the preemption region we use the convention
that p_3(y) = 1 for all y ∈ E \ S_P.5
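At each y in the interior of S_P, rent equalization pins down (p_1, p_2, p_3) as the solution of a small linear system: W^1(y) = F^1(y), W^2(y) = F^2(y), and p_1 + p_2 + p_3 = 1. A sketch of that computation; the payoff values L = 10, F = 6, M = 2 for both players are my own invented numbers, chosen so that L > F > M as in the preemption region:

```python
# Solve  p1*L1 + p2*F1 + p3*M1 = F1
#        p2*L2 + p1*F2 + p3*M2 = F2
#        p1 + p2 + p3          = 1
# via Cramer's rule (pure Python, no dependencies).

def rent_equalization(L1, F1, M1, L2, F2, M2):
    # Rows of the 3x3 system A p = b with p = (p1, p2, p3).
    A = [[L1, F1, M1],
         [F2, L2, M2],
         [1.0, 1.0, 1.0]]
    b = [F1, F2, 1.0]
    det = lambda m: (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
                     - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
                     + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(A)
    def col(i):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][i] = b[r]
        return det(m) / d
    return col(0), col(1), col(2)

p1, p2, p3 = rent_equalization(10, 6, 2, 10, 6, 2)  # symmetric toy values
# Both players stop together with positive probability p3 -- the
# "coordination failure" of Fudenberg and Tirole (1985).
assert 0 < p3 < 1 and abs(p1 + p2 + p3 - 1) < 1e-12
print(round(p1, 4), round(p2, 4), round(p3, 4))     # -> 0.3333 0.3333 0.3333
```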
Note that, if Y^1_P = Y^2_P ≡ Y_P, rent equalization requires that p_3(Y_P) = 0, but any p_1(Y_P) ∈ [0, 1]
and p_2(Y_P) = 1 − p_1(Y_P) can be used. The following assumption pins down the most intuitively
appealing choice.

Assumption 3. If Y^1_P = Y^2_P ≡ Y_P, then p_1(Y_P) = p_2(Y_P) = 1/2.
Finally, if Y^i_P < Y^j_P, then, again, rent equalization requires that p_3(Y^j_P) = 0. This, in turn, means
that for Player i rent equalization is consistent with, for example, p_i(Y^j_P) = 0 and p_j(Y^j_P) = 1. This
is unsatisfactory, because at Y^j_P Player i strictly prefers to be the leader, whereas Player j is indifferent.
Therefore, the following assumption is made.
4Fudenberg and Tirole (1985) refer to this as a "coordination failure".
5It is impossible to guarantee that W^i(y) > F^i(y), i = 1, 2, for all y ∈ S_P. So, rent equalization is, in some sense, the
best outcome that can be attained in the preemption region.
Assumption 4. If Y^i_P < Y^j_P, then p_i(Y^j_P) = 1.

This assumption is made mainly for technical reasons and can be dispensed with if one is willing
to work with ε-equilibrium. This would increase the notational burden without adding much in the way of
economic insight.6
The tie-breaking rule leading to the payoffs W^i can be endogenized through an appropriate coordination
device. Fudenberg and Tirole (1985), for example, use a repeated "grab-the-dollar" game to
coordinate which player becomes the leader in case both players want to move simultaneously in the
preemption region.7 Another approach is used by Thijssen (2010), who uses correlated equilibrium to
implement a coordination device. In both cases Assumptions 3 and 4 are satisfied. A brief discussion of
the "grab-the-dollar" mechanism can be found in Appendix C.
A stopping game is now defined as a tuple Γ = ⟨B(E), (V^i_y)_{y∈E}⟩_{i∈{1,2}}. A stopping game Γ
is preemptive if S_P ≠ ∅. A Markov perfect equilibrium (MPE) of the stopping game Γ is a pair of
stopping sets (S^1, S^2) ∈ B(E) × B(E), such that V^i_y(S^i, S^j) ≥ V^i_y(\tilde S^i, S^j), for all \tilde S^i ∈ B(E), all
y ∈ E, and all i ∈ {1, 2}.

In the remainder of the paper the focus will be on three different types of MPE. An MPE (S^1, S^2)
is called preemptive if S^i ∩ (a, Y^i_L) ≠ ∅, for at least one i ∈ {1, 2}. That is, at least one player
would stop before it is optimal for her to do so. An MPE (S^1, S^2) is called a sequential equilibrium
if S^i ∩ (a, Y^i_L] = {Y^i_L}, and S^j ∩ (a, Y^j_F) = ∅. That is, one player waits until it is optimal for her to
stop, whereas the other player chooses, ex ante, to be the follower. Finally, an MPE (S^1, S^2) is called
collusive if S^1 = S^2, and S^i ∩ (a, Y^1_F ∨ Y^2_F) = ∅, i = 1, 2. In such equilibria no player stops in the
preemption region, not even at their optimal (leader) threshold.
3.2 Preemptive and Sequential Equilibria
From this point onwards it will be assumed (wlog) that Y^1_L ≤ Y^2_L. The existence of equilibria depends
crucially on the ordering of Y^1_L and Y^2_P, and on the tie-breaking rule W^i(·).
The following lemma shows that in any MPE, if one player's stopping set intersects the preemption
region, then so does the other player's.
Lemma 2. Let Γ be a preemptive stopping game satisfying Assumptions 1–2. Let Y_F = Y^1_F ∨ Y^2_F.
Suppose that (S^1, S^2) is a Markov perfect equilibrium. If S^j is such that S^j ∩ (a, Y_F] ≠ ∅, then it holds
that S^i ∩ (a, Y_F] ≠ ∅. Furthermore, all such equilibria are either preemptive or sequential.
Proof. Suppose that Player j plays S^j such that S^j ∩ (a, Y_F] ≠ ∅ and that Player i chooses a strategy
S^i ∈ B(E), with S^i ∩ (a, Y_F] = ∅. For each player k, let Y^k be the first point at which Player k will
stop, i.e. Y^k is such that (a, Y^k] ∩ S^k = {Y^k}. Let y ∈ [Y_F, Y^i). Consider the following two cases.
1. Y^j ≤ y.
In this case Player j becomes the leader and, therefore, Player i's expected payoff is F^i(y). Deviating
to \tilde S^i = [Y_F, b) would lead to the expected payoff M^i(y) = F^i(y).
6See also Laraki et al. (2005) for an analysis of ε-equilibrium in deterministic timing games.
7See also Thijssen et al. (2012) for an extension of this mechanism to a stochastic setting.
2. Y^j > y.
In this case Player i can become the leader and get an expected payoff L^i(y) > F^i(y) if y < Y^j_F.
Otherwise, Player i gets the payoff M^i(y) = F^i(y).
So, deviating to [Y_F, b) leads to a (weakly) higher payoff.
To prove that only sequential or preemptive equilibria exist, suppose that there exists an equilibrium
with S^i ∩ (a, Y_F] ≠ ∅, all i = 1, 2, which is not preemptive or sequential. This implies that S^i ∩ (a, Y^i_L] =
∅, all i = 1, 2. Suppose (wlog) that Y^1_L ≤ Y^2_L. Then Player 1 has an incentive to deviate to a stopping
set [Y^1_L, b), which would lead to a sequential equilibrium.
In particular, this lemma shows that there are no collusive equilibria when S^i ∩ (a, Y_F] ≠ ∅, all
i = 1, 2. Such equilibria will be studied in Section 3.3. For notational convenience, I introduce the
following notation for stopping sets that are consistent with sequential and preemptive equilibria:
\[
S^i_L = [Y^i_L, Y^j_F) ∪ [Y^i_F, b), \quad S^i_F = [Y^i_F, b), \quad\text{and}\quad
S^i_{Pℓ} = [Y^ℓ_P, Y^j_F) ∪ [Y^i_F, b).
\]
These strategies allow for non-connected stopping sets. These are needed in cases where Y^1_F ≠ Y^2_F,
because if the current state of the process is such that one player wants to preempt but the other player
is already in the region where she would follow immediately if she were not the first mover, then the
preemptive threat of the former player is not credible. This player would prefer to become the follower.
If W^i does not satisfy the rent-equalization property, equilibrium may fail to exist.
Proposition 1. Let Γ be a preemptive stopping game satisfying Assumptions 1–4. Assume that Γ does
not satisfy the rent-equalization property.
1. If Y^2_P ≥ Y^1_L, then (S^1_L, S^2_F) is the unique sequential equilibrium. Furthermore, there are no
preemptive equilibria.
2. If Y^2_P < Y^1_L, then no sequential or preemptive equilibria exist.
Proof. Suppose (wlog) that Y^k_F ≤ Y^ℓ_F.
1. Consider the following cases.
i. y ≥ Y^ℓ_F.
In this region F^i(y) = M^i(y) = L^i(y), for both players. Therefore, it is optimal for both to stop.
ii. Y^k_F < Y^ℓ_F and y ∈ [Y^k_F, Y^ℓ_F).
If Player ℓ does not stop, then Player k's best response is to stop immediately, since Y^k_L < y. If
Player ℓ stops, then Player k will follow immediately since y > Y^k_F. So, Player ℓ's payoff will be
M^ℓ(y) or F^ℓ(y), respectively. Since M^ℓ(y) < F^ℓ(y), it is a dominant strategy for Player ℓ not to
stop. Therefore, it should hold that S^k ⊃ [Y^k_F, Y^ℓ_F), and S^ℓ ∩ [Y^k_F, Y^ℓ_F) = ∅, which is a property
that the proposed strategies satisfy.
iii. Y^1_L ≤ y < Y^k_F.
Given that Player 1 moves and W^2(y) < F^2(y), Player 2 has no incentive to deviate. Conversely,
given that Player 2 does not move immediately, it is optimal for Player 1 to move. So, it should
hold that S^1 ⊃ [Y^1_L, Y^k_F), and S^2 ∩ [Y^1_L, Y^k_F) = ∅.
iv. y < Y^1_L.
Given that Player 2 does not move before Y^2_F is hit, it is optimal for Player 1 to wait until Y^1_L is
reached. Conversely, since Y^2_L ≥ Y^1_L and Y^2_P ≥ Y^1_L, it holds for all y ≤ Y < Y^1_L that
\[
\nu_Y(Y^1_L)\big[ F^2(Y^1_L) − D^2_{00}(Y^1_L) \big] ≥ F^2(Y) − D^2_{00}(Y) > L^2(Y) − D^2_{00}(Y),
\]
which, in turn, implies that L^2_y(Y) < F^2_y(Y^1_L). So, Player 2 prefers to become the follower at
Y^1_L rather than to preempt and become leader at some y ≤ Y < Y^1_L. Finally, since W^2(Y^1_L) <
F^2(Y^1_L), it holds that
\[
F^2_y(Y^1_L) > D^2_{00}(y) + \nu_y(Y^1_L)\big[ W^2(Y^1_L) − D^2_{00}(Y^1_L) \big].
\]
Hence, Player 2 has no incentive to deviate to any y ≤ Y ≤ Y^1_L. For any Y > Y^1_L, it holds that
V^2_y(\tilde S^2, S^1) = V^2_y(S^2, S^1) = F^2_y(Y^1_L).
To show that there are no preemptive equilibria suppose, on the contrary, that S^1 ∩ (a, Y^1_L) ≠ ∅.
Player 2's best response is to choose S^2 = [Y^2_F, b). But then Player 1 is better off by deviating to
S^1 = [Y^1_L, Y^2_F) ∪ [Y^1_F, b), because L^1(Y^1) < L^1_{Y^1}(Y^1_L), for any Y^1 ∈ S^1 ∩ (a, Y^1_L).
2. Let y ∈ (Y^2_P, Y^1_L). Suppose, by contradiction, that (S^1, S^2) is a preemptive or sequential equilibrium.
If y ∈ S^i, i = 1, 2, then both players stop simultaneously and V^2_y(S^2, S^1) = W^2(y) <
F^2(y), which implies that Player 2 wants to deviate. If y ∈ S^1 and y ∉ S^2, then there exists
\tilde Y^1 ∈ (y, Y^2 ∧ Y^1_L) such that L^1_y(\tilde Y^1) > L^1_y(Y^1) = L^1(y). This holds because L^1_y(·) is increasing
on (y, Y^2 ∧ Y^1_L). So, Player 1 wishes to deviate. A similar reasoning applies to Player 2 if
Y^2 ≤ y < Y^1.
Note that non-existence of equilibrium occurs in the preemption region. For y ≤ Y^1_P ∧ Y^2_P, a Nash
equilibrium always exists because of Assumptions 3 and 4: (Y^1_P, Y^1_P) if Y^1_P ≤ Y^2_P and (Y^2_P, Y^2_P) if
Y^1_P > Y^2_P. However, for y ∈ S_P \ ∂S_P, no equilibrium exists. So, it is the requirement that equilibria
are Markovian that drives non-existence. This is the same problem that is also alluded to in the literature
on deterministic timing games (see, for example, Fudenberg and Tirole, 1985). There one can always
find an open-loop Nash equilibrium, but not a closed-loop (i.e. subgame perfect) equilibrium unless
rent equalization is allowed. Here, the state variable is y, rather than time, and it is the Markovian
requirement that imposes time consistency on strategies. See also Back and Paulson (2009) and Steg
(2012).
If a preemption game satisfies the rent-equalization property, the picture looks very different.
Proposition 2. Let Γ be a preemptive stopping game satisfying Assumptions 1–4 and the rent-equalization
property.
1. Suppose that Y^2_P ≤ Y^1_L, and that Y^1_P ≠ Y^2_P. The following holds:
(a) if Y^1_P < Y^2_P, then the unique preemptive equilibrium is (S^1_{P2}, S^2_{P2});
(b) if Y^2_P < Y^1_P, then the unique preemptive equilibrium is (S^1_{P1}, S^2_{P1}).
Furthermore, no sequential equilibria exist.
2. If Y^2_P > Y^1_L, then all sequential equilibria are of the form (S^1_L, S^2), with S^2 ⊃ [Y^2_F, b) and
S^2 ∩ [Y^2_P, Y^1_F) ≠ ∅. Furthermore, there are no preemptive equilibria.
3. If Y^1_P = Y^2_P ≡ Y_P, then (S^1_P, S^2_P) is the unique preemptive equilibrium. Furthermore, no sequential
equilibria exist.
Proof. First note that in light of Lemma 2 and the proof of Proposition 1 it is obvious that the statements
are true for y ≥ Y^1_F ∧ Y^2_F. Suppose that Y^k_F ≤ Y^ℓ_F.
1. We first show that no sequential equilibria exist. Since Y^1_L ≤ Y^2_L it suffices to argue that (S^1, S^2),
with S^1 ∩ (a, Y^1_L] = {Y^1_L} and S^2 ∩ (a, Y^2_F) = ∅, is not a (sequential) equilibrium. Because of
continuity there exists a \tilde Y^2 < Y^1_L such that L^2(\tilde Y^2) > F^2_{\tilde Y^2}(Y^1_L). So, for y < \tilde Y^2, Player 2 has an
incentive to deviate to a strategy \tilde S^2 with \tilde S^2 ⊃ [\tilde Y^2, Y^1_L).
(a) The preemptive equilibrium is established as follows. Note thatY 2P < Y k
F , sinceSP 6= ∅.
Consider the following cases.
i. y ∈ [Y kF , Y
ℓF ).
In this caseV ky (S
k, Sℓ) = Lk(y), andV ℓy (S
ℓ, Sk) = F ℓ(y). Playerℓ has no incentive to
deviate, since any deviation toSℓ > y would lead to a payoffF ℓ(y).
ii. y ∈ [Y 2P , Y
kF ).
Note thatV iy (S
i, Sj) = W i(y) = F i(y), for i = 1, 2. So, for neither player would a
deviation lead to a higher payoff.
iii. y < Y 2P .
As in case (ii), Player 2 has no incentive to deviate to anyS2 with S2 ∩ (a, Y 2P ] = ∅. Let
Y 2 < Y 2P , and consider a strategyS2 with S2 ⊃ [Y 2, Y 2
P ). It also holds that
L2y(Y
2) ≤ L2y(Y
2P ) = F 2
y (Y2P ),
where the first inequality holds becauseY 2 < Y 2P < Y 2
L andL2y(·) is non-decreasing on
(a, Y 2L ), and the second equality holds by definition. So, Player 2 hasno incentive to deviate.
ConsiderS1 with S1 ⊃ [Y 1, Y 2P ), for someY 1 < Y 2
P . BecauseY 2P < Y 1
L it holds that
L1y(Y
1) < L1y(Y
2P ). SinceY 1
L ≥ Y 2P > Y 1
P , it also holds thatF 1y (Y
2P ) < L1
y(Y2P ). However,
Assumption 4 ensures thatV 1y (S
1, S2) = L1y(Y
2P ). So, Player 1 has no incentive to deviate.
(b) The proof is analogous to that of the previous case.
2. It is dominant for Player 1 to stop whenever $y \geq Y_L^1$. Given that $Y_P^2 > Y_L^1$, it is weakly dominant for Player 2 not to preempt Player 1, since for any $y \leq Y^2 \leq Y_L^1$, it holds that
$$L_y^2(Y^2) \leq L_y^2(Y_L^1) \leq F_y^2(Y_L^1).$$
So, any MPE $(S^1, S^2)$ must have $S^1 \cap (a, Y_L^1] = \{Y_L^1\}$ and $S^2 \cap (a, Y_L^1] = \emptyset$. Therefore, there are no preemptive equilibria. In addition, for any $y \in [Y_L^1, Y_P^2)$ it is optimal for Player 2 to become follower rather than leader since $W^2(y) = L^2(y) < F^2(y)$. So, $S^2 \cap (a, Y_P^2) = \emptyset$. For all $y \geq Y_P^2$, however, Player 2 is indifferent between stopping immediately and not stopping immediately because $W^2(y) = F^2(y)$ due to rent equalization. So, any $S^2$ with $S^2 \cap [Y_P^2, Y_F^1) \neq \emptyset$ leads to a sequential equilibrium.

The non-existence of preemptive equilibria follows from Proposition 1.1.

3. Because of rent equalization, at $Y_P$ each player is indifferent between stopping immediately and waiting. If one player chooses $S^j$ with $S^j \supset [Y_P, Y_F^k)$, then the other player has no incentive to choose $S^i$ with $S^i \cap (a, Y_P) \neq \emptyset$, because for all $Y^i < Y_P$, it holds that
$$L_y^i(Y^i) < F_y^i(Y^i) \leq F_y^i(Y_P).$$
So, $(S^1, S^2)$ induces a Nash equilibrium.

Suppose, however, that there is another preemptive equilibrium $(\tilde S^1, \tilde S^2)$, with, say, $Y_P < Y^1 \leq Y^2$, where $Y^i = \inf \tilde S^i$. Let $y \leq Y_P$. Because of continuity of $L_y^2(\cdot)$ and $F_y^2(\cdot)$, Player 2 can find $\delta > 0$ such that
$$\nu_{Y^1 - \delta}(Y^1)\big[F^2(Y^1) - D_{00}^2(Y^1)\big] < L^2(Y^1 - \delta) - D_{00}^2(Y^1 - \delta).$$
This, in turn, implies that $L_y^2(Y^1 - \delta) > F_y^2(Y^1)$, and, thus, that Player 2 should deviate to a strategy $S^2$ with $Y^1 - \delta \in S^2$.
3.3 Collusive Equilibria

Apart from preemptive and sequential equilibria, there may exist equilibria in which both players stop simultaneously. As is made clear in Section 3.2, such equilibria can never be preemptive. In fact, they only exist if the value of becoming the leader at any point in the preemption region is exceeded by the expected payoff of simultaneous stopping at some later date. Define, for $Y^* > Y_F^i$,
$$M_y^i(Y^*) = D_{00}^i(y) + \nu_y(Y^*)\big[M^i(Y^*) - D_{00}^i(Y^*)\big]. \qquad (7)$$
It follows from Proposition 4 that (7) has a unique maximizer $Y_M^i$.

A first observation is the following.

Lemma 3. Let $Y_F = Y_F^1 \vee Y_F^2$. Any equilibrium $(S^1, S^2)$ with $S^i \cap (a, Y_F] = \emptyset$, $i = 1, 2$, is collusive.

Proof. Follows immediately from the fact that $F^i(y) = M^i(y) = L^i(y)$ for all $y \geq Y_F$.

A sufficient condition for the existence of collusive equilibria is given in the following proposition.

Proposition 3. Let $\Gamma$ be a preemptive stopping game satisfying Assumptions 1–4. If $Y^* > Y_F^1 \vee Y_F^2$ is such that
$$L^i(y) \leq M_y^i(Y^*),$$
for all $y \in [Y_P^1 \wedge Y_P^2, Y_F^1 \vee Y_F^2]$ and $i \in \{1, 2\}$, then $([Y^*, b), [Y^*, b))$ is a (collusive) MPE.
Proof. Consider the following cases.

1. $y > Y^*$. Given that Player $j$ stops immediately, the best response of Player $i$ is to stop immediately as well, since $Y^* > Y_F^i$.

2. $y \leq Y^*$. Suppose that Player $i$ deviates to $\tilde S^i$, with $\tilde S^i \cap (a, Y^*) \neq \emptyset$. Then Player $i$ will stop at the first hitting time of some $Y^i < Y^*$. If $Y^i \geq Y_F^j$, Player $j$ will then stop immediately as well, so that $V_y^i(\tilde S^i, S^j) = V_y^i(S^i, S^j)$. If $Y^i < Y_F^j$, then Player $i$ becomes the leader, but since $L^i(Y^i) \leq M_{Y^i}^i(Y^*)$ she has no incentive to make this deviation.
4 Examples with Spectrally Negative Lévy Processes

In this section some examples are given that illustrate the applicability of the results derived so far. Attention is mainly focussed on preemptive and sequential equilibria in games that satisfy the rent-equalization property. A Lévy process is an adapted process with independent and stationary increments.8 Each Lévy process has a càdlàg version, which is the one we will work with. For Borel sets $U$ with $0 \notin U$, the Poisson random measure of $(Y_t)_{t \geq 0}$ is given by $N(t, U) := \sum_{0 < s \leq t} 1_U(\Delta Y_s)$. So, $N(t, U)$ is the number of jumps with a jump size in $U$. The corresponding compensated Poisson random measure is denoted by $\bar N(t, U)$, i.e.
$$\bar N(t, U) = N(t, U) - m(U)t, \quad\text{where}\quad m(U) = E_y[N(1, U)]$$
is the Lévy measure of $(Y_t)_{t \geq 0}$. In differential form a time-homogeneous Lévy process can be written as
$$dY_t = \mu(Y_t)dt + \sigma(Y_t)dB_t + \int_{\mathbb R} \gamma(Y_{t-}, z)\bar N(dt, dz), \qquad (8)$$
where $(B_t)_{t \geq 0}$ is a standard Brownian motion. Assume that $(Y_t)_{t \geq 0}$ takes values in $E = (a, b)$ and that
$$a - y < \gamma(y, z) \leq 0, \quad\text{for all } (y, z) \in E \times \mathbb R. \qquad (9)$$
This assumption ensures that $(Y_t)_{t \geq 0}$ has no upward jumps and is, hence, spectrally negative. Such processes are useful to model situations where "success comes on foot and leaves on horseback".

The generator of the process $(Y_t)_{t \geq 0}$, defined on $C^2$, is the partial integro-differential operator
$$\mathcal L_Y g(y) = \frac{1}{2}\sigma^2(y)g''(y) + \mu(y)g'(y) + \lambda\int_{\mathbb R}\big[g(y + \gamma(y, z)) - g(y) - g'(y)\gamma(y, z)\big]m(dz), \qquad (10)$$

8 See, for example, Øksendal and Sulem (2007).
where $\lambda$ is the intensity of the Poisson process that governs the occurrence of the jumps of $(Y_t)_{t \geq 0}$. Suppose that $\mathcal L_Y g = rg$ has an increasing $C^2$ solution $\varphi(\cdot)$, such that $\varphi(a) = 0$. For $Y^* \geq y$, recall that $\nu_y(Y^*) = \varphi(y)/\varphi(Y^*)$. In the case of spectrally negative Lévy processes, Dynkin's formula can be used to show that
$$\nu_y(Y^*) = E_y\big[e^{-r\tau(Y^*)}\big],$$
so that $\nu_y(Y^*)$ can be interpreted as the expected stochastic discount factor of the first hitting time of $Y^*$.
All diffusions, i.e. Lévy processes without jumps, are spectrally negative. For several well-known classes of spectrally negative Lévy processes the stochastic discount factor $\nu_y(\cdot)$ can be computed explicitly. The most often used example in economics and finance is the geometric Brownian motion (GBM), which takes $\mu(y) = \mu y$, $\sigma(y) = \sigma y$, $\lambda = 0$ in (8), with $r > \mu$. It then holds that
$$\nu_y(Y^*) = \left(\frac{y}{Y^*}\right)^{\beta_1}, \quad y \leq Y^*,$$
where $\beta_1 > 1$ is the positive root of the quadratic equation
$$\frac{1}{2}\sigma^2\beta(\beta - 1) + \mu\beta - r = 0.$$
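As a quick numerical sketch (the parameter values below are illustrative, not taken from the paper), $\beta_1$ follows directly from the quadratic formula:

```python
from math import sqrt

def beta1_gbm(r, mu, sigma):
    """Positive root of (1/2)*sigma^2*beta*(beta-1) + mu*beta - r = 0."""
    a = 0.5 * sigma**2
    b = mu - 0.5 * sigma**2
    return (-b + sqrt(b**2 + 4 * a * r)) / (2 * a)

# Illustrative values; beta1 > 1 holds exactly when r > mu.
beta1 = beta1_gbm(r=0.1, mu=0.04, sigma=0.25)
```

The expected discount factor then follows as $\nu_y(Y^*) = (y/Y^*)^{\beta_1}$.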
If one adds Beta distributed negative jumps to a GBM, i.e. $\lambda > 0$, $\gamma(y, z) = -yz$, and
$$m'(z) = \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)}z^{a-1}(1 - z)^{b-1}, \quad a, b > 0,$$
where $\Gamma(\cdot)$ is the Gamma function, it follows that (see Alvarez and Rakkolainen, 2010)
$$\nu_y(Y^*) = \left(\frac{y}{Y^*}\right)^{\beta_1}, \quad y \leq Y^*,$$
where $\beta_1 > 0$ is the positive root of the equation
$$Q(\beta) \equiv \frac{1}{2}\sigma^2\beta(\beta - 1) + \left(\mu + \frac{\lambda a}{a + b}\right)\beta - (r + \lambda) + \lambda\frac{\Gamma(a + b)\Gamma(b + \beta)}{\Gamma(b)\Gamma(a + b + \beta)} = 0.$$
In order for the optimal stopping problems to have a solution it should hold that $y^{\beta_1}$ is a convex function, which is satisfied only if $\beta_1 > 1$. This can easily be seen to be equivalent to $Q(1) < 0$, which in turn is equivalent to the parameter restriction $r > \mu$.
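The root $\beta_1$ of $Q$ has no closed form, but it is easy to find numerically. A minimal sketch (illustrative parameter values; the Gamma-function ratio is evaluated via log-gamma for numerical stability):

```python
from math import exp, lgamma
from scipy.optimize import brentq

def Q(beta, r, mu, sigma, lam, a, b):
    """Q(beta) for a GBM with Beta(a, b) distributed negative jumps."""
    gamma_ratio = exp(lgamma(a + b) + lgamma(b + beta)
                      - lgamma(b) - lgamma(a + b + beta))
    return (0.5 * sigma**2 * beta * (beta - 1)
            + (mu + lam * a / (a + b)) * beta
            - (r + lam) + lam * gamma_ratio)

params = dict(r=0.1, mu=0.04, sigma=0.25, lam=0.2, a=1.5, b=2.0)
# Using Gamma(b+1) = b*Gamma(b), one checks Q(1) = mu - r, so the
# root bracket [1, 20] is valid whenever r > mu.
beta1 = brentq(lambda beta: Q(beta, **params), 1.0, 20.0)
```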
The results in this paper also apply to processes that exhibit mean-reversion. Consider, for example, the diffusion
$$dY = \eta(\bar Y - Y)dt + \sigma dB,$$
on $\mathbb R_+$, where $\bar Y$ is the long-run value of $Y$ and $\eta$ determines the speed of mean-reversion. The generator of this process is
$$\mathcal L_Y g = \frac{1}{2}\sigma^2 g'' + \eta(\bar Y - Y)g'.$$
So,
$$\mathcal L_Y\varphi - r\varphi = 0 \iff \frac{1}{2}\frac{\sigma^2}{\eta}\varphi''(y) - (y - \bar Y)\varphi'(y) - \frac{r}{\eta}\varphi(y) = 0. \qquad (11)$$
Introducing the change of variable
$$z = -\sqrt{\frac{2\eta}{\sigma^2}}\,(y - \bar Y),$$
and postulating that $\varphi(y) = \exp(z^2/4)h(z)$, the ODE (11) can be written as
$$\mathcal L_Y\varphi - r\varphi = 0 \iff h''(z) + \left[\frac{1}{2} - \frac{r}{\eta} - \frac{1}{4}z^2\right]h(z) = 0.$$
This is the Weber differential equation, which has as its general solution the parabolic cylinder function
$$D_{-r/\eta}(z) = 2^{-r/\eta - 1}e^{-z^2/4}H\!\left(\frac{1}{2}z^2;\ \frac{1 + r/\eta}{2},\ \frac{3}{2}\right),$$
where
$$H(x; a, b) = \sum_{n=0}^{\infty}\frac{\Gamma(a + n)/\Gamma(a)}{\Gamma(b + n)/\Gamma(b)}\frac{x^n}{n!}$$
is the generalized hypergeometric function. This implies that
$$\nu_y(Y^*) = \frac{H\big(\frac{\eta}{\sigma^2}(y - \bar Y)^2;\ \frac{1 + r/\eta}{2},\ \frac{3}{2}\big)}{H\big(\frac{\eta}{\sigma^2}(Y^* - \bar Y)^2;\ \frac{1 + r/\eta}{2},\ \frac{3}{2}\big)}, \quad y \leq Y^*.
$$
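Since the series for $H(x; a, b)$ is exactly Kummer's confluent hypergeometric series ${}_1F_1(a; b; x)$, the discount factor can be evaluated with `scipy.special.hyp1f1`. A sketch for $\bar Y \leq y \leq Y^*$ (the numerical values are illustrative only):

```python
from scipy.special import hyp1f1  # H(x; a, b) coincides with Kummer's 1F1(a; b; x)

def nu_mean_reversion(y, Y_star, r, eta, sigma, Y_bar):
    """Expected discount factor nu_y(Y*) for the mean-reverting diffusion."""
    a, b = (1 + r / eta) / 2, 1.5
    x = lambda v: eta * (v - Y_bar) ** 2 / sigma**2
    return hyp1f1(a, b, x(y)) / hyp1f1(a, b, x(Y_star))

# Y_bar = 15 and eta = .07 as in Figure 2(b); y, Y_star, r, sigma chosen ad hoc.
nu = nu_mean_reversion(y=16.0, Y_star=20.0, r=0.1, eta=0.07, sigma=0.25, Y_bar=15.0)
```

For $\bar Y \leq y \leq Y^*$ the value lies in $(0, 1)$ and increases as $y$ approaches $Y^*$, as a discount factor should.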
These results can be used to find the optimal thresholds $Y_F^i$ and $Y_L^i$. For $k = 0, 1$, these thresholds are obtained as the solution to the optimization problem
$$\max_{Y^*}\ \nu_y(Y^*)\big[D_{1k}^i(Y^*) - D_{0k}^i(Y^*) - I^i\big],$$
which leads to the first-order condition (assuming differentiability and concavity of $D_{k\ell}^i(\cdot)$):
$$\frac{\partial\nu_y(Y^*)}{\partial Y^*}\big[D_{1k}^i(Y^*) - D_{0k}^i(Y^*) - I^i\big] + \nu_y(Y^*)\frac{\partial}{\partial Y^*}\big[D_{1k}^i(Y^*) - D_{0k}^i(Y^*)\big] = 0. \qquad (12)$$
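For the GBM case, where $\nu_y(Y^*) = (y/Y^*)^{\beta_1}$ and $D_{1k}^i - D_{0k}^i = \Delta R\, Y^*/(r - \mu)$, condition (12) can be checked numerically against the familiar closed form $Y^* = \frac{\beta_1}{\beta_1 - 1}\frac{(r-\mu)I}{\Delta R}$. A sketch with illustrative parameter values:

```python
from math import sqrt
from scipy.optimize import brentq

r, mu, sigma, I, dR = 0.1, 0.04, 0.25, 100.0, 0.08   # illustrative values
a_ = 0.5 * sigma**2
b_ = mu - a_
beta1 = (-b_ + sqrt(b_**2 + 4 * a_ * r)) / (2 * a_)  # positive root, > 1 since r > mu

def foc(Y):
    """FOC (12) divided by nu_y(Y) > 0, with D_1k - D_0k = dR*Y/(r - mu)."""
    return -beta1 / Y * (dR * Y / (r - mu) - I) + dR / (r - mu)

Y_numeric = brentq(foc, 1.0, 1e6)
Y_closed = beta1 / (beta1 - 1) * (r - mu) / dR * I
```

The numerical root and the closed form agree to solver precision.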
For GBM with beta distributed negative jumps, and for mean-reversion, we have
$$\frac{\partial\nu_y(Y^*)}{\partial Y^*} = -\frac{\beta_1}{Y^*}\nu_y(Y^*), \quad\text{and}\quad \frac{\partial\nu_y(Y^*)}{\partial Y^*} = -2^{-r/\eta - 1}\,\frac{1 + r/\eta}{3}\,\frac{\sigma^2}{2\eta}\,H\!\left(\frac{\eta}{\sigma^2}(y - \bar Y)^2;\ \frac{3 + r/\eta}{2},\ \frac{5}{2}\right),$$
respectively.
For these stochastic processes the preemption region is plotted in Figure 2 as a function of volatility. The present values are taken to be similar to those in Example 1, with $c_1 = c_0 = 0$. For GBM it has already been established that
$$D_{k\ell}^i(y) = \frac{R_{k\ell}y}{r - \mu}.$$
For GBM with Beta distributed negative jumps and mean-reversion it is shown in Appendix D that the equivalent functions are given by
$$D_{k\ell}^i(y) = \frac{R_{k\ell}y}{r - \mu}, \quad\text{and}\quad D_{k\ell}^i(y) = R_{k\ell}\left(\frac{\bar Y}{r} + \frac{y - \bar Y}{r + \eta}\right),$$
[Figure 2 appears here: two panels plotting the preemption region against volatility.]

Figure 2: Preemption region as function of volatility $\sigma$ for different stochastic processes. General parameter values are $I = 100$, $r = .1$, $R_{10} = .1I$, $R_{11} = .03I$, $R_{00} = .02I$, and $R_{01} = .01I$. (a) GBM (solid lines) and GBM with negative Beta jumps (dashed lines) with $\mu = .04$, $\lambda = .2$, $a = 1.5$, $b = 2$; (b) exponential mean-reversion with $\bar Y = 15$ and $\eta = .07$.
respectively. It is assumed that $R_{k\ell}$ are constants such that $R_{10} > R_{11} \geq R_{00} \geq R_{01}$ and $R_{10} - R_{00} > R_{11} - R_{01}$. As can be seen, the preemption region tends to get wider in the case of higher volatility for the processes with an exponential growth trend. This happens because a higher volatility does not influence the present values whereas it increases option values. These option values have a bigger impact on the follower threshold than on the preemption threshold. After all, preemptive pressure erodes the option value for the leader. This implies that in games with higher levels of uncertainty, it is more likely that a preemptive situation occurs.

Another useful comparison to make is to look at the regions in which sequential or preemptive equilibria occur. From Proposition 2 it is clear that these equilibria cannot occur simultaneously. The ordering of the thresholds $Y_P^1$, $Y_P^2$, and $Y_L^1$ determines what type of equilibrium exists. These thresholds, in turn, are determined by the underlying present value functions $D_{k\ell}^i(\cdot)$. Figure 3 uses the same setting and numerical parameter values as the one used to generate Figure 2. The volatility is now kept fixed at $\sigma = .25$, and, instead, $R_{10}^1$ and $R_{01}^2$ are varied. The equilibrium regions are plotted in Figure 3. This figure assumes that $y < Y_P^1 \wedge Y_P^2$. As can be seen, the qualitative picture is the same for the three different stochastic processes. Note that $Y_L^1 < Y_L^2$ for all values of $R_{10}^1$ and $R_{01}^2$. So, in a sequential equilibrium Player 1 would stop first at the first hitting time of $Y_L^1$ and Player 2 would follow as soon as $Y_F^2$ is hit. For small values of $R_{10}^1$ and $R_{01}^2$, the first-mover advantage is biggest for Player 2, which implies that $Y_P^2 < Y_P^1$ and, thus, that Player 2 preempts Player 1 at the first hitting time of $Y_P^1$. For larger values of $R_{10}^1$ and $R_{01}^2$ the case is reversed ($Y_P^1 < Y_P^2 < Y_L^1$) and Player 1 preempts Player 2 at the first hitting time of $Y_P^2$. In both cases the equilibria are preemptive, even though there is a clear prediction which player is the first to stop. In between these two regions there is a set where $R_{10}^1$ and $R_{01}^2$ are such that $Y_P^1 = Y_P^2 \equiv Y_P$. Assumption 3 then guarantees that each player stops with probability 1/2 and that $p_3(Y_P) = 0$, i.e. that a "coordination failure" does not occur. In the space of parameters, the set where this happens has Lebesgue measure zero. Finally, for large values of $R_{10}^1$ the first-mover advantage of Player 1 is so large that it dominates the preemptive pressure that Player 2 provides ($Y_L^1 < Y_P^2$) and, thus, Player 1 acts as a leader in these instances and stops at the first hitting time of $Y_L^1$. This shows that in a competitive situation one should not necessarily expect to see preemption. Interestingly, for the mean reverting process there are no values of $R_{10}^1$ that lead to a sequential equilibrium.
[Figure 3 appears here: three panels (GBM; GBM with negative jumps; mean reversion) plotting equilibrium regions in the $(R_{10}^1, R_{01}^2)$ plane, with regions labelled "Preemption, P1 stops first", "Preemption, P2 stops first", "Preemption, coordination", and "P1 stops optimally".]

Figure 3: Equilibrium regions for different stochastic processes.
Note that the nature of the stochastic process only determines the relative size of these equilibrium regions. In particular, the case of mean reversion makes it most likely that a preemptive equilibrium prevails, whereas the case of a geometric Brownian motion makes it most likely that a sequential equilibrium prevails. This is intuitively clear since, for the same values of $R_{k\ell}^i$, the geometric Brownian motion gives a higher present value than the mean reverting process due to its exponential growth rate. If $y \in S_P = (Y_P^1 \vee Y_P^2, Y_F^1 \wedge Y_F^2)$, then the tie-breaking rule determines which player moves first. The probabilities with which each player stops are such that $p_3(y) > 0$, which implies that a "coordination failure" can occur that is consistent with equilibrium. For each player, namely, the rents extracted upon becoming the leader outweigh the risk of simultaneous stopping. Finally, if $y > Y_F^1 \vee Y_F^2$, then both players stop immediately.
5 Positive Jumps

If $(Y_t)_{t \geq 0}$ is spectrally negative and $Y_P^1 < Y_P^2$, Player 1 can wait until $Y_P^2$ is reached and still guarantee that she becomes the leader. If, however, $(Y_t)_{t \geq 0}$ has positive jumps (for example, because of technological shocks) it is possible that the process jumps over $Y_P^2$, which means that Player 1 expects less than the leader value if she chooses $Y_P^2$ as her threshold. Since from Proposition 2 it follows that choosing $Y_P^2$ is, in fact, an equilibrium strategy, it follows that the only effect that positive jumps have is on the realization of timing decisions and, thus, on players' expected payoffs. As we will see below this effect can be substantial.

Take, again, a game where $Y_L^1 < Y_P^2$. In such a model, for $y < Y_L^1$, the equilibrium strategies prescribe that Player 1 stops at the first hitting time of $Y_L^1$ and Player 2 stops at the first hitting time of $Y_P^2$. If $(Y_t)_{t \geq 0}$ is spectrally negative, this means that Player 1 knows ex ante that she will become the leader. Her expected payoff is, therefore, the leader value multiplied by the expected discount factor $\nu_y(Y_L^1)$. If, however, $(Y_t)_{t \geq 0}$ has positive jumps, then it is possible that the first hitting time of $Y_L^1$ is the same as the first hitting time of $Y_P^2$. If at this hitting time the process takes a value in the preemption region, then Player 1's expected payoff (along this sample path) is her follower value, which is lower than the leader value. In fact, in order to keep Player 2 indifferent between stopping and not stopping it is possible that she stops first with a smaller probability than the probability with which Player 2 stops first.9 The following example illustrates this point.
Example 2. Suppose that $(q_t)_{t \geq 0}$ is a Poisson process with intensity $\lambda > 0$, independent of a standard Brownian motion $(B_t)_{t \geq 0}$. Suppose that the process $(Y_t)_{t \geq 0}$ follows the stochastic differential equation (SDE)
$$dY_t = \mu Y_t dt + \sigma Y_t dB_t + \alpha Y_{t-}dq_t,$$
which is a geometric Brownian motion augmented with jumps of relative size $\alpha > 0$, occurring at Poisson random times. It is easily obtained that the solution to this SDE, starting at $Y_0 = y$, is given by
$$Y_t = y\,e^{(\mu - .5\sigma^2)t + \sigma B_t}(1 + \alpha)^{q_t}.$$
Suppose that the present value functions are given by
$$D_{k\ell}^i(y) = E_y\left[\int_0^\infty e^{-rt}R_{k\ell}Y_t dt\right],$$
where $R_{k\ell}^i$ are constants such that $R_{10} > R_{11} > R_{00} \geq R_{01}$ and $R_{10} - R_{00} > R_{11} - R_{01}$. In that case standard computations yield that
$$D_{k\ell}^i(y) = \frac{R_{k\ell}y}{r - (\mu + \alpha\lambda)},$$
provided that $r > \mu + \alpha\lambda$. These present value functions satisfy Assumption 1.

For the stochastic process $(Y_t)_{t \geq 0}$ the generator (on $C^2$) is
$$\mathcal L_Y g = (\mu + \alpha\lambda)Y\frac{\partial g}{\partial Y} + \frac{1}{2}\sigma^2 Y^2\frac{\partial^2 g}{\partial Y^2}.$$

9 This happens if $L^1(Y_{\tau(Y_P^2)}) - F^1(Y_{\tau(Y_P^2)}) > L^2(Y_{\tau(Y_P^2)}) - F^2(Y_{\tau(Y_P^2)})$.
$r = .1$            $\mu = .03$          $\sigma = .1$
$\lambda = 2$       $\alpha = .03$       $y = .45$
$D_{10}^1 = 12.5$   $D_{11}^1 = 5$       $D_{00}^1 = 4$
$D_{01}^1 = 2.5$    $D_{10}^2 = 7.5$     $D_{11}^2 = 5.5$
$D_{00}^2 = 4$      $D_{01}^2 = 2.5$     $I = 75$

Table 1: Parameter values.

Trigger       $Y_P$      $Y_L$      $Y_F$
Player 1      0.3528     0.9311     3.1657
Player 2      0.9316     2.2612     2.6381

Table 2: Triggers.
As usual we look for convex functions $\varphi$ such that $\mathcal L_Y\varphi = r\varphi$, which gives the general solution $\varphi(y) = Ay^{\beta_1} + By^{\beta_2}$, where $\beta_1 > 0$ and $\beta_2 < 0$ are the roots of the quadratic equation
$$\frac{1}{2}\sigma^2\beta(\beta - 1) + (\mu + \alpha\lambda)\beta - r = 0.$$
To find the optimal triggers we need that $\varphi(0) = 0$, so that $B = 0$, and that $\varphi$ is convex, which holds only if $\beta_1 > 1$, i.e. if $r > \mu + \lambda\alpha$. In other words, the positive jumps cannot be too large. The solution to the follower problem, provided that $r > \mu + \lambda\alpha$, follows immediately:
$$F^i(y) = \begin{cases}\dfrac{R_{01}y}{r - (\mu + \lambda\alpha)} + \left(\dfrac{y}{Y_F^i}\right)^{\beta_1}\left[\dfrac{R_{11} - R_{01}}{r - (\mu + \lambda\alpha)}Y_F^i - I^i\right] & \text{if } y < Y_F^i\\[2ex] \dfrac{R_{11}y}{r - (\mu + \lambda\alpha)} - I^i & \text{if } y \geq Y_F^i,\end{cases}$$
where
$$Y_F^i = \frac{\beta_1}{\beta_1 - 1}\frac{r - (\mu + \alpha\lambda)}{D_{11}^i - D_{01}^i}I^i$$
is the optimal threshold. Similarly, it can be found that the leader threshold equals
$$Y_L^i = \frac{\beta_1}{\beta_1 - 1}\frac{r - (\mu + \alpha\lambda)}{D_{10}^i - D_{00}^i}I^i.$$
In addition, there is a unique preemption threshold $Y_P^i < Y_L^i$.
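The follower and leader triggers in Table 2 can be reproduced directly from the Table 1 inputs; a minimal check ($Y_P$ requires solving $L = F$ numerically and is not computed here):

```python
from math import sqrt

# Parameter values from Table 1
r, mu, sigma, lam, alpha, I = 0.1, 0.03, 0.1, 2.0, 0.03, 75.0
D = {1: {'10': 12.5, '11': 5.0, '00': 4.0, '01': 2.5},
     2: {'10': 7.5, '11': 5.5, '00': 4.0, '01': 2.5}}

# beta1: positive root of (1/2)sigma^2*beta*(beta-1) + (mu+alpha*lam)*beta - r = 0
a_ = 0.5 * sigma**2
b_ = mu + alpha * lam - a_
beta1 = (-b_ + sqrt(b_**2 + 4 * a_ * r)) / (2 * a_)

mark_up = beta1 / (beta1 - 1) * (r - (mu + alpha * lam)) * I
Y_F = {i: mark_up / (D[i]['11'] - D[i]['01']) for i in (1, 2)}
Y_L = {i: mark_up / (D[i]['10'] - D[i]['00']) for i in (1, 2)}
# Y_F ≈ {1: 3.1657, 2: 2.6381} and Y_L ≈ {1: 0.9311, 2: 2.2612}, matching Table 2.
```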
The main difference with the diffusion case ($\lambda = 0$) is that it is now possible that a coordination failure takes place, even if $y < Y_P^1 \wedge Y_P^2$. To show this numerically, consider the model with parameter values as given in Table 1. This leads to triggers as reported in Table 2. From Table 2 and Proposition 2 it is clear that, if rent equalization is possible, a non-collusive MPE is given by the pair
$$(S^1, S^2) = \big([Y_L^1, Y_F^2) \cup [Y_F^1, \infty),\ [Y_P^2, Y_F^1) \cup [Y_F^2, \infty)\big).$$
So, if $(Y_t)_{t \geq 0}$ were a diffusion (i.e. if $\lambda = 0$) or spectrally negative (i.e. if $\alpha \in (-1, 0)$) then for the given starting value $y = .45$ it would be clear that Player 1 would become the first mover as soon as .9311 is hit, and Player 2 would follow as soon as 2.6381 is hit. Due to the presence of positive jumps, however, it is possible that $Y_{\tau(Y_L^1)} > Y_P^2$ and, thus, that both players try to preempt each other. For the parametrization in Table 1, the probability of a preemptive outcome can be established through simulations. After generating 1,000 sample paths and conditioning on the 703 sample paths for which $\tau(S^1) \wedge \tau(S^2) \leq 10$, 21.91% of these end up with Player 1 being the first mover at $Y_L^1$ and 78.09% result in preemption. Along the sample paths resulting in preemption, the probability that Player 1 (Player 2) becomes the first mover is .0029 (.9923). This implies that the probability of a coordination failure is .0048. Combining these results implies that Player 1 becomes the first mover with probability .2191 + (.7809)(.0029) = .2214.
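The overshoot mechanism behind these simulations can be sketched with a rough Monte Carlo exercise using the Table 1 parameters and Table 2 triggers. Note that this grid-based sketch monitors the path only at discrete dates, so the diffusion part can also "overshoot" numerically; the resulting fraction is therefore only indicative and not a replication of the paper's 78.09%:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, lam, alpha, y0 = 0.03, 0.1, 2.0, 0.03, 0.45
Y_L1, Y_P2 = 0.9311, 0.9311 + 0.0005      # triggers from Table 2 (Y_P2 = 0.9316)
T, dt = 10.0, 0.002
n_paths, n_steps = 500, int(T / dt)

# Log-increments of the jump-GBM, simulated on a time grid
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
dq = rng.poisson(lam * dt, size=(n_paths, n_steps))
log_incr = (mu - 0.5 * sigma**2) * dt + sigma * dW + np.log(1 + alpha) * dq
Y = y0 * np.exp(np.cumsum(log_incr, axis=1))

hit = Y >= Y_L1
hit_any = hit.any(axis=1)
first_idx = hit.argmax(axis=1)                 # first grid date at or above Y_L1
Y_at_hit = Y[np.arange(n_paths), first_idx]
# Paths whose first crossing lands beyond Y_P2, i.e. inside the preemption region
preempted = hit_any & (Y_at_hit > Y_P2)
frac_preempt = preempted.sum() / hit_any.sum()
```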
This illustrates that the combination of preemption and positive jumps lowers the value of the option to Player 1: it reduces the probability of Player 1 becoming the leader from 100% to 22.14%. Player 1 suffers from a double whammy: (i) due to the presence of positive jumps the likelihood of preemption occurring is higher (especially because in this example $Y_L^1$ and $Y_P^2$ are very close) and (ii) rent equalization forces $p_1(Y_{\tau(Y_L^1)})$ ($p_2(Y_{\tau(Y_L^1)})$) down (up) because $L^1(Y_{\tau(Y_L^1)}) - F^1(Y_{\tau(Y_L^1)}) > L^2(Y_{\tau(Y_L^1)}) - F^2(Y_{\tau(Y_L^1)})$. The fact that Player 1 can be preempted destroys some value for this player. In a sequential equilibrium her value is
$$V_{seq}(y) = D_{00}^1(y) + E_y\Big[e^{-r\tau(Y_L^1)}\big(L^1(Y_{\tau(Y_L^1)}) - D_{00}^1(Y_{\tau(Y_L^1)})\big)\Big] = 220.42,$$
whereas the value of the preemption equilibrium to Player 1 is
$$V_{preemp}(y) = D_{00}^1(y) + E_y\Big[e^{-r\tau(Y_L^1)}\Big(1_{[Y_{\tau(Y_L^1)} \leq Y_P^2]}L^1(Y_{\tau(Y_L^1)}) + 1_{[Y_{\tau(Y_L^1)} > Y_P^2]}F^1(Y_{\tau(Y_L^1)}) - D_{00}^1(Y_{\tau(Y_L^1)})\Big)\Big] = 200.50.$$
So, the presence of positive jumps reduces Player 1's value by approximately 9%. ⊳
6 Concluding Remarks

This paper studies equilibria in timing games in continuous time with a first-mover advantage where payoffs are driven by a stochastic process. There are three types of equilibria: preemptive, sequential, and collusive. Equilibrium existence has been studied under fairly general assumptions on the payoff functions: continuity and monotonicity. For a large class of strongly Markovian semimartingales equilibrium existence can be guaranteed, but only if there is a mechanism to ensure rent-equalization in the preemption region. If rent-equalization cannot be guaranteed, then neither can equilibrium existence.

The main value of the paper lies in the results it provides for applied work in, for example, game-theoretic real options models (see, for example, Chevalier-Roignant and Trigeorgis, 2011). For a large class of games it has been established that optimal policies and equilibria can be formulated in terms of triggers (Proposition 4). If rent-equalization is assumed possible, then a clear-cut equilibrium result exists (Proposition 2). Under rent-equalization, preemption destroys value for the leader, because her expected payoff is lower than in the case where she can guarantee being the first mover. Since the expected payoff of the second mover is unchanged this means that preemption unequivocally destroys value. Finally, if the stochastic process allows for positive jumps, then this value destruction can be even bigger.
This last point shows the main difference between games with stochastic payoffs and deterministic timing games. When one compares deterministic timing games with games where payoffs are stochastic but spectrally negative, then the main difference is that in the former it is known precisely when preemption will occur. In the latter game it is known which value of the underlying process will trigger preemption, but it is not known ex ante when that will be. In computing players' payoffs this means that a known discount factor (in the deterministic case) has to be replaced by an expected discount factor (in the stochastic case). If, however, positive jumps are possible, then one can also not predict the realized equilibrium scenario precisely. For example, it is possible that a preemptive equilibrium leads to a realization where the player with the higher preemption trigger actually stops first (if the process jumps over both this preemption trigger and the other player's leader trigger). It is also possible that players stop simultaneously, if the process jumps over the interval $[Y_L^1, Y_F^1 \vee Y_F^2)$. This additional uncertainty destroys even more value.
Appendix

A Optimal Stopping and Trigger Policies

Proposition 4. Let $(Y_t)_{t \geq 0}$ be a strong Markov process on a state space $E$ with sample paths that are right-continuous and left-continuous over stopping times. Assume that $G: E \to \mathbb R$ is a non-decreasing function and that there exists $Y \in E$ such that $G(Y) < 0$. If the problem
$$G^*(y) = \sup_\tau E_y\big[e^{-r\tau}G(Y_\tau)\big]$$
has a solution, then the optimal stopping time is
$$\tau_C = \inf\{t \geq 0 \mid Y_t \notin C\},$$
for some connected set $C \subset E$, with $C \supset (a, Y)$.

Proof. Note that, in principle, the solution of the optimal stopping problem could depend explicitly on time:
$$G^*(t, y) = \sup_{\tau \geq t} E_{(t,y)}\big[e^{-r\tau}G(Y_\tau)\big].$$
Let
$$C = \big\{(t, y) \in \mathbb R_+ \times E \mid G^*(t, y) > e^{-rt}G(y)\big\},$$
so that
$$(\mathbb R_+ \times E) \setminus C = \big\{(t, y) \in \mathbb R_+ \times E \mid G^*(t, y) = e^{-rt}G(y)\big\}.$$
We first establish that $C$ is time invariant, i.e. that $C + (t_0, 0) = C$. First observe that
$$G^*(s - t_0, y) = \sup_\tau E_{(s - t_0, y)}\big[e^{-r\tau}G(Y_\tau)\big] = \sup_\tau E_y\big[e^{-r(\tau + s - t_0)}G(Y_\tau)\big] = e^{rt_0}\sup_\tau E_y\big[e^{-r(\tau + s)}G(Y_\tau)\big] = e^{rt_0}\sup_\tau E_{(s, y)}\big[e^{-r\tau}G(Y_\tau)\big] = e^{rt_0}G^*(s, y).$$
Time invariance of $C$ then follows directly from:
$$\begin{aligned}
C + (t_0, 0) &= \{(t + t_0, y) \mid (t, y) \in C\} = \{(s, y) \mid (s - t_0, y) \in C\}\\
&= \{(s, y) \mid e^{-r(s - t_0)}G(y) < G^*(s - t_0, y)\}\\
&= \{(s, y) \mid e^{-r(s - t_0)}G(y) < e^{rt_0}G^*(s, y)\}\\
&= \{(s, y) \mid e^{-rs}G(y) < G^*(s, y)\}\\
&= C.
\end{aligned}$$
So, we can write
$$G^*(y) = \sup_\tau E_y\big[e^{-r\tau}G(Y_\tau)\big], \quad\text{and}\quad C = \{y \in E \mid G^*(y) > G(y)\}.$$
Suppose that this problem has a solution. From Peskir and Shiryaev (2006, Theorem 2.4) we know that $G^*$ is the least superharmonic majorant of $G$ on $E$ and that the first exit time from $C$,
$$\tau_C = \inf\{t \geq 0 \mid Y_t \notin C\},$$
is the optimal stopping time.
1. We first show that $(a, Y] \subset C$. Let $y \leq Y$ and let
$$\tau = \inf\{t \geq 0 \mid G(Y_t) \geq 0\}.$$
Note that it is possible that $P_y(\tau = \infty) > 0$. It holds that
$$E_y\big[e^{-r\tau}G(Y_\tau)\big] \geq 0 > G(Y) \geq G(y),$$
where the last inequality uses that $G$ is non-decreasing. So, it cannot be optimal to stop at $y$ and, hence, $(a, Y] \subset C$.

2. We now show that $C$ is connected. Suppose not. Then there exist points
$$y_1 > Y, \quad\text{and}\quad y_2 > y_1,$$
such that
$$y_1 \in E \setminus C, \quad\text{and}\quad y_2 \in C.$$
Let $\tau = \inf\{t \geq 0 \mid Y_t \leq y_1\}$. Since $G^*$ is a superharmonic majorant of $G$ it holds that
$$G(y_2) < G^*(y_2) \leq E_{y_2}\big[e^{-r\tau}G^*(Y_\tau)\big] \leq E_{y_2}\big[e^{-r\tau}\big]G^*(y_1) \leq G^*(y_1) = G(y_1).$$
But this contradicts the fact that $G$ is a non-decreasing function.
B Optimal Stopping and Maximization Problems

Proposition 5. Let $(Y_t)_{t \geq 0}$ be a strong Markov process on a state space $E$ with sample paths that are right-continuous and left-continuous over stopping times. Assume that $G: E \to \mathbb R$ is a continuous function and that Assumption 2 holds. Take $Y^* \in E$. Then
$$E_y\big[e^{-r\tau(Y^*)}G(Y_{\tau(Y^*)})\big] = \begin{cases}\nu_y(Y^*)G(Y^*) & \text{if } y < Y^*\\ G(y) & \text{if } y \geq Y^*,\end{cases}$$
where
$$\nu_y(Y^*) = \frac{\varphi(y)}{\varphi(Y^*)}.$$

Proof. The result is obvious for $y \geq Y^*$. Suppose that $y < Y^*$. Consider the process $X_t = (s + t, Y_t)$. Then it holds that
$$\mathcal L_X e^{-rt}g(y) = e^{-rt}\big(\mathcal L_Y g(y) - rg(y)\big).$$
So, from Dynkin's formula it now follows that
$$E_y\big[e^{-r\tau(Y^*)}\varphi(Y_{\tau(Y^*)})\big] = \varphi(y) + E_y\left[\int_0^{\tau(Y^*)}e^{-rt}\big(\mathcal L_Y\varphi(Y_t) - r\varphi(Y_t)\big)dt\right] = \varphi(y).$$
We can always choose $\varphi$ such that $\varphi(Y^*) = G(Y^*)$, because
$$\mathcal L_Y(A\varphi)(y) - rA\varphi(y) = A\big(\mathcal L_Y\varphi(y) - r\varphi(y)\big) = 0,$$
for any constant $A$. Therefore,
$$E_y\big[e^{-r\tau(Y^*)}G(Y_{\tau(Y^*)})\big] = \varphi(y) = \frac{\varphi(y)}{\varphi(Y^*)}G(Y^*).$$
C Non-Cooperative Coordination and Rent-Equalization

If one views the use of continuous time simply as a modeling tool that opens up the toolkit of stochastic calculus, then it is no great step to allow players to coordinate "in between two instantaneous points in time". In discrete time this can be modeled fairly straightforwardly. For example, for every $t \in \mathbb Z_+$ coordination can take place at times $[t, t + 1) \cap \mathbb Q$. This idea can be extended to continuous time by using the techniques introduced in Dutta and Rustichini (1995). They view time as the two-dimensional set $T = \mathbb R_+ \times \mathbb Z_+$, endowed with the lexicographic ordering, denoted by $\geq_L$, and the standard topology induced by $\geq_L$. That is, a typical time element is a pair $s = (t, z) \in T$, which consists of a continuous and a discrete part. In the remainder, $t$ refers to the continuous part and $z$ to the discrete component. One can think of the continuous part of time as "process time" in which the stochastic environment evolves and the discrete part as "coordination time" in which players coordinate their actions. The great advantage of using this set-up is that it allows each part of the model to be analyzed in its most suitable way: stochastic evolution in continuous time and strategic interaction in discrete time.

Obviously, the stochastic structure that has been used so far needs to be adapted to this new definition of time. Since we essentially want to keep the stochastic process $(Y_t)_{t \geq 0}$ defined on the continuous part of time only, this is a fairly straightforward exercise. A filtration on $(\Omega, \mathcal F)$ is now a sequence of $\sigma$-fields, $(\mathcal F_{(t,z)})_{(t,z) \geq_L (0,0)}$, such that
$$\mathcal F_{(t,z)} \subseteq \mathcal F_{(t',z')} \subseteq \mathcal F,$$
whenever $(t, z) \leq_L (t', z')$. For all $y \in \mathbb R$, let $P_y$ be a probability measure on $(\Omega, \mathcal F)$ and define the process $(Y_{(t,z)})_{(t,z) \geq_L (0,0)}$ such that $Y_{(t,z)} = Y_t$, for all $t \in \mathbb R_+$ and $z \in \mathbb Z_+$. So, the extended process only moves in "process time" and is constant in "coordination time". This way, stochastic integrals can also be extended trivially to operate on $T$.

In this framework, the threshold strategies introduced in Section 3.1 are not so much the thresholds at which players stop, but the thresholds at which they are willing to engage in a coordination game. As argued by Fudenberg and Tirole (1985), this coordination game is most conveniently modeled as a "grab-the-dollar" game. This is an infinitely repeated game, the stage game of which is depicted in Figure 4. That is, play continues until at least one player "grabs the dollar", where mixed strategies in the stage game are allowed.

                  Grab                  Don't grab
Grab              $M^1(y), M^2(y)$      $L^1(y), F^2(y)$
Don't grab        $F^1(y), L^2(y)$      play again

Figure 4: The coordination game.

Given that the "grab-the-dollar" game is (potentially) infinitely repeated,
we can restrict attention to stationary strategies and denote the probability with which Player $i$ grabs the dollar in the stage game by $\alpha_i$. The payoff to Player $i$ in this repeated game depends on the probability that she is the first to grab the dollar. For a given pair $(\alpha_1, \alpha_2)$, the probability that Player $i$ grabs the dollar first is denoted by $p_i(y)$ and is equal to
$$p_i(y) = \alpha_i(1 - \alpha_j) + \alpha_i(1 - \alpha_i)(1 - \alpha_j)^2 + \cdots = \alpha_i\sum_{z=1}^{\infty}(1 - \alpha_i)^{z-1}(1 - \alpha_j)^z = \frac{\alpha_i(1 - \alpha_j)}{\alpha_i + \alpha_j - \alpha_i\alpha_j}. \qquad \text{(C.1)}$$
Similar computations show that the probabilities that Player $j$ grabs the dollar first, denoted by $p_j(y)$, and that both players grab the dollar simultaneously, denoted by $p_3(y)$, are equal to
$$p_j(y) = \frac{\alpha_j(1 - \alpha_i)}{\alpha_i + \alpha_j - \alpha_i\alpha_j}, \quad\text{and}\quad p_3(y) = \frac{\alpha_i\alpha_j}{\alpha_i + \alpha_j - \alpha_i\alpha_j}, \qquad \text{(C.2)}$$
respectively.

The expected payoff to Player $i$ in the repeated game then equals
$$W_y^i(\alpha^i, \alpha^j) = p_i(y)L^i(y) + p_j(y)F^i(y) + p_3(y)M^i(y). \qquad \text{(C.3)}$$
It is obvious that it is a weakly dominant strategy to set $\alpha^i = 1$ whenever $y \geq Y_F^1 \vee Y_F^2$ and $\alpha^i = 0$ whenever $y < Y_P^i$ or $y \in [Y_F^j, Y_F^i)$ in case $Y_F^j < Y_F^i$. Furthermore, for each $y \in S_P \setminus \partial S_P$, there is a unique mixed strategy equilibrium where
$$\alpha^i = \frac{L^j(y) - F^j(y)}{L^j(y) - M^j(y)}. \qquad \text{(C.4)}$$
The expected payoffs in this equilibrium are easily confirmed to be $W_y^i(\alpha^i, \alpha^j) = F^i(y)$. In addition, coordination takes place in finite "coordination time". So, in $S_P \setminus \partial S_P$, this way of modeling the coordination process automatically leads to rent-equalization.

The situation is different for $y \in \partial S_P$. First suppose that $Y_P^i < Y_P^j$. The mixed strategy derived above would give that $\alpha^i(Y_P^j) = 0$ and $\alpha^j(Y_P^j) > 0$, which results in $p_i(Y_P^j) = 0$ and $p_j(Y_P^j) = 1$. This is a very unsatisfactory outcome. However, the pair $(\alpha^i(Y_P^j), \alpha^j(Y_P^j)) = (1, 0)$ also constitutes a Nash equilibrium. This Nash equilibrium would give $p_i(Y_P^j) = 1$ and $p_j(Y_P^j) = 0$, which gives a justification for Assumption 4.

A fully degenerate case occurs when $Y_P^1 = Y_P^2 \equiv Y_P$. Then $\alpha^1(Y_P) = \alpha^2(Y_P) = 0$ and $p_1(Y_P) = p_2(Y_P) = p_3(Y_P) = 0/0$. However, if $D_{k\ell}^i(\cdot)$ and $\nu_y(\cdot)$ are $C^1$, then an application of L'Hôpital's rule gives:
$$p_1(Y_P) = p_2(Y_P) = 1/2, \quad\text{and}\quad p_3(Y_P) = 0.$$
This can be thought of as a limiting case where both players use the same infinitesimally small probability $\varepsilon > 0$ of grabbing the dollar. Since $p_3(Y_P)$ is of order $\varepsilon^2$ and $p_1(Y_P)$ and $p_2(Y_P)$ are of order $\varepsilon$, the probability of both players stopping simultaneously vanishes at a faster rate. This gives some justification for Assumption 3.
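The probabilities (C.1)–(C.2) and the rent-equalization property $W_y^i = F^i$ at the mixed equilibrium (C.4) are easy to verify numerically; the payoff numbers below are hypothetical and satisfy $L > F > M$:

```python
def stage_probs(ai, aj):
    """First-mover and simultaneous-stopping probabilities (C.1)-(C.2)."""
    denom = ai + aj - ai * aj
    return ai * (1 - aj) / denom, aj * (1 - ai) / denom, ai * aj / denom

def W(ai, aj, L_i, F_i, M_i):
    """Expected repeated-game payoff (C.3) to the player mixing with ai."""
    p_i, p_j, p_3 = stage_probs(ai, aj)
    return p_i * L_i + p_j * F_i + p_3 * M_i

# Hypothetical payoffs at some y in the preemption region
L1, F1, M1 = 3.0, 1.0, 0.0
L2, F2, M2 = 2.0, 1.0, 0.0
a1 = (L2 - F2) / (L2 - M2)   # equilibrium mixing probabilities (C.4)
a2 = (L1 - F1) / (L1 - M1)
# Rent equalization: each player's expected payoff equals her follower value.
```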
D Present Value Functions for GBM with Beta Negative Jumps and Mean-Reversion

From Itô's lemma it follows that the solution to
$$\frac{dY_t}{Y_{t-}} = \mu dt + \sigma dB_t - \int_0^1 z\bar N(dt, dz)$$
is
$$Y_t = y\exp\left\{\left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma B_t + \int_0^t\!\!\int_0^1\log(1 - z)\bar N(ds, dz) + \int_0^t\!\!\int_0^1\big[\log(1 - z) + z\big]m(dz)ds\right\}.$$
Because of independence between B_t and Ñ(dt, dz) it follows that

    E_y(Y_t) = y exp(μt) E_y[ exp( ∫_0^t ∫_0^1 log(1 − z) Ñ(ds, dz) ) ] × exp( ∫_0^t ∫_0^1 [log(1 − z) + z] m(dz) ds )
             = y exp(μt) exp( ∫_0^t ∫_0^1 [(1 − z) − 1 − log(1 − z)] m(dz) ds ) × exp( ∫_0^t ∫_0^1 [log(1 − z) + z] m(dz) ds )
             = y exp(μt).
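A Monte Carlo sketch can confirm that the jump terms cancel in expectation. The parameters below (drift, volatility, jump intensity λ = 1, and Beta(2, 5) jump sizes) are purely illustrative, not from the paper; the simulation uses the explicit solution above, in which the compensated jump integral plus the m(dz)ds term collapse to Σ_k log(1 − z_k) + λtE[z].

```python
import numpy as np

# Monte Carlo check that E_y(Y_t) = y*exp(mu*t) for GBM with beta-distributed
# negative jumps. All parameter values are illustrative.
rng = np.random.default_rng(0)
y, mu, sigma, lam, t = 1.0, 0.05, 0.1, 1.0, 1.0
a, b = 2.0, 5.0                       # jump sizes z ~ Beta(a, b) on (0, 1)
Ez = a / (a + b)                      # mean jump size E[z]
n = 200_000                           # number of simulated paths

B = rng.normal(0.0, np.sqrt(t), n)    # Brownian motion at time t
Nt = rng.poisson(lam * t, n)          # number of jumps by time t on each path
# Sum of log(1 - z_k) over the jumps of each path:
jump_sum = np.array([np.log(1.0 - rng.beta(a, b, k)).sum() for k in Nt])

# Exact solution: compensated jump integral + m(dz)ds term
# reduce to sum_k log(1 - z_k) + lam*t*E[z].
Y = y * np.exp((mu - 0.5 * sigma**2) * t + sigma * B + jump_sum + lam * t * Ez)

print(Y.mean(), y * np.exp(mu * t))   # sample mean vs. y*exp(mu*t)
```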
For the mean-reverting process

    dY_t = η(Ȳ − Y_t) dt + σ dB_t,

it is well known that

    E_y(Y_t) = Ȳ + (y − Ȳ) e^{−ηt}.
The present value functions now follow immediately from

    D_{kℓ}^i(y) = E_y[ ∫_0^∞ e^{−rt} R_{kℓ} Y_t dt ] = R_{kℓ} ∫_0^∞ e^{−rt} E_y(Y_t) dt.