Scholarly article on topic 'Nonzero-Sum Games of Optimal Stopping for Markov Processes'

Nonzero-Sum Games of Optimal Stopping for Markov Processes Academic research paper on "Mathematics"

CC BY
0
0
Share paper
Academic journal
Applied Mathematics and Optimization
OECD Field of science
Keywords
{""}

Academic research paper on topic "Nonzero-Sum Games of Optimal Stopping for Markov Processes"

Appl Math Optim

DOI 10.1007/s00245-016-9388-7

CrossMark

Nonzero-Sum Games of Optimal Stopping for Markov Processes

Natalie Attard1

© The Author(s) 2016. This article is published with open access at Springerlink.com

Abstract Two players are observing a right-continuous and quasi-left-continuous strong Markov process X. We study the optimal stopping problem V1 (x) = supT M1 (T,a) for a given stopping time a (resp. V2(x) = supa M^(T,a) for given t) where M* (r,a) = Ex [G1 (Xx)I (t < a) + H1 (Xa)I (a < t)] with G1, H1 being continuous functions satisfying some mild integrability conditions (resp. M2 (T,a) = Ex [G2(Xa)I (a < t) + H2(Xt)I (t < a)] with G2, H2 being continuous functions satisfying some mild integrability conditions). We show that if a = aD2 = inf{t > 0 : Xt e D2} (resp. t = tdi = inf{t > 0 : Xt e D1}) where D2 (resp. D1) has a regular boundary, then V1 (resp. V2 ) is finely continuous. If D2 (resp. D1) is also (finely) closed then t^2 = inf{t > 0 : Xt e D^2} (resp. a*D1 = inf{t > 0 : Xt e D^1}) where D^2 = V = G1} (resp. D2TD1 = {V2 = G2}) is optimal for player one (resp. player two). We then derive a

y2 ={ t d1 artial superh

in examples to construct a pair of first entry times that is a Nash equilibrium

partial superharmonic characterisation for V^ (resp. V2 ) which can be exploited

Keywords Nonzero-sum optimal stopping game • Nash equilibrium • Markov process • Double partial superharmonic characterisation • Principle of double smooth fit • Principle of double continuous fit

Mathematics Subject Classification Primary 91A15 • 60G40; Secondary 60J25 • 60G44

B Natalie Attard

natalie.attard@postgrad.manchester.ac.uk

1 School of Mathematics, The University of Manchester, Oxford Road, Manchester M13 9PL, UK

Published online: 07 November 2016

<1 Springer

1 Introduction

Optimal stopping games, often referred to as Dynkin games, are extensions of optimal stopping problems. Since the seminal paper of Dynkin [14], optimal stopping games have been studied extensively. Martingale methods for zero-sum games were studied by Kifer [32], Neveu [44], Stettner [55], Lepeltier and Maingueneau [41] and Ohtsubo [45]. The Markovian framework was initially studied by Frid [22], Gusein-Zade [26], Elbakidze [18] and Bismut [5]. Bensoussan and Friedman [2] and Friedman [20,21] considered zero-sum optimal stopping games for diffusions and developed an analytic approach by relying on variational and quasi-variational inequalities. Ekstrom and Peskir [16] proved the existence of a value in two-player zero-sum optimal stopping games for right-continuous strong Markov processes and construct a Nash equilibrium point under the additional assumption that the underlying process is quasi-left continuous. Peskir in [51] and [52] extended these results further by deriving a semi-harmonic characterisation of the value of the game without assuming that a Nash equilibrium exists a priori. In particular, a necessary and sufficient condition for the existence of a Nash equilibrium is that the value function coincides with the smallest superharmonic and the largest subharmonic function lying between the gain and the loss function which, in the case of absorbed Brownian motion in [0,1], is equivalent to 'pulling a rope' between 'two obstacles' (that is finding the shortest path between the graphs of two functions). Connections between zero-sum optimal stopping games and singular stochastic control problems were studied in [23,31] and[4]. Cvitanic and Karatzas [11] showed that backward stochastic differential equations are connected with the value function of a zero-sum Dynkin game. Advances in this direction can be found in [27]. Various authors have also studied zero-sum optimal stopping games with randomised strategies. For further details one can refer to [39] and the references therein. Zero-sum optimal stopping games have been used extensively in the pricing of game contingent claims both in complete and incomplete markets (see for example [15,17,24,25,30,33,35,36,38] and [19]).

Literature on nonzero-sum optimal stopping games is mainly concerned with the existence of a Nash equilibrium. Initial studies in discrete time date back to Morimoto [42] wherein a fixed point theorem for monotone mappings is used to derive sufficient conditions for the existence of a Nash equilibrium point. Ohtsubo [46] derived equilibrium values via backward induction and in [47] the same author considers nonzero-sum games in which the lower gain process has a monotone structure, and gives sufficient conditions for a Nash equilibrium point to exist. Shmaya and Solan in [54] proved that every two player nonzero-sum game in discrete time admits an e-equilibrium in randomised stopping times. In continuous time Bensoussan and Friedman [3] showed that, for diffusions, a Nash equilibrium exists if there exists a solution to a system of quasi-variational inequalities. However, the regularity and uniqueness of the solution remain open problems. Nagai [43] studies a nonzero-sum stopping game of symmetric Markov processes. A system of quasi-variational inequalities is introduced in terms of Dirichlet forms and the existence of extremal solutions of a system of quasi-variational inequalities is discussed. Nash equilibrium points of the stopping game are then obtained from these extremal solutions. Cattiaux and Lepeltier [8] study special right-processes, namely Hunt processes in the Ray topology, and they prove existence

of a quasi-Markov Nash Equilbrium. The authors follow Nagai's idea but use probabilistic tools rather than the theory of Dirichlet forms. Huang and Li in [29] prove the existence of a Nash equilibrium point for a class of nonzero-sum noncyclic stopping games using the martingale approach. Laraki and Solan [40] proved that every two-player nonzero-sum Dynkin game in continuous time admits an e-equilibrium in randomised stopping times. Hamadene and Zhang in [28] prove existence of a Nash equilibrium using the martingale approach, for processes with positive jumps. One application of nonzero-sum optimal stopping games is seen in the study of game options in incomplete markets, via the consideration of utility-based arguments (see [34]). Nonzero-sum optimal stopping games have also been used to model the interaction between bondholders and shareholders in the study of convertible bonds, when corporate taxes are included and when the company is allowed to claim default (see [9]).

In this work we consider two player nonzero-sum games of optimal stopping for a general strong Markov process. The aim is to use probabilistic tools to study the optimal stopping problem of player one (resp. player two) when the stopping time of player two (resp. player one) is externally given. Although this work does not deal with the question of existence of mutually best responses (that is the existence of a Nash equilibrium) the results obtained can be exploited further in various examples, to show the existence of a pair of first entry times that will be a Nash equilibrium. Indeed, the results derived here will be used in a separate work (see [1]) to construct Nash equilibrium points for one dimensional regular diffusions and for a certain class of payoff functions.

This paper is organised as follows: In Sect. 2 we introduce the underlying setup and formulate the nonzero-sum optimal stopping game. In Sect. 3 we show that if the strategy chosen by player two (resp. player one) is aD2 (resp. tDi ), the first entry time into a regular Borel subset D2 (resp. D1) of the state space, then the value function of player one associated with aD2 (resp. the value function of player two associated with tDj), which we shall denote by VjD(resp. VD), is finely continuous. In Section 4 we

shall use this regularity property of VJD2 (resp. VD) to construct an optimal stopping time for player one (resp. player two). In Sect. 5 we shall use the results obtained in Sects. 3 and 4 to provide a partial superharmonic characterisation for Vj (resp.

VD). More precisely if D2 (resp. Di) is also a closed or finely closed subset of the state space then VjD^ (resp. VT) can be identified with the smallest finely continuous function that is superharmonic in D'2 (resp. in D\) and that majorises the lower payoff function. In Section 6 we shall consider stationary one-dimensional Markov processes and we shall assume that there exists a pair of stopping times (ta*, a B*) of the form ta* = inf{t > 0 : Xt < A*} and aB* = inf{t > 0 : Xt > B*} where A* < B*, that is a Nash equilibrium point. We first show that VjB (resp. V^A ) is continuous at A* (resp. at B*). Then for the special case of one dimensional regular diffusions we shall use the results obtained in Sect. 5 to show that VjB (resp. V^A ) is also smooth at A* (resp. B*) provided that the payoff functions are smooth. This is in line with the principle of smooth fit observed in standard optimal stopping problems (see for example [49] for further details).

2 Formulation of the Problem

In this section we shall formulate rigorously the nonzero-sum optimal stopping game. For this we shall first set up the underlying framework. This will be similar to the one presented by Ekstrom and Peskir (cf. [16, p. 3]). On a given filtered probability space (V, F, (Ft)t>0, P x) we define a strong Markov process X = (Xt)t>o with values in a measurable space (E, B), with E being a locally compact Hausdorff space with a countable base (note that since E has a countable base then it is a Polish space) and B the Borel a-algebra on E. We shall assume that Px (X0 = x) = 1, that the sample paths of X are right-continuous and that X is quasi-left-continuous (that is XPn ^ Xp Px-a.s. whenever pn and p are stopping times such that pn \ p Px-a.s.). All stopping times mentioned throughout this text are relative to the filtration (Ft )t >0 introduced above, which is also assumed to be right-continuous. This means that entry times in open and closed sets are stopping times. Moreover F0 is assumed to contain all Px-null sets from F^ = a (Xt : t > 0), which further implies that the first entry times to Borel sets are stopping times. We shall also assume that the mapping x ^ Px (F) is (universally) measurable for each F e F so that the mapping x ^ Ex [Z] is (universally) measurable for each (integrable) random variable Z.

Remark 2.1 Note that a subset F of a Polish space E is said to be universally measurable if it is ¡-measurable for every finite measure ¡x on (E, B), where B is the Borel-a algebra on E .By ¡-measurable we mean that F is measurable with respect to the completion of B under If B* is the collection of universally measurable subsets of E then a function f : E ^ R is said to be universally measurable if f-1(A) e B* for all A e B(R), where B(R) is the Borel-a algebra on R.

Finally we shall assume that V is the canonical space E[0,to) with Xt (m) = m(t) for m e V. In this case the shift operator 0t : V ^ V is well defined by 0t (m)(s) = m(t+s) for m e V and t, s > 0.

The Markovian version of a nonzero-sum optimal stopping game may now be formally described as follows. Let G1, G2, H1, H2 : E ^ R be continuous functions satisfying Gi < Hi and the following integrability conditions;

Ex [sup\Gi (Xt )|] < and Ex [sup | Hi (Xt )|] < to (2.1)

for i = 1, 2. Suppose that two players are observing X. Player one wants to choose a stopping time t and player two a stopping time a in such a way as to maximise their total average gains, which are respectively given by

M1 (t, a) = Ex [G1 (Xt) I (t < a) + H (X a) I (a < t)] (2.2)

M2 (t, a) = Ex [G2 (Xa) I (a <t) + H2 (Xr) I (t < a)]. (2.3)

For a given strategy chosen by player two, we let

V1(x) = sup M1 (r,a) (2.4)

and for a given strategy t chosen by player one, we let

v2(x) = sup M2 (T,a). (2.5)

We shall refer to Vj (resp. Vt2) as the value function of player one (resp. player two) associated with the given stopping time a (resp. t) of player two (resp. player one). We shall assume that the stopping times in (2.4) and (2.5) are finite valued and if the terminal time T is finite we shall further assume that

Gi (Xt) = Hi (Xt) Px-a.s. (2.6)

for i = 1, 2. In this case one can think of X as being a two dimensional process ((t, Yt))t>0 so that Gi and Hi will be functions on [0, T]x E (cf. [49, p. 36]). (We note that if the terminal time T is infinite and the stopping times t and a are allowed to be infinite our results will still be valid provided that lim suptGi (Xt) = limsupt^ Hi (Xt) Px-a.s).

The game is said to have a solution if there exists a pair of stopping times (t*, a*) which is a Nash equilibrium point, that is M](t, a*) < M1 (t*, a*) and M^(t*, a) < M2 (t* , a*) for all stopping times t, a. This means that none of the players will perform better if they change their strategy independent of each other. In this case V^ (x) = M1 (t*, a*) is the payoff function of player one and Vt (x) = M^ (t*, a*) the payoff function of player two in this equilibrium. So V^ and V^ can be called the value functions of the game (corresponding to (t*, a*)). In general, as we shall see in Sect. 6, there might be other pairs of stopping times that form a Nash equilibrium point, which can lead to different value functions.

3 Fine continuity property

In this section we show that if the strategy chosen by player two (resp. player one) corresponds to the first entry time into a subset D2 (resp. D1) of E, whose boundary dD2 (resp. dD1) is regular, then V^ (resp. VT ) is continuous in the fine topology (i.e. finely continuous). For literature on the fine topology one can refer to [6,10] and [13]. We first define the concept of a finely open set and a regular boundary of a Borel subset of E.

Definition 3.1 An arbitrary set B c E is said to be finely open if there exists a Borel set A c B such that Px (pA2 > 0) = 1 for every x e A, where pA2 = inf {t > 0 : Xt e A2} is the first hitting time in A2.

Definition 3.2 The boundary dD of a Borel set D c E is said to be regular for D if Px (pD = 0) = 1 for every point x e dD, where pD = inf{t > 0 : Xt e D}.

We now introduce preliminary results which are needed to prove the main theorem of this section.

Lemma 3.3 For any given stopping time a (resp. t), the mapping x ^ Vj (x), (resp. x ^ V^(x)) is measurable.

Proof To prove measurability of the mapping x ^ VT (x) one can follow the proof in [16, p. 5, pt. 3] by replacing G 1 and G2 with G2 and H2 respectively (note that our payoff functions are assumed to be continuous, hence finely-continuous). So we shall only prove the result for Vj. Ekstrom and Peskir [16, p. 5, pt. 3] proved that for any given stopping time a, the function VJ of the optimal stopping problem supT Mx (t, a) = supT Ex [G1 (Xt)I (t < a) + H1(Xa)I (a < t)] is measurable. The same method of proof can be applied in this setting with the following slight modification: Let G at,X = Gx( X T) I (t < a) + H1( Xa) I (a < t) and G a'1 = G x( Xt) I (t < a) + H1( Xa) I (a < t). Note that the mapping t ^ G a,x is not right-continuous. Now for any stopping time t in the optimal stopping problem (2.4) we let Tn = Jn on {t-1 < t < Jn}

k — 'k 1 < t < } for each n > 1. It is well known that Tn, for

each n is a stopping time on the dyadic rationals Qn of the form 2n and that Tn I t as n ^œ. Since G 1(X) is right-continuous we have that

lim G0'1 = Gï1 Px-a.s. (3.1)

n^œ ln t

Since G1 < H1 we get, upon using (3.1) andFatou's lemma (the required integrability condition for using Fatou's lemma can be derived from the integrability assumption (2.1)), that

Ex [G?'1] < Ex [G = Ex [ lim G T'1] < liminf Ex [G f;1]

l T j l t j Ln^œ Tn j n^œ l Tn j

< sup sup Ex [GÏ1] =: sup V?'1(x). (3.2)

n>1teQn n>1

Taking the supremum over all t it follows that V^(x) < supn>1 Vn'X(x). On the other hand, V„,1(x) < V^(x) for all n > 1 so we get that V^(x) = supn>1 VH'l(x) for all x e E. Measurability of V^(x) follows from the measurability property of supn>1 V?'1(x) as in [16]. □

Lemma 3.4 Let D be a Borel subset of E and let x e d D, where d D is a regular boundary for D. Suppose that (pn is a sequence of stopping times such that pn i 0 Px-a.s. as n ^œ. Set aPn = inf{t > pn : Xt e D}. Then oPn \ 0 Px-a.s. as n ^œ.

Proof Let x e dD. By regularity of dD for any e > 0 there exists t e (0, e) such that Xt e D Px-a.s. Since oPn is a sequence of decreasing stopping times then oPn i j3 Px-a.s. for some stopping time j3. So suppose for contradiction that j3 > 0. Now Pn i 0 Px-a.s. and for each n we have oPn > j Px-a.s. So for any given œ e Q\N where Px (N) = 0 we have that Xt (œ) e D for all t e (0, j(œ)) and this contradicts the fact that d D is regular for D.

The next lemma and theorem, which we shall exploit in this study, provide conditions for fine continuity. The proofs of these results can be found in [13].

Lemma 3.5 A measurable function F : E ^ R is finely continuous if and only if

lim F(Xt) = F(x) Px-a.s. (3.3)

for every x e E. This is further equivalent to the fact that the mapping

t ^ F(Xt (m)) is right-continuous on R+ (3.4)

for every m e N where Px (N) = 0 for all x e E.

Theorem 3.6 Let F : E ^ R be a measurable function and suppose that K i c K2 C K3 c ... is a nested sequence of compact sets in E. Suppose also that pKn I 0 Px-a.s. asn ^ m, where pkh = inf{t > 0 : Xt e Kn}. If Ex [F(XPKn)] = F(x)

then F is finely continuous.

We next state and prove the main result of this section, that is the fine continuity property of V^ (resp. V^).

Theorem 3.7 Let (pn )£l i be any sequence of stopping times such that pn I 0 Px-a.s. as n ^ <x>. Suppose that Di, D2 are Borel subsets of E having regular boundaries dDi and dD2 respectively. Then

A™ Ex K2 (Xp„)] = <2 (x) 0.5)

Ex [< (XPn )] = < (x) (3.6)

where aD2 = inf{t > 0 : Xt e D2} and tDi = inf{t > 0 : Xt e Di}.

Proof We will only prove (3.5) as (3.6) follows by symmetry.

1° From Lemma 3.3 we know that Vj is measurable. This implies that

<2 (Xp„) = sUP MXpn (t, °D2) (3.7)

is a random variable. By the strong Markov property of X we have that

MXpn (t, °D2) = Ex [G i (XTp„ )I(tp„ < a pn) + Hi (XGpn)I (opn < Tpn)\FPn] (3.8)

where we set Tpn = pn + t ° 6pn and apn = pn + aD ° 6pn. It is well known that Tpn and aPn are stopping times (see for example [ i 0, Section i .3, Thoerem i i ]). Let us set

Mx (t, a\Fp) = Ex [G i ( Xt) I (t < a) + Hi (Xa) I (a < t)\Fp ] (3.9) for given stopping times t, a and p. Then from (3.7) and (3.8) we get that

VaD (XPn ) = esssupT Mx (TPn ,aPn\FPn ) = esssupT>pn Mx aPn\FPn )■ (3. ! 0)

The last equality follows from the fact that for every stopping time t > pn, there exists a function TPn : ^ x ^ ^[0, to] such that

TPn is FPn ® — measurable (3.11)

tf ^ TPn (m, tf) is a stopping time (3.12)

t(m) = Pn + TPn (M,0pn (m)) (3.13)

for all m e Note that the latter assertion can be derived from Galmarino's test. In particular if t is the first entry time of X into a set, then t = a + t o 0Pn and TPn can be identified with t in the sense that TPn (M,tf) = T(tf) for all m and tf. Taking expectations on both sides in (3.10) we get

Ex[<(Xpn)] = Ex[esssupT>pn M1 (t, aPn\Fpn)]. (3.14)

2o We next show that the family {M_x (t, aPn\FPn) : t > Pn} is upwards directed. For this we show that for any two stopping times t1 ,t2 > Pn there exists t3 > Pn such that M1 (T3, aPn \F0n) > M1 (T1, aPn\FPn) v M_x (T2, aPn\FPn). So let T1, T2 > Pn be any two stopping times given and fixed and define the set A = {m : M_x (t1; aPn\FPn )(m) > M1 (T2,aPn \^Pn)(m)}. Now A e FPn because M_x (t ,aPn\Fpu) for i = 1, 2 are FPn —measurable. Let t3 = t1 Ia + t2IAc. Since t1; t2 > pn it follows that t3 > pn. Also, {T3 < t} = {{T1 < t} n A} U {{T2 < t} n Ac} = {{T1 < t} n A ^{pn < t}} U {{T2 < t} n Ac n {pn < t}} e Ft. This follows from the fact that the sets A and Ac belong to FPn and that {ti < t}c {pn < t} for i = 1, 2. So t3 is a stopping time and hence,

M1 (T3, apn\Fpn) = Mx (T1 Ia + T2 A, aPn\Fpn) (3.15)

= M1 (t1, apn \Fpn) I a + M1 (t2 , apn\Fp,J Iac = M1 (T1,apn \Fpn) v M1 (T2,apn\FPn)

3o We next prove that if d D2 is a regular boundary for D2 then

<2 (x) < « Ex [Va1D2 (XPn)]' (3.16) For this we first show that

M1 (t, a pn) — M1 (t v pn, apn) = Ex [G 1(X t Apn) — G 1( Xpn)] (3.17)

Since aPn > pn we have that

MX (t, a pn) = Ex [(G1 (Xr) I (t < a pn) + Hi (X0pn) I (opn < t)) I (t < pn)] + Ex [(G i (X t Vpn) I (t V pn < a pn) + Hi (X apn) I ( a pn < t V pn )) I (t > pn)] = Ex[G i(Xx)I(t < pn)] + MX(t V pn, apn)

- Ex [(G i (Xpn) I (pn < apn) + Hi (Xapn) I (apn < pn))) I (t < pn)] = Ex [(Gi(Xr) - Gi(Xpn))I(t < pn)] + MX(t V pn, aPn) = Ex [Gi (XrApn) - Gi (Xpn)] + MX (t V pn,apn) (3. i 8)

fromwhich(3. i 7) follows. By considering separately the sets {a D2 < pn}, {a D2 > pn} (note that apn = aD2 on the set {aD2 > pn}), {aD2 > 0}, {aD2 = 0}, {t = 0} and {t > 0}, and by using Lemma 3.4 we get

Mx (t, aD2) - Mx (T,apn) = Ex[-Gi(xt)I(t < apn)I(0 < pn)I(aD2 = 0)I(t > 0)

+ (Hi (X0) - Hi (Xapn) I (apn < t)) I (0 < pn) I (aD2 = 0)I (t > 0)] = Ex [(Hi (X0) - Hi (Xapn))I (0 < pn) I (aD2 = 0) I (t > 0)] (3. i 9)

for n sufficiently large. The last equality in (3. i 9) can be seen as follows: If x e intD2 the interior of D2 then by the right-continuity property of the sample paths it follows that apn = 0 Px-a.s. for n sufficiently large. If on the other hand x e dD2 then by Lemma 3.4, we have that apn I 0 Px-a.s. Note that in the case x e D2 U dD2 then aD2 > 0 Px-a.s. and so the terms in the right-hand side of (3. i 9) vanish. Combining (3. i 7) and (3. i 9) we get

Mx (t, a D2) - Mx (t V pn, apn) = Mx (t, aD2) - Mx (t, apn) + Mx (t, apn) - Mx(t V pn, apn) < Ex[sup \Gi(X;^) - Gi(Xpn)|]

+ Ex [\Hi (X0) - Hi (Xapn )\I (0 < pn) I (aD2 = 0) I (t > 0)]. (3.20) for n sufficiently large. So

< (x )

< lim inf sup Mx (t V pn, apn)

= liminf sup Mx(T,apn) = liminf Ex[vi (XPn)]. (3.2 i )

T>pn L Upn J

The first inequality follows from (3.20). Indeed, since G and H are continuous, the composed processes G ( X) and H ( X) are right-continuous and so both terms

on the right hand side of (3.20) tend to zero as n ^ m (note that if x e dD2 this follows from Lemma 3.4 whereas if x e intD2 or x e D U dD the result follows as explained in the text before (3.20)). The last equality in (3.2i) follows from the fact that the family {Mx (t, aPn\FPn) : t > pn} is upwards directed (see step 2°), and so we can interchange the expectation and the essential supremum in (3.i4).

4° We show that ViD (x) > lim supn^m Ex [Vi (Xpn)]. From (3.i7) and (3.i9)

we get 2

Mx(t, aD2) > Mi(t V pn, aPn) - Ex[sup \Gi(Xt) - Gi(XPn)\] (3.22)

- Ex [\Hi(X0) - Hi(Xapn )\]

for any stopping time t. From this, together with Lebesgue dominated convergence theorem (upon recalling assumption (2.i)) and the continuity property of Gi and Hi we conclude that

Vj (x) > lim sup sup MX(t v pn, aPn) = lim sup Ex[Vi (Xpn

2 n^m T >pn n^m

We next present an example to show that if d D is not regular for D then V^ may not be finely continuous.

Example 3.8 Let E = R and B the Borel a-algebra on R. Suppose that X is the deterministic motion to the right, that is the process starts at x e R and Xt = x+1 Px -a.s. for each t > 0. In this case the fine topology coincides with the right-topology (cf. [53]), so a function is finely continuous if it is right-continuous. Define the functions G i (x) = ^r I (x < i ) + ^ I (x > i ) and Hi (x) = i I (x < i ) + £ I (x > i). Let D = {i } and aD = inf{t > 0 : Xt e D}. Then dD is not regular for D because if pD = inf{t > 0 : Xt e D} then pD = m P i-a.s. We show that for any given e > 0 the stopping time ts = (t{i ,m) + e) ia + T[i ,m) ia, where t{i ,m) is the first entry time of X in [i , m) and A = {t[x = aD}, is optimal for player one given the strategy aD chosen by player two. For each e > 0 we have that Te = T[i ,m)I(x > i) + (T[i ,m) + e)I(x < i ) = 01(x > i) + ( i - x + e)I(x < i ). So Mx (Te, aD) = Hx ( i )I (x < i ) + Gx (x)I (x > i ). On the other hand, for any stopping time t one can see that

Mx (t, aD) < Ex [G i (Xr) I (t < aD) + Hi (X aD) I (aD < t)]

< (Hi( i )I(t < i - x) + Hi( i )I( i - x < t))I(x < i ) + Gi(x + t)I(x > i) < Mx(Te, aD)

where the last inequality follows from the fact that Gi is decreasing in [i , m). Taking the supremum over all t we get that V^ (x) < Mx (Te, aD) for all x. Since on the other hand Vid (x) > Mx(Te, aD), it follows that V^(x) = Mx(Te, aD) for all x and so we must have that Te is optimal for player one, provided that player two selects strategy

aD. The value function is thus given by V^ (x) = H1(x) I (x < 1) + G 1(x) I (x > 1) which is not right-continuous.

4 Towards a Nash equilibrium

The main result of this section is to show that if aD2 (resp. tD1 ) is externally given as the first entry time in D2 (resp. D1), a set that is either closed or finely closed, and has a regular boundary, then the first entry time t^2 = inf{t > 0 : Xt e D^2} (resp. aTD1 = inf{t > 0 : Xt e d\D1 }) where DaD2 = {V1^ = G1} (resp. D^1 = {V^ = G2}) solves the optimal stopping problem Vj^ (x) = supT M_x (t, aD2) (resp. vtdd (x) = supa M2 (tD1, a)). The proof of this result will be divided into several lemmas and propositions.

Proposition 4.1 Let D1, D2 be Borel subsets ofE having regularies boundaries dD1 and d D2 respectively. Set tD1 = inf{t > 0 : Xt e D1} andaD2 = inf{t > 0 : Xt e D2}. Then,

V1d2 (x) < M1 (t!d2, aD2) + e (4.1)

V?D1 (x) < M2 (td1 ,aeD1) + e (4.2)

for any e > 0, where t1°2 = inf {t > 0 : Xt e D^2 ,e} and alD1 = inf {t > 0 : Xt e Dxd,e} with Dad2,e = {V1d2 < G1 + e} and D^1 ,e = {V?Di < G2 + e}.

Proof We shall only prove (4.1) as for (4.2) the result follows by symmetry. The proof will be carried out in several steps.

1o Consider the optimal stopping problem

V1 (x) = sup M1 (t, a D2) (4.3)

MM 1 (T, a D2) = Ex [G1(Xt)I(T < a D2) + H1(Xad2)I(aD2 < t)]. (4.4)

Recall that the mapping x ^ VJ^ (x) is measurable (cf. [16,p. 5] )andso VJ^ (XP) =

supT MMX (t, aD2) is a random variable for any stopping time p. By the strong Markov property of X it follows that for any stopping time P given and fixed

MM Xp(t, a D2) = Exp [G1( Xr) I (t < a D2) + H1( X ad2) I (aD2 < t)] (4.5) = Ex[G 1(Xp+to0p)I(p + t o Op < p + a D2 o Op) + H1 (Xp+ad2oOp)I(P + aD2 o Op < p + T o Op)\Fp]

and so we have that

VaD2 (Xp) = ess sup Ex[Gi(Xp+ro6p)I(p + t o 0p < p + aD2 o 0P)

+ Hi(Xp+od2o6p)I(p + ap, o Op < p + t o 6p)\fp] =: ess sup M/lX(tp, apDl\fp) (4.6)

where ap^ = inf {t > p : Xt e D2} and tp = p + t o Op. The gain process

G atD211 = G 1( Xt ) I (t < aD2 ) + H1( Xap2 ) I (a D2 < t ) is right-continuous, adapted and satisfies the integrability condition

Ex [sup\GatD2,1\] = Ex [sup\G i( Xt ) I (t < ap2 ) + Hi(X ap, ) I (ap2 < t )\] t>0 t>0

< Ex [sup\G 1( Xt ) I (t < ap2 )\+ sup \ Hi( Xap, ) I (ap, < t )\] < to

t>0 t>0

where the last inequality follows from assumption (2.1). So the martingale approach in the theory of optimal stopping (cf. [49, Theorem 2.2]) can be applied in this setting to deduce that there exists a right-continuous modification of the supermartingale

Sf2 = ess sup MM X (t, ap2\f ), (4.7)

known as the Snell envelope (for simplicity of exposition we shall still denote the right-continuous modification by SfD2), such that the stopping time Tt := Tf D = inf{s > t : S°sD2 = GT2 J} is optimal. It is known (cf. [49, Theorem 2 p. 29]) that the stopped process (S^? )s>t is a right-continuous martingale and so

Ex [SpaD2 ] = Ex [ ~SapD2% ] = Ex [SapDA%Ai0 ] = C = V1d2 (x) (4.8)

for every stopping time p < Te where Te := fl®2 = inf{t > 0 : S^2 < G^ ^ + e}. Using the fact that a^2 = aD2 for any stopping time p < aD2,that Px-a.s., the essential supremum and its right-continuous modification are equivalent at stopping times and that the essential supremum is attained at hitting times (cf. [49]) it follows, from (4.6), that

V1D2 (Xp) = SPD2 Px-a.s. (4.9)

for every stopping time p < aD2.

2o. We next show that Vj^ (x) = Vj (x). Since G1 < H1 we have that

V1d2 (x) > MM 1 (t, a D2) > M1 (T, a D2) (4.10)

for all stopping times t and for all x e E. Taking the supremum over all t we get that

VL (x) > V> (x). (4.11)

To prove the reverse inequality we will show that MX (t, aD2 ) < V^ (x) for all stopping times t so that supT MMX (t, aD2) = VJ^ (x) < V^ (x). By definition of Vj we have that VjD (x) > MX (t, aD2 ) for all stopping times t. Now take any stopping time t and set Te = (t + e) 1 a + t 1 Ac where A = {t = aD2}. (Note that Te is a stopping time since A e ft aod2 c ft ). If the time horizon T is finite, then we shall replace t + e in the definition of Te with (t + e) A T). Then we have that

Mx(Te, 0D2)

= Ex[(G 1 (XTe)I(Te < od2) + Hi(XaD2)I(od2 < Te))I(t = T)I(od2 = T) + (Gi(XTe)I(Te < od2) + Hi(XaD2)I(OD2 < Te))I(t < T)I(od2 = T) + (Gi(XTe)I(Te < od2) + Hi(XaD2)I(od2 < Te))I(T = T)I(od2 < T) + (G i(X Te) I (Te < od2 ) + Hi(XffD2) I (OD2 < Te))I (T < T )I (od2 < T)]

= Ex[(Gi(Xt)I(T < OD2) + Hi(X„D2)I(od2 < T))I(T = T)I(OD2 = T) + (Gi(XT)I(t < od2) + Hi(X„d2)I(°d2 < t))i(t < T)I(od2 = T) + (Gi(XT)I(t < od2) + Hi(X„D2)I(od2 < t))i(t = T)I(od2 < T) + (Gi(Xt)I(t < 0D2) + Hi(XfjD2 )(1 a + I(°d2 < t))I(t < T)I (od2 < T)]

= mm x (t, 0D2)

The first and third expressions in the second equality follow from assumption (2.6). The second expression also follows from assumption (2.6) together with the fact that I (Te < aD2) = I (t < aD2) on the set {t < T}. The last expression follows from the fact that I(Te = aD2) = 0 and I (aDl < Te) = 1A + I(aDl < t) on the set {t < T} n {aD2 < T}. So for any given stopping time t we have V¿D2 (x) >

M1 (Te, a D2) = MM 1 (t, aD2).

3°. We show that V^ (x) = Ex VDi (XaD2 Ate)] where Te := t^2 = inf{t > 0 : Xt e d1D2'£} with d1 = {VaD2 < G1 + e}. From step 2° it is sufficient to prove that VJ^ (x) = Ex VD (XaD2Ate)] where te := = inf{t > 0 : Xt e DDaD2,e} with D"^2'8 = {V1D2 < G1 + e}. By definition of te, for any given t < aD2 A te, we have

V^D (Xt) > G1(Xt) + e

= G1(Xt) I (t < aD2) + H1(XaD2) I (aD2 < t) + e

= G aD2,1 + e (4.12)

where the first equality follows from the fact that I (aD2 < t) = 0. Since aD2 A te < aD2, by (4.9) it follows that

^ (XaD2 ate) = Cafe Px-a.s. (4.13)

Using (4.9) again we get

V1d2 (Xt) = St D2 Px-a.s. (4.14)

So by (4.12) we can conclude that

sID2 > GaD2,1 + e Px-a.s. (4.15)

Recall, from step 2°, that Te = inf {t > 0 : S^2 < G^2,1}. By using the definition of Te together with (4.15) one can see that aD2 A fe < Te .By using (4.8), (4.9) and that (SsAT )s>0 is a martingale we further get that

C = % (x) = Ex [Z^A* Aie ] = Ex [K^A* ] = Ex [V^ (X a ®2 A*)] (4.16)

From (4.16) together with the fine continuity property of Vj^ (upon using Lemma 3.5), the right-continuity property of the composite process G(X) and the fact that

vGD2 = Va1D2,wehave

V1d2 (x) = Ex [S^Afe] = Ex [STD2 I (Te < a®2) + ~SZl I (aD2 < Te)] = Ex [VD (Xfe)I (Te < a D2) + H1(XaD2 )I (aD2 < Te)]

< Ex [(G 1(Xfe) + e)I(Te < a®2) + H1(XaD2)I(a®2 < fe)]

< M1 (Te, a®2) + e (4.17)

for any e > 0, where the third equality follows from the fact that

S^D, = ess sup Ex[G 1(Xr)I(t < a®2) + H1(XaDl)I(a®2 < T)\FaDl] f > a®2

= ess sup Ex[H1 (XaD2 )\FaD2] = H1 (XaD2) (4.18)

f > aD2

Lemma 4.2 Let {pnbe a sequence of stopping times such that pn t P Px-a.s. For a given Borel set D c E define the entry times aPn = inf {t > pn : Xt e D} and aP = inf {t > p : Xt e D}. If either

Disclosed, or (4.19)

D is finely closed with regular boundary d D, (4.20)

then apn t ap Px-a.s.

Proof Since pn is an increasing sequence of stopping times, aPn is increasing and thus aPn t P Px-a.s. for some stopping time ji. We need to prove that ft = aP Px — a.s.

1° We first show that j > p Px-a.s. Suppose, for contradiction, that the set Ô := [a e Œ : ft(rn) < p(œ)} is of positive measure. Since pn \ p Px-a.s. then for every œ e £2\N1, where Px(N\) = 0-a.s., there exists n0(œ) e N such that

pn(a) > ft(a) (4.21)

for all n > n0(a). But for each n e N we have that aPn < ft Px-a.s., so for every a e £2\N2 where Px (N2) = 0 there exists n1(a) e N such that

apn (a) < Pn (a) (4.22)

for all n > m(a). Combining (4.21) and (4.22) it follows that for every a e Q\(N1 U N2) there exists n(a) e N such that aPn(a) < pn(a) for all n > n(a). But this contradicts the fact that aPn > pn Px-a.s. So we must have that ft > p Px-a.s.

2° Let Q1 = {a e Q : ft(a) > p(a)} and Q2 = {a e Q : ft(a) = p(a)}. We prove that there exists a set N with Px (N) = 0 such that ft (a) = ap (a) for every a e (Q1 U Q2)\N.

(i.) Suppose first that Px (Q1) > 0. Since ap>n \ ft Px-a.s. then for every a e Q1 \N3, where Px(N3) = 0, there exists n2(a) e N such that apn (a) > p(a) for all n > n2(a). Moreover, since ap(a) > p(a) for every a e Q1\N4 where Px(N4) = 0 it follows that for every a e Q1\N, where N = N3 U N4, there exists n3(a) e N such that ap>n (a) = ap(a) for all n > n3(a). From this it follows that ft = ap Px-a.s. on Q1.

(ii.) Now suppose that Px (Q2) > 0. Let us consider first the case when (4.19)holds. So we have that ap>n (a) \ p(a) for every a e Q2\N5 where Px (N5) = 0. The fact that D is closed implies that Xapn e D for all n e N. Moreover, by the quasi-left-continuity property of X it follows that Xapn {a) (a) ^ Xp(a) (a) for every a e Q2\N6 where Px(N6) = 0. Again using the fact that D is closed we have that Xp(a) e D for every a e Q2\N7 where Px(N7) = 0. From this it follows that ap(a) = p(a) for every a e Q2\N with N = N6 U N7. By definition of Q2, this implies that ap = ft Px-a.s.on Q2.Now letus consider the case when (4.20)holds. Again by quasi-left-continuous of X it follows that Xapn (a)(M) ^ Xp(a)(a) for each a e Q2\N8 with Px (N8) = 0. Since D is not necessarily closed we have that X p(a)(a) e D U d D. Suppose first that Xp(a)(a) e D. This means that ap(a)(a) — p(a) and so we have that apna)(a) \ ap(a)(a). From this we can conclude that ft(a) = ap(a)(a). To prove that ap = p Px-a.s. on the set Q' := {a e Q : Xp(a) e dD} it is sufficient to show that Px ({ap > p}n Q'). Let nD = inf{t > 0 : Xt e D}. By the strong Markov property of X we have that

exp [ I (n d > 0)] = Ex [ I (p + n d ° 0P > p)\Fp ] (4.23)

Multiplying both sides in (4.23) by IQ and taking Ex expectation on both sides (note that Xp is Fp measurable) we get that Px ({ap > p}n Q') = E x [Iq' Exp [I (nd > 0)]] = 0 by the regularity property of X. □

Proposition 4.3 Let D1, D2 be either closed or finely closed subsets of E. Suppose also that their respective boundaries d D1 and d D2 are regular. Let tDi and aD2 be the first entry times into D1 andD2 respectively. Set t°°2 = inf {t > 0 : Xt e d[D2 } and aJDi = inf{t > 0 : Xt e D,Di 'e} where Df2,s = {Vi^ < Gi + s} and DT2Di,s = {Vt\ < G2 + s}. Then TSaD2 t t°D2 Px-a.s. and'a^1 t af1 Px-a.s., where t*D2 = inf{t > 0 : Xt e DaD2} with Df2 = {V a'1 = G1} and a*Di = inf{t > 0 : Xt e D,Di} with D,Di = {V^ = G2}.

Proof We shall only prove that t^2 t t*D2 Px-a.s. as the other assertion follows by symmetry. Recall the definition of V^D from (4.3)-(4.4). From step 2o in the proof of Proposition 4.1 it is sufficient to prove that 2 s t 2*, where we recall that 2s = inf{t > 0 : Xt e Df2,s} with DaD2'S = {Vi^ < Gi + s} and t^2 = inf{t >

0 : Xt e PID2} with DaD2 ={V1D = Gi}. For each s > 0 we have that 2s < t^2 .

Since fe increases as e decreases, Te t P as e I 0 where j < x°D2 Px-a.s. To prove that j = T^2 we first show that

Ex [V^ (Xp)] < liim^inf Ex [Vff1D2 (Xfe)] (4.24)

For stopping times ap = inf{t > j : Xt e D2} and a*e = inf{t > Te : Xt e D2} we have

Mx (f,ap) — Mx (f,aie)

= Ex [G1(XT)I (t < ap) + H1(Xap)I (ap < t) +H1(Xap)I (ap = t, ap = a fe) — G 1(XT)I(t < aie) —H1( Xae)I (a*e <t) — H1( X a*e) I (a fe = t, aie = ap)

+H1(Xap)I(afe <t) — H1(Xap)I(afe < t)]

< Ex [G1(Xr)(I(t < ap) — I(t < a*e) — I (aie = t, aie = ap))] + Ex[H1( Xap)( I (ap < t ) — I (a fe < t) + I (ap = T,ap = a%)] + Ex [(H1( Xap) — H1( Xa*e)) I (a*e < T)]

= Ex [(G 1(Xf) — H1(Xap)) I (a*e < f < ap)] + Ex [(H1(Xap) — H1(Xa*e))I(afe < T)]

< Ex [(sup \G1 (Xt)\ + sup \H1(Xt)\)I(afe < f < ap)]

+ Ex [\H1(Xap) — H1 (Xafe) \] (4.25)

where the first inequality follows from the fact that G1 < H1. By Lemma 4.2 we have that afe t ap as e I 0 and so the first term on the right hand side of the above expression tends to zero uniformly over all t. Since H1(X) is quasi-left-continuous, the second expression also tends to zero. By the strong Markov property of X (recall

that the expectation and the essential supremum in (3.14) can be interchanged) it follows that Ex [V 1D (X ft)] = supT >ft Mx (t, a ft). By (4.25) we have

sup M1 (t, ap) < liminf sup M1 (t, aie) < liminf sup M1 (t, aie)

T >j T >j e^0 t >te

= Ex WD (xO] (4.26)

Using again the fact that V^ = V^ so that V1D is finely continuous, together with the fact that G1 (X) is left-continuous over stopping times, we get, from Lemma 3.5, that V1D (X Te) < G 1( X Te) + e Px -a.s. Hence it follows that

Ex Vd, (XP)] < limff Ex [kD (x^2 )]

< liminf Ex[Gi(XTcD, ) + e] = Ex[Gi(Xp)]. (4.27)

e|0 L Te 2 J

Combining (4.27) with the fact that V1 (Xft) > G 1(Xft) Px-a.s. we conclude that

Vi (Xft) = G1 (Xft) Px-a.s. (4.28)

But T* 2 — inf {t > 0 : Xt e D1 2} and so we must have that ft > Tq 2. This fact together with ft < T* 2 proves the required result. □

We now state and prove the main result of this section. Theorem 4.4 Given the setting in Proposition 4.3, we have

v!d2 (x) = Ml (T^2 ,CD2), (4.29)

Vt2Di (x) = M, (tdJ , cTDl ). (4.30)

Proof We shall only prove (4.29) as the proof of (4.30) follows by symmetry. Recall, from Proposition 4.1, that Vj^ (x) < Mx(t^2 , aD2) + e. We show that

lim supe|0 Mx(t^2 , cd2) < Mi (t"D2 , cd2) so that V^ (x) < limsupe|0(M_x (tT2,

aD2 ) + e) < M1 (t^, a D2 ). For simplicity of exposition let us set Te := t1°2 and

f* := t1®2 . Now

M1 (f*, a®2) — M1 (fe, a®2) = Ex[G1(Xf*)I (t* < a®2) + H1(Xad2)I(a®2 < f*) + G1 (Xa®2) I (aD2 = T*, T* = Te)

— G 1(Xfe) I (fe < a®2 ) — H1(Xad2 ) I (a®2 < ?e)

— G 1(XaD2) I (ad2 = Te, Te = T* ) + G1( X?e) I (f* < a®2)

— G 1(x fe) I (f* < aD2 ) + G1 (X fe) I (f* = aD2 , T* = Te)

— G 1(Xfe)I(t* = a®2, T* = Te)] > Ex [(G 1(X)

— G 1(Xfe))I(T* < a®2)] + G 1(Xfe)(I(t* < a®2)

— I (fe < aD2 ) I (t* = a®2, T* = Te)) + G 1(Xf*)I(t* = a®2, T* = Te)

— G 1(Xfe) I ( T* = a D2 , T* = Te) + H1(XaD2 )(I (aD2 < T*)

— I (ad2 < Te) — I (fe = aD2, Te = T*))]

= Ex [(G1 (X) — G 1(Xfe))I(t* < a®2) + (H1(X a®2)

— G 1(X fe)) I (fe < a®2 < T*) + G 1(X f* ) I (f* = aD2 , T* = Te)

— G 1(Xfe)I(T* = a D2, T* = Te)] > —2Ex[\G 1(X)

— G 1(Xfe)^ + Ex[(H1(Xid2 ) — G 1(X fe))I (fe < aD2 < T*)] > —2Ex [\G1(Xf*) — G 1(Xfe)\] — Ex [(sup \H1(Xt)\

+ sup\G 1( Xt )\) I (fe < a®2 < T*)] (4.31)

The first inequality follows from the assumption —G1 > — H1 whereas the second equality follows from the fact that

I (ad2 < T*) — I (a d2 < Te) — I (fe = a®2, Te = T*) = —(I (f* < a®2)

— I (fe < aD2) + I (t* = aD2, T* = Te)) = I (fe < a®2 < T*) (4.32)

The penultimate inequality follows from the fact that

(G 1(Xf*) — G 1(Xfe))I(T* < ao2) + (G1(X?*) — G 1(XI(t* = a®2, f* = fe)

> —2\G 1(X ) — G1( Xfe)\ (4.33)

whereas the last inequality follows from the fact that

(H1( Xid2 ) — G1( Xfe)) I (fe < a®2 < f*) > —((sup \ H1( Xt )\

+ sup\G 1( Xt )\) I (fe < a®2 < f*)) (4.34)

Letting e I 0in(4.31) we get, from Proposition 4.3, that I (fe < aD2 < t*) converges to zero uniformly over aD2. Moreover, by using the quasi-left-continuity property of

G 1(X) we conclude that Mx(taD2) > limsupe^0 Mx(te, aD2) and this completes the proof. □

5 Partial superharmonic characterisation

The purpose of the current section is to utilise the results derived in Sects. 3 and 4 to provide a partial superharmonic characterisation of V^ (resp. VT ) when the stopping time aD2 (resp. tD1 ) of player two (resp. player one) is externally given. This characterisation attempts to extend the semiharmonic characterisation of the value function in zero-sum games (see [51] and [52]) and can informally be described as follows: Suppose that G2 = —ex in (2.3). Then the second player has no incentive of stopping the process and so (2.4) reduces to the optimal stopping problem VX (x) = supT Ex [G1(Xr)]. By results in optimal stopping theory VX admits a superharmonic characterisation. More precisely VX can be identified with the smallest superharmonic function that dominates G1 (see [49, p. 37 Theorem 2.4]). However, if G2 is finite valued then there might be an incentive for the second player to stop the process. This raises two questions: (i) is the superharmonic characterisation of Vj still valid before the second player stops the process, (ii) does Vj coincide with H1 at the time the second player stops the process? If the second player selects the stopping time a := aD2 = inf {t > 0 : Xt e D2} where D2 is a closed or finely closed subset of the state space E having a regular boundary d D2 then the above questions can be answered affirmatively and we will say that the value function of player one associated with the stopping time aD2 admits a partial superharmonic characterisation. To be more precise let us consider the set

SupD2 (G1, K1) = {F : E ^[G 1, K1] : F is finely continuous, F = H1 in D2,

F is superharmonic in D2} (5.1)

where K1 is the smallest superharmonic function that dominates H1 and [G1, K1] means that G 1(x) < F (x) < K1(x) for all x e E. Then the value function of player one can be identified with the smallest finely continuous function from SupD2 (G1, K1).

Likewise, suppose that player one selects the stopping time tD1 = inf {t > 0 : Xt e D1} where D1 is a closed or finely closed set having a regulary boundary dD1, and consider the set

SupD1 (G2, K2) = {F : E ^ [G2, K2] : F is finely continuous, F = H2 in D1,

F is superharmonic in D\} (5.2)

where K2 is the smallest superharmonic function that dominates H2 and [G2, K2] means that G2(x) < F(x) < K2(x) for all x e E. Then the value function of player two associated to tD1 can be identified with the smallest finely continuous function from SupD, (G2, K2).

^DC* Dr

Fig. 1 The double partial superharmonic characterisation of the value functions in a nonzero-sum optimal stopping game for absorbed Brownian motion

The above characterisation of V1 and Vf;D can be used to study the existence of a Nash equilibrium. Indeed suppose that one can show the existence of finely continuous functions u and v such that:

(i.) u lies between G1 and K1, u is identified with H1 in the region D2 = {v = G2} and u is the smallest superharmonic function that dominates G1 in the region {v > G2}

(ii.) v lies between G2 and K2, v is identified with H2 in the region D1 = {u = G1} and v is the smallest superharmonic function that dominates G2 in the region {u > G1}.

Then under the assumption that D1 and D2 have regular boundaries and are either closed or finely closed, u and v coincide with Vj and V^ respectively. In this

case we shall say that together, Vj and Vf-^ admit a double partial superharmonic characterisation (see Fig. 1) and can be called the value functions of the game (2.4)-(2.5). Moreover, the pair (td1 , a D2) will form a Nash equilibrium point.

To prove the partial superharmonic characterisation of Vj (resp. Vf;D^) we first

show that for any stopping time a (resp. t), Vj (resp. Vf2) is bounded above by K1 (resp. K2). For this we define the concept of superharmonic functions.

Definition 5.1 Let C be a measurable subset of E and D = E\C. A measurable function F : E ^ R is said to superharmonic in C if Ex [F (XpAaD)] < F (x) for every stopping time p and for all x e E, where aD = inf{t > 0 : Xt e D}. F is said to be superharmonic if Ex[F(Xp)] < F(x) for every stopping time p and for all x e E.

Lemma 5.2 Let Sup(H1) = {F : E ^ R : F > H1, F is superharmonic} be the collection of superharmonic functions that majorise H1. Then for any given stopping time a we have

Vl < inf F. (5.3)

u F eSup( Hi)

Similarly let Sup( H2) = {F : E ^ R : F > H2, F is superharmonic}. Then for any given stopping time t we have

v2 < inf F. (5.4)

t F eSup(H2)

Proof We shall only prove (5.3) as (5.4) can be proved in exactly the same way. Take any stopping time a and any F e Sup( Hi). Then

Mi (t, a) < Ex [F (Xt) I (t < a) + F (Xa) I (a < t)] = Ex [F (XtAa)] (5.5)

for all stopping times t . The first two inequalities follow from the fact that G i < Hi < F. F is superharmonic, so Ex [F (Xp)] < F (x) for any stopping time p and for all x e E. In particular Ex [F (XtAa)] < F (x) for every stopping time t. Thus

MX (T,a) < F (x) (5.6)

for all stopping times t and for all x e E. Taking the infimum over all F in Sup(Hi) on the right hand side and the supremum over all t on the left hand side of (5.6) we get the required result. □

Theorem 5.3 i. Suppose that Ki is the smallest superharmonic function that dominates Hi. Let v > G 2 be a finely continuous function, with D2 = {v = G 2}, such that u := inf F eSupi (Gi k ) F where SupD2 (G i, Ki) is the collection of functions given

in (5.1), exists. If the boundary d D2 of D2 is regular for D2, then

u(x) = VJD2 (x) (5.7)

for all x e E where aD2 = {t > 0 : Xt e D2}.

ii. Similarly suppose that K 2 be the smallest superharmonic function that dominates H2. Let u > Gi be a finely continuous function, with Di = {u = G such that v := inf F eSup2^ (G2 K2) F where Sup2Dl (G2, K2) is the collection of functions defined

in (5.2), exists. If the boundary d D i of D i is regular for D i , then

v(x) = Vt2Di (x) (5.8)

for all x e E where tDi ={t > 0 : Xt e Di}.

Proof We shall only prove (i.) as (ii.) follows by symmetry. We first show that u > VjD2. Take any F e SupD2 (G i, Ki). We know that F is superharmonic in D2 so

F (x) > Ex [f (xtAaDz)] = Ex[F(Xt)I(t < a D2) + F(X„d2)I(aD2 < t)]

Gi (Xt) I(t < a D2) + F (XaD2) I(aD2 < t)

for every stopping time t and for all x e E, where the last inequality follows from the fact that F > G i. Since v is finely continuous and G2 is continuous (hence finely continuous), D2 is finely closed and thus by the definition of SupD2 (Gi, Ki), upon using (3.4) (since F is finely continuous and Hi is continuous hence finely continuous) it can be seen that F(X ) = Hi(Xa^). Thus for any F e SupD2(Gi, Ki) we have

F (x) > Ex

Gi (Xr) I(t < od2) + Hi (xod2) I(od2 < t)

(5.i0)

for every stopping time t and for all x e E. Taking the infimum over all F and the supremum over all t we get that u(x) > V^ (x) for all x e E. We next show that

u < VjD2. For this it is sufficient to prove that Vj^ e SupD2 (G i, Ki) as the result

will follow by definition of u. Recall, from Theorem 3.7, that VjD^ is finely continuous

since the boundary d D2 is assumed to be regular for D2. The fact that VjD< Ki

follows from Lemma 5.2. To show that V^D is bounded below by Gi we note that

Vad > M_x (t, od2 ) for any t in particular for t = 0. Since M_x (0, a d2) = G i (x ) the

result follows. To prove that V^ = Hi in D2 we take any x e D2 so that aD2 = 0.

Then by selecting any stopping time t > 0 we get that V0i(x) > M_x(t, 0) = Hi(x). On the other hand

V0i(x) < sup(Hi(x)Px(t = 0) + Hi(x)Px(t > 0)) = Hi(x). (5.ii)

From this we conclude that V0i(x) = Hi(x). It remains to prove that VjD^ is super-harmonic in D2 By the strong Markov property of X we have

Ex [ vod2 (xp^d2 )]

= E4 sup M\pAaD2 (t, aD2 )]

= E^ ess sup M_x (p A aD2 + T o 9pAaD2 , p A CTD2 + od2 ◦ 0pAaD2 \FpAaD2 )]

= Ex[ ess sup M_x(t, OD2\FpAOD2)]

= sup M_x (t, a D2)

T >pA a^2

< sup Mx (t, aD2) = vJ (x) (5.i2)

for any stopping time p where we recall that Mx(t, a\Fp) = Ex[Gi(XT)I(t < a) + Hi( Xa) I (a < t)\Fp ] for stopping times t, a and p. □

6 The case of stationary one-dimensional Markov processes

In this section we shall assume that the Markov process X takes values in R and is such that Law(X\Px) = Law(Xx\P). We shall also assume that there exist points A*

and B* satisfying — OO < A* < B* < o such that (i.) for given D2 of the form [B *, 0), the first entry time ta* = inf{t > 0 : Xt < A*} (as obtained from Theorem 4.4) is optimal for player one and (ii.) for given D 1 of the form (—0, A*], the first entry time aB* = inf{t > 0 : Xt > B*} (as obtained from Theorem 4.4) is optimal for player two.

So in this section we will assume the existence of a pair (ta* , aB*) that is a Nash equilibrium.

6.1 The principle of double continuous fit

We prove that Vjg (resp. V2A ) is continuous at A* (resp. B *). We shall refer to this result as the principle of double continuous fit. For this we shall further assume that the following time-space conditions hold:

X-A+T ^ XA* P - a.s. (6. 1 )

Xfl- ^ xf* P - a.s. (6.2)

as s, h i 0 and

Xap:+s ^ XA* P - a.s. (6.3)

XB:-e ^ Xbp* P - a.s. (6.4)

whenever pn is a sequence of stopping times such that pn t p. Conditions (6. i )-(6.4) imply that the mapping x ^ Xx (stochastic flow) is continuous at A* and B *. Stochastic differential equations driven by Levy processes, for example, satisfy this property under regularity assumptions on the drift and diffusion coefficients (see for example ([37, p. 340])).

We shall first state and prove the following lemma.

Lemma 6.1 Let aA = inf {t > 0 : Xt > B*} be the optimal stopping time for player

two under PA* and let aA*+s = inf {t > 0 : Xt > B*} be the optimal stopping time of player two under P A*+e, for given e > 0. Then, if condition (6.3) is satisfied we have that aBA*+e t aBA* as e 0. Similarly, if tA* = inf{t > 0 : Xt < A*} is the optimal stopping time for player two under PB* and TA*-e = inf {t > 0 : Xt < A*} is the optimal stopping time of player one under P B*-e, for given e > 0, then if condition

B _e B

(6.4) is satisfied we have that tA * t tA * as e I 0.

Proof We shall only prove that aB,**+e t aD* as e ± 0. The fact that TB**-e t tAb** as e I 0 can be proved in the same way. Since Law(X |Px) = Law(Xx |P) we have that aA**+e is equally distributed as B A**+e := inf{t > 0 : xA*+e > B*} under P whereas aA* is equally distributed as o^ * := inf{t > 0 : X^A* > B*} under P. Now aA*+e t Y as e 0 for some stopping time y < &A*. So to prove the result it remains to show that y > &A*. By the time-space condition (6.3) we have that XA*A++s ^ XA*

* ° B*

P-a.s. as e I 0. Now since XA*A++e > B* for each e > 0 it follows that XA* > B*.

But this implies that y > aB and this proves the required result. □

Proposition 6.2 Suppose that the payoff functions Gi, Hi for i = 1, 2 are also assumed to be bounded. Then the value functions V1B and V^A are continuous at A* and B* respectively.

Proof We shall only prove the result for VjB as for V^A the result will follow by symmetry. To prove this it is sufficient to show that

So (Vc1b* (A*+ e} - V^b* (A*)) = 0 (6.5)

because lime^o(V'¿B* (A* - e) - VjB* (A*)) = lime^Gi(A* - e) - G1 (A*)) = 0 by continuity of G1. Since Vj- (x) > G 1(x) for all x e R and Vj- (A*) = G1(A*) we get that V1Bt (A* + e) - VjB* (A*) > G1 (A* + e) - G1(A*) for every e > 0. So by continuity of G1 we have that lim infe;0(Vj^ (A* + e) - Vj- (A*)) > 0. We next show that lim supe^0(VjB (A* + e) - VjB (A*)) < 0 so that we get the required

result. Given e > 0, let ta* = inf{t > 0 : Xt < A*} be the optimal stopping

time for player one under PA*+e. Then we have that T:*+e is equally distributed as

TAl+e = inf {t > 0 : xA*+e < A*} under P. Now from the optimality of TA**+e under P A +e we have that

<* (A* + e) - <* (A*) < MA*+e (rA*?*, <*+e) - MA* (tA*?*, <*)

= E a *+e [G 1(Xta:+e) i (rA:+e < oA:+e)] - Ea* [G 1(Xta:+e) i (rA:+e < <*)] + Ea* +e [H1(XA4te )I (o£:+e < rA*+e)] - Ea* [H1(X4: )I (<* < T^*)] = E[G 1(xA*C:e)l(Tt+e < *A**+e) - G^x^)I(*:**-*< <*)]

TA * TA*

+ E[ H1(xAA++ee) i (&B:+e < Tt+e) - H1(xAA*) i «* < )] (6.6)

x B„

The first expectation in the last expression on the right hand side of (6.6) can be written as

E[(G1(xA;*++ee)I(xA** < ¿A+e) - G1(xA:*+e)I(A+e < <*))I(ri:+e < <*)

TA* TA *

+ (G1(xA:+es)I(rt+e < aA+e) - G1(xA:*+e)I(f:A;+e < a^))i(rA+e = )

+ (G1(xA:*++ee) i (xt+e < *Bte) - G1(xA:*+e )i (at* < )) i (T:*+e > &A*)]

= e(GuxrA:*++ej -G1(xA:+))I(r:*+e <a:**)] (6.7)

TA * TA*

The last expression in (6.7) follows from the fact that P(TA*+e = a^*) = 0 for all e > 0 sufficiently small (upon assuming that the hitting times considered are finite).

and from the fact that aB*+e t a "A* as e 0 (see Lemma 6.1).

In a similar way one can show that the second expectation in the last expression on the right hand side of (6.6) can be written as

E[(Hi (XA*t+e) - Hi^A*))I«* < ^)] (6.8)

"b* "b*

Since Gi and Hi are continuous and bounded then by the time space conditions (6.1) and (6.3) together with Lemma 6.1 and Fatou's lemma we get the required result. □

Remark 6.3 The assumption of boundedness on Gi and Hi in Proposition 6.2 can be relaxed. For example, the result will also hold provided that G i (XAa++s) - Gi( XA* *+e)

T A * TA *

and Hi (XA*a++s ) - Hi (XA* ) are bounded above by some integrable random variables

aB* aB*

Zi and Z2 respectively. Similarly the boundedness assumption on G2 and H2 can be relaxed.

6.2 The principle of double smooth fit

In this section we will consider the special case when X is a one dimensional regular diffusion process and we shall assume that V^ and V^A are obtained from the double partial superharmonic characterisation as explained in Sect. 5. More precisely we shall assume that the functions u, v introduced in Theorem 5.3 (i.) coincide with those from Theorem 5.3 (ii.) so that a mutual response is assumed to exist. The aim is to use this characterisation to derive the so-called principle of double smooth fit. This principle is an extension of the principle of smooth fit observed in standard optimal stopping problems (see [49]). We note that in the case of more general strong Markov processes in R this principle may break down. As observed in standard optimal stopping problems this may happen for example when the scale function of X is not differentiable (see [50]) or in the case of Poisson process (see [48]). Carr et. al in [7], for example, also showed that this principle breaks down in a CGMY model.

Remark 6.4 Examples of nonzero-sum optimal stopping games for one dimensional regular diffusion processes, for which the optimal stopping regions are of the threshold type are given in [1] and [12]1. In particular the authors therein provide sufficient conditions for existence and uniqueness of Nash equilibria.

So suppose that X is a regular diffusion process with values in R. We shall also assume that the fine topology coincides with Euclidean topology so that fine continuity is equivalent to continuity in the classical sense. In this context we can define the scale

1 The second manuscript was available to the author after the first draft of the paper was published on The University of Manchester Website http://www.maths.manchester.ac.uk/our-research/research-groups/ statistics-and-its-applications/research-reports/.

function S of the process X, that is the mapping S : R ^ R which is a strictly increasing continuous function satisfying

S(d) - S(x) S(x) - S(c)

Px (Tc <Td) = ^-^-T and Px (Td < Tc) = -r-r--— (6.9)

S(d) - S(c) S(d) - S(c)

for any c < x < d where Ty = inf{t > 0 : Xt = y} for y e R. Since we are assuming that D2 = [B*, m) then for any given a, b e (-m, B*) such that a < b we have

M E r Y. i ^S(b) - S(x )t , ^S(x) - S(a) u(x) > Ex[u(x^ V = Ex[u(xra,bK = u(a) Sib) - sW + u(b) S(b) - S(a)

(6.10)

for all a < x < b, where Ta,b = inf{t > 0 : Xt / (a, b)}. The first inequality follows from the fact that u is superharmonic in D2 (recall Definition 5.1). This means that u is S-concave in every interval in (-m, B*) and as for concave functions this implies that the mapping

u(y) - u(x)

y ^ --(6.11)

' S(y) - S(x) y '

is decreasing provided that y = x .By symmetry we have that v is S-concave in every interval in (A*, m) and that the mapping y ^ v(y)-S(x) is decreasing provided

y = x.

From the results for the Dirichlet problem (see for example [49, (7.1.2)-(7.1.3)]) one can show that

Lxu = 0 and Lxv = 0 in C1 n C2u |3C1 = G1 and u \sc2 = H1 v \dc1 = H2 and v \dc2 = G2

where C1 = D\ and C2 = D2. The aim is to show that u' | A* = G\ | A* and v' |B* = G2 | B* These two conditions will be referred to as the principle of double smooth fit. Informally this principle states that the optimal stopping boundary points A* and B* must be selected in such a way that u and v are respectively smooth at these points. The proof of this result follows in a similar way as the proof of Theorem 2.3 in [50]. We shall first state the following lemma, the proof of which can be found in [50].

Lemma 6.5 Suppose that f, g : R_+ → R are two continuous functions such that f(0) = g(0) = 0, f(ε) > 0 whenever ε > 0, and g(δ) > 0 whenever δ > 0. Then for every ε_n ↓ 0 as n → ∞ there exist ε_{n_k} ↓ 0 and δ_k ↓ 0 as k → ∞ such that

\lim_{k→∞} \frac{f(ε_{n_k})}{g(δ_k)} = 1.
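A simple example (ours, not taken from [50]) illustrating Lemma 6.5: take f(ε) = ε and g(δ) = δ². Given any ε_n ↓ 0 one may keep the whole sequence, ε_{n_k} = ε_k, and choose δ_k = √ε_k ↓ 0, so that

\lim_{k→∞} \frac{f(ε_{n_k})}{g(δ_k)} = \lim_{k→∞} \frac{ε_k}{(\sqrt{ε_k})^2} = 1.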

Proposition 6.6 Suppose that D_1 is of the form (−∞, A*] and D_2 of the form [B*, ∞) for some points A*, B* such that A* < B*. Suppose that G_1 is differentiable at A* and G_2 is differentiable at B*. If the scale function S of X is differentiable at A* and B*, then u'(A*) = G_1'(A*) and v'(B*) = G_2'(B*).

Proof We shall first consider the case when S'(A*) ≠ 0. Since u is superharmonic in (−∞, B*) we have that E_x[u(X_{ρ ∧ σ_{D_2}})] ≤ u(x) for all stopping times ρ and all x ∈ R. Define the exit time τ_ε = inf{t > 0 : X_t ∉ (A* − ε, A* + ε)} for ε > 0 such that A* + ε < B*. Then

E_{A*}[u(X_{τ_ε ∧ σ_{D_2}})] = E_{A*}[u(X_{τ_ε})] ≤ u(A*) = u(A*) P_{A*}(X_{τ_ε} = A* + ε) + G_1(A*) P_{A*}(X_{τ_ε} = A* − ε)    (6.12)

where the last equality follows from the fact that u(A*) = G_1(A*). On the other hand we have

E_{A*}[u(X_{τ_ε})] = u(A* + ε) P_{A*}(X_{τ_ε} = A* + ε) + G_1(A* − ε) P_{A*}(X_{τ_ε} = A* − ε).    (6.13)

By combining (6.12) and (6.13) it follows that

(u(A* + ε) − u(A*)) P_{A*}(X_{τ_ε} = A* + ε) ≤ (G_1(A*) − G_1(A* − ε)) P_{A*}(X_{τ_ε} = A* − ε).    (6.14)

Since we are assuming that S and G_1 are differentiable at A* and that S'(A*) ≠ 0 we get, upon using the facts that P_{A*}(X_{τ_ε} = A* − ε) = \frac{S(A* + ε) − S(A*)}{S(A* + ε) − S(A* − ε)} and P_{A*}(X_{τ_ε} = A* + ε) = \frac{S(A*) − S(A* − ε)}{S(A* + ε) − S(A* − ε)}, that

\frac{(u(A* + ε) − u(A*))(S(A*) − S(A* − ε))}{S(A* + ε) − S(A* − ε)} ≤ \frac{(G_1(A*) − G_1(A* − ε))(S(A* + ε) − S(A*))}{S(A* + ε) − S(A* − ε)}

which is equivalent to (since the scale function is increasing)

\frac{u(A* + ε) − u(A*)}{ε} ≤ \frac{G_1(A*) − G_1(A* − ε)}{ε} \cdot \frac{S(A* + ε) − S(A*)}{S(A*) − S(A* − ε)}.    (6.15)

Taking the limit as ε ↓ 0 on both sides of (6.15) and using the fact that G_1 and S are differentiable at A* with S'(A*) ≠ 0, we get that u'(A*+) = lim_{ε↓0} (u(A* + ε) − u(A*))/ε ≤ G_1'(A*). On the other hand, since u ≥ G_1 and u(A*) = G_1(A*), we have

\frac{u(A* + ε) − u(A*)}{ε} ≥ \frac{G_1(A* + ε) − G_1(A*)}{ε}.    (6.16)

Taking limits on both sides of (6.16) as ε ↓ 0 we get that u'(A*+) ≥ G_1'(A*). So we can conclude that u'(A*+) = G_1'(A*). On the other hand, since u(A* − ε) = G_1(A* − ε) for all ε > 0 sufficiently small, we have that u'(A*−) = lim_{ε↓0} (u(A*) − u(A* − ε))/ε = G_1'(A*). So the result is proved in the case S'(A*) ≠ 0. Now suppose that S'(A*) = 0.

Since G_1 ≤ u we have, for any ε, δ > 0 sufficiently small, that

\frac{G_1(A* + ε) − G_1(A*)}{S(A* + ε) − S(A*)} ≤ \frac{u(A* + ε) − u(A*)}{S(A* + ε) − S(A*)} ≤ \frac{u(A* − δ) − u(A*)}{S(A* − δ) − S(A*)} = \frac{G_1(A* − δ) − G_1(A*)}{S(A* − δ) − S(A*)}    (6.17)

where the second inequality follows from the fact that the mapping y ↦ (u(y) − u(x))/(S(y) − S(x)) is decreasing. Multiplying both sides of (6.17) by (S(A* + ε) − S(A*))/ε and using the fact that the scale function is increasing we get

\frac{G_1(A* + ε) − G_1(A*)}{ε} ≤ \frac{u(A* + ε) − u(A*)}{ε} ≤ \frac{G_1(A* − δ) − G_1(A*)}{S(A* − δ) − S(A*)} \cdot \frac{S(A* + ε) − S(A*)}{ε}.    (6.18)

Setting f(ε) := (S(A* + ε) − S(A*))/ε and g(δ) := (S(A*) − S(A* − δ))/δ in Lemma 6.5 (note that f(0+) = g(0+) = 0 since S'(A*) = 0), we get that for any ε_n ↓ 0 there exist ε_{n_k} ↓ 0 and δ_k ↓ 0 as k → ∞ such that

G_1'(A*) ≤ \lim_{k→∞} \frac{u(A* + ε_{n_k}) − u(A*)}{ε_{n_k}} ≤ \lim_{k→∞} \frac{f(ε_{n_k})}{g(δ_k)} \cdot \frac{G_1(A* − δ_k) − G_1(A*)}{−δ_k} = G_1'(A*).    (6.19)

From this it follows that u'(A*+) = G_1'(A*). On the other hand, if we multiply (6.17) by (S(A* − δ) − S(A*))/(−δ) we get

\frac{G_1(A* + ε) − G_1(A*)}{S(A* + ε) − S(A*)} \cdot \frac{S(A* − δ) − S(A*)}{−δ} ≤ \frac{u(A* − δ) − u(A*)}{−δ} = \frac{G_1(A* − δ) − G_1(A*)}{−δ}.    (6.20)

Interchanging ε and δ in (6.20) and using Lemma 6.5 again, this time with f(ε) := (S(A*) − S(A* − ε))/ε and g(δ) := (S(A* + δ) − S(A*))/δ, we get that d⁻u(A*)/dx = G_1'(A*). This assertion together with d⁺u(A*)/dx = G_1'(A*) proves the required result. By symmetry one can show that v'(B*) = G_2'(B*). □
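As a purely illustrative sketch (not taken from the paper), the double smooth-fit conditions of Proposition 6.6 can be used to locate candidate boundary points numerically when X is a standard Brownian motion: u and v are then affine on (A*, B*), and matching their slopes with G_1'(A*) and G_2'(B*) respectively yields two equations in the two unknowns A*, B*. The payoff functions below are hypothetical, chosen only so that the system has the simple root (A*, B*) = (−1, 1); no claim is made that they satisfy the standing assumptions of the paper or that the resulting first entry times form a Nash equilibrium.

from scipy.optimize import fsolve

# Hypothetical payoffs (illustration only, not from the paper)
G1 = lambda x: -x**2          # player one's payoff if he stops first
dG1 = lambda x: -2.0 * x
H1 = lambda x: 3.0            # player one's payoff if player two stops first
G2 = lambda x: -x**2          # player two's payoff if she stops first
dG2 = lambda x: -2.0 * x
H2 = lambda x: 3.0            # player two's payoff if player one stops first

def double_smooth_fit(z):
    """For Brownian motion (S(x) = x) u and v are affine on (A, B); the two
    equations below encode u'(A) = G1'(A) and v'(B) = G2'(B)."""
    A, B = z
    eq_u = (H1(B) - G1(A)) - dG1(A) * (B - A)   # slope of u equals G1'(A)
    eq_v = (G2(B) - H2(A)) - dG2(B) * (B - A)   # slope of v equals G2'(B)
    return [eq_u, eq_v]

A_star, B_star = fsolve(double_smooth_fit, x0=[-0.8, 0.8])
print(A_star, B_star)   # should be close to (-1, 1) for these payoffs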

Figure 2a, b show that there can exist more than one Nash equilibrium point. In this example, if both players cooperate and decide to stop the process in the regions D_1 and D_2 given in Fig. 2a, then their expected gains are higher than those earned if they stop the process in the regions given in Fig. 2b. However, this is not the case in the example presented in Fig. 2c, d, where it is evident that nothing is gained if the players cooperate. For a more detailed study of the existence and uniqueness of a Nash equilibrium in the case of absorbed Brownian motion in [0, 1], when the payoff functions G_i are of a similar form to those presented in Fig. 2, the reader is referred to [1].

Fig. 2 Examples for absorbed Brownian motion which show that uniqueness of Nash equilibrium fails in general

Acknowledgements The author is grateful to Professor Goran Peskir for introducing the topic of optimal stopping games, for the many fruitful discussions on the subtleties of Markov processes and the principles of smooth and continuous fit in zero-sum games, and for providing insight into the variational approach as a way of observing and understanding the principles of 'double smooth fit' and 'double continuous fit' in nonzero-sum games.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Attard, N.: Nash equilibrium in nonzero-sum games of optimal stopping for Brownian motion. Research Report No. 2, Probab. Statist. Group Manchester (2015)

2. Bensoussan, A., Friedman, A.: Nonlinear variational inequalities and differential games with stopping times. J. Funct. Anal. 16, 305-352 (1974)

3. Bensoussan, A., Friedman, A.: Nonzero-sum stochastic differential games with stopping times and free boundary value problem. Trans. Am. Math. Soc. 213, 275-327 (1977)

4. Boetius, F.: Bounded variation singular stochastic control and Dynkin game. SIAM J. Control Optim. 44, 1289-1321 (2005)

5. Bismut, J.M.: Sur un problème de Dynkin. Z. Wahrsch. Verw. Geb. 39, 31-53 (1977)

6. Blumenthal, R.M., Getoor, R.K.: Markov Processes and Potential Theory. Academic Press, New York (1968)

7. Carr, P., Geman, H., Madan, D., Yor, M.: The fine structure of asset returns: an empirical investigation. J. Bus. 75, 305-332 (2002)

8. Cattiaux, P., Lepeltier, J.P.: Existence of a quasi-Markov Nash equilibrium for nonzero-sum Markov stopping games. Stoch. Stoch. Rep. 30, 85-103 (1990)

9. Chen, N., Dai, M., Wan, X.: A nonzero-sum game approach to convertible bonds: tax benefit, bankruptcy cost and early/late calls. Math. Financ. 23(1), 57-93 (2013)

10. Chung, K.L., Walsh, J.B.: Markov Processes, Brownian Motion, and Time Symmetry. Springer, New York (2005)

11. Cvitanić, J., Karatzas, I.: Backward SDEs with reflection and Dynkin games. Ann. Probab. 24, 2024-2056 (1996)

12. De Angelis, T., Ferrari, G., Moriarty, J.: Nash equilibria of threshold type for two-player nonzero-sum games of stopping. (2015) arXiv:1508.03989

13. Dynkin, E.: Markov Processes, Volume 1. Academic Press, New York (1965)

14. Dynkin, E.: Game variant of a problem of optimal stopping. Soviet Math. Dokl. 10, 16-19 (1969)

15. Ekström, E.: Properties of game options. Math. Methods Oper. Res. 63, 221-238 (2006)

16. Ekström, E., Peskir, G.: Optimal stopping games for Markov processes. SIAM J. Control Optim. 47, 684-702 (2008)

17. Ekström, E., Villeneuve, S.: On the value of optimal stopping games. Ann. Appl. Probab. 16, 1576-1596 (2006)

18. Elbakidze, N.V.: Construction of the cost and optimal policies in a game problem of stopping a Markov process. Theory Probab. Appl. 21, 163-168 (1976)

19. Emmerling, T.J.: Perpetual cancellable American call options. Math. Financ. 22, 645-666 (2012)

20. Friedman, A.: Stochastic games and variational inequalities. Arch. Ration. Mech. Anal. 51, 321-341 (1973a)

21. Friedman, A.: Regularity theorem for variational inequalities in unbounded domains and applications to stopping time problems. Arch. Ration. Mech. Anal. 76, 134-160 (1973b)

22. Frid, E.B.: The optimal stopping rule for a two-person Markov chain with opposing interests. Theory Probab. Appl. 14, 713-716 (1969)

23. Fukushima, M., Taksar, M.: Dynkin games via Dirichlet forms and singular control of one-dimensional diffusions. SIAM J. Control Optim. 41, 682-699 (2002)

24. Gapeev, P.V.: The spread option optimal stopping game. In: Kyprianou, A., et al. (eds.) Exotic Option Pricing and Advanced Lévy Models, pp. 205-293. Wiley, New York (2006)

25. Gapeev, P.V., Kühn, C.: Perpetual convertible bonds in jump-diffusion models. Stat. Decis. 23, 15-31 (2005)

26. Gusein-Zade, S.M.: On a game connected with a Wiener process. Theory Probab. Appl. 14, 701-704 (1969)

27. Hamadène, S., Hassani, M.: BSDEs with two reflecting barriers: the general result. Probab. Theory Relat. Fields 132, 237-264 (2005)

28. Hamadène, S., Zhang, M.: The continuous time nonzero-sum Dynkin game problem and application in game options. SIAM J. Control Optim. 48(5), 3659-3669 (2010)

29. Huang, C.-F., Li, L.: Continuous time stopping games with monotone reward structures. Math. Oper. Res. 15, 496-507 (1990)

30. Kallsen, J., Kühn, C.: Pricing derivatives of American and game type in incomplete markets. Financ. Stoch. 9, 261-284 (2004)

31. Karatzas, I., Wang, H.: Connections Between Bounded-Variation Control and Dynkin Games. Volume in Honor of Professor Alain Bensoussan, pp. 353-362. IOS Press, Amsterdam (2001)

32. Kifer, Y.: Optimal stopping in games with continuous time. Theory Probab. Appl. 16, 545-550 (1971)

33. Kifer, Y.: Game options. Financ. Stoch. 4, 443-463 (2000)

34. Kühn, C.: Game contingent claims in complete and incomplete markets. J. Math. Econom. 40, 889-902 (2004)

35. Kühn, C., Kyprianou, A.E.: Callable puts as composite exotic options. Math. Financ. 17, 487-502 (2007)

36. Kühn, C., Kyprianou, A.E., Van Schaik, K.: Pricing Israeli options: a pathwise approach. Stochastics 79, 117-137 (2007)

37. Kunita, H.: Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. In: Rao, M.M. (ed.) Real and Stochastic Analysis: New Perspectives (Trends in Mathematics), pp. 305-373. Birkhäuser, Boston (2004)

38. Kyprianou, A.E.: Some calculations for Israeli options. Financ. Stoch. 8, 73-86 (2004)

39. Laraki, R., Solan, E.: The value of zero-sum stopping games in continuous time. SIAM J. Control Optim. 43, 1913-1922 (2005)

40. Laraki, R., Solan, E.: Equilibrium in two-player nonzero-sum Dynkin games in continuous time. Stochastics 85, 997-1014 (2012)

41. Lepeltier, J.P., Maingueneau, M.A.: Le Jeu de Dynkin en théorie générale sans l'hypothèse de Mokobodski. Stochastics 13, 25-44 (1984)

42. Morimoto, Y.: Nonzero-sum discrete parameter stochastic games with stopping times. Probab. Theory Relat. Fields 72, 155-160 (1986)

43. Nagai, H.: Nonzero-sum stopping games of symmetric Markov processes. Probab. Theory Relat. Fields 75, 487-497 (1987)

44. Neveu, J.: Discrete-Parameter Martingales. North-Holland Publishing Company, Amsterdam (1975)

45. Ohtsubo, Y.: Optimal stopping in sequential games with or without a constraint of always terminating. Math. Oper. Res. 11, 591-607 (1986)

46. Ohtsubo, Y.: A nonzero-sum extension of Dynkin's stopping problem. Math. Oper. Res. 12, 277-296 (1987)

47. Ohtsubo, Y.: On a discrete-time nonzero-sum Dynkin problem with monotonicity. J. Appl. Probab. 28, 466-472 (1991)

48. Peskir, G., Shiryaev, A.N.: Sequential testing problems for Poisson processes. Ann. Stat. 28, 837-859 (2000)

49. Peskir, G., Shiryaev, A.N.: Optimal Stopping and Free-Boundary Problems. Lectures in Mathematics. ETH Zürich, Birkhäuser, Basel (2006)

50. Peskir, G.: Principle of smooth fit and diffusions with angles. Stochastics 79, 239-302 (2006)

51. Peskir, G.: Optimal stopping games and Nash equilibrium. Theory Probab. Appl. 53, 558-571 (2008)

52. Peskir, G.: A duality principle for the Legendre transform. J. Convex Anal. 19, 609-630 (2012)

53. Sharpe, M.: General Theory of Markov Processes. Academic Press, New York (1988)

54. Shmaya, E., Solan, E.: Two-player nonzero-sum stopping games in discrete time. Ann. Probab. 32, 2733-2764 (2004)

55. Stettner, L.: Zero-sum Markov games with stopping and impulsive strategies. Appl. Math. Optim. 9, 1-24 (1982)